entry_id | published | title | authors | primary_category | categories | text
---|---|---|---|---|---|---|
http://arxiv.org/abs/2409.03336v1 | 20240905082836 | Estimating Indoor Scene Depth Maps from Ultrasonic Echoes | [
"Junpei Honma",
"Akisato Kimura",
"Go Irie"
] | cs.SD | [
"cs.SD",
"cs.CV",
"cs.MM",
"eess.AS"
] |
Global prescribed-time control of a class of uncertain nonholonomic systems by smooth
time-varying feedback
Kang-Kang Zhang, Bin Zhou, Chenchen Fan, James Lam, Fellow, IEEE
This work was supported by the National Science Fund for
Distinguished Young Scholars (62125303), the Science Center Program of the
National Natural Science Foundation of China (62188101), the
Fundamental Research Funds for the Central Universities
(HIT.BRET.2021008), and HKU CRCG (2302101740). (Corresponding author: Bin Zhou)
Kang-Kang Zhang is with the Department of Mechanical Engineering, University of Hong Kong,
Hong Kong, China, and the Department of Computer Science, KU Leuven, B-3001 Heverlee, Belgium; James Lam is with the Department of Mechanical Engineering, University of Hong Kong,
Hong Kong, China;
Bin Zhou is with the Center for Control Theory and Guidance Technology, Harbin Institute of Technology, Harbin, 150001, China; Chenchen Fan is with the Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China (email: [email protected], [email protected], [email protected], [email protected]).
Received ?? ; accepted ??
================================================================================
§ ABSTRACT
Measuring 3D geometric structures of indoor scenes requires dedicated depth sensors, which are not always available.
Echo-based depth estimation has recently been studied as a promising alternative solution.
All previous studies have assumed the use of echoes in the audible range. However, one major problem is that audible echoes cannot be used in quiet spaces or other situations where producing audible sounds is prohibited.
In this paper, we consider echo-based depth estimation using inaudible ultrasonic echoes.
While ultrasonic waves can in theory provide high measurement accuracy, the actual depth estimation accuracy achievable with ultrasonic echoes has remained unclear, because they are sensitive to noise and susceptible to attenuation.
We first investigate the depth estimation accuracy when the frequency of the sound source is restricted to the high-frequency band, and find that the accuracy decreases when the frequency is limited to the ultrasonic range.
Based on this observation, we propose a novel deep learning method to improve the accuracy of ultrasonic echo-based depth estimation by using audible echoes as auxiliary data only during training.
Experimental results with a public dataset demonstrate that our method improves the estimation accuracy.
Deep learning, echo-based depth estimation, ultrasonic echoes
§ INTRODUCTION
The geometric structure of a scene is essential for a variety of applications, including navigation, path planning for autonomous mobile robots, and spatial layout design for indoor scenes.
Measuring geometric structures requires specialized optical sensors to acquire the depth of a scene, such as infrared light, LiDARs, or specially configured camera devices such as stereo cameras.
However, such measurement devices are not always available as these are often costly and require strict setup conditions for accurate measurements.
While deep monocular depth estimation, which uses deep learning to estimate depth maps of scenes from monocular RGB images captured by ordinary cameras, has also been explored for a decade <cit.>, there are many spaces where cameras cannot be used, such as dark rooms or spaces with privacy protection or legal restrictions.
In this study, we consider echo-based depth estimation.
Suppose we have a microphone array consisting of multiple microphones at different spatial locations in the scene.
A known sound emitted from a sound source (i.e., a loudspeaker) is reflected by surfaces such as walls, windows, and furniture, and arrives at each microphone.
The time of arrival to each microphone depends on geometric properties of the surfaces in the scene. That is, the time difference of arrival of the echoes contains information about the geometric structure of the scene.
The problem of recovering the depth map from the echoes is an inverse problem and is difficult to solve analytically, hence is usually solved using deep learning.
Several efforts on echo-based depth estimation have been reported in the literature <cit.>.
These primarily focus on exploring effective network architectures for this task.
For example, the use of U-Net <cit.>, spatial pyramid pooling <cit.>, and bilinear attentions <cit.> have been investigated.
Besides these, multi-modal approaches combined with RGB images have also been discussed <cit.>.
However, one major drawback of the existing methods is that they all assume the use of audible echoes observed using audible sound sources.
Obtaining effective echoes that can stably acquire the 3D structure of an indoor scene requires generating sound loud enough to reverberate throughout the room.
Therefore, the existing methods cannot be used in rooms where generating audible sound is prohibited or where harmful effects on the surrounding environment or human body are concerned.
In this paper, we examine echo-based depth estimation using inaudible ultrasonic sound sources. To the best of our knowledge, this is the first work to explore ultrasonic echo-based depth estimation.
On one hand, an ultrasonic wave has a short wavelength, which has a theoretical potential to provide high measurement accuracy.
On the other hand, however, a critical drawback is that it is sensitive to noise or interference and tends to attenuate quickly.
Due to this nature of ultrasonic echoes, practical applications of ultrasonic measurements in air have been mainly limited to point measurements within a short distance range (typically < 1m), and the actual accuracy in depth estimation, which requires measurements of two-dimensional surfaces in a longer range (typically < 10m <cit.>), has remained unknown.
Therefore, we first conduct preliminary experiments to investigate how the depth estimation accuracy changes when the frequency of the sound source is gradually limited from the audible range to the ultrasonic range.
From the results we found that the estimation accuracy decreased when the frequency range was limited to the ultrasonic band only (discussed later in Sec. <ref>).
In light of the finding, we propose a novel deep learning method that uses audible echoes as auxiliary data only during training.
Our method generates synthetic echoes for training by linearly mixing the spectral information of ultrasonic and audible echoes <cit.>, and uses the synthetic echoes as auxiliary data (Fig. <ref>). This enables learning of a depth estimation network that is robust to missing audible frequency bands.
Experimental results with Replica <cit.>, which is one of the most popular public datasets for echo-based depth estimation, demonstrate that our method improves the depth estimation accuracy using ultrasonic echoes.
§ PRELIMINARY EXPERIMENTS
We first conduct preliminary experiments to assess the viability of ultrasonic sound sources in the context of depth estimation for indoor scenes.
Specifically, we evaluate the depth estimation accuracy when the frequency band of the sound source is gradually limited toward the ultrasonic band.
§.§ Dataset
We consistently use Replica <cit.>, one of the two standard public benchmark datasets for evaluating the accuracy of echo-based depth estimation <cit.>[The other dataset, Matterport3D <cit.>, does not provide room impulse responses that can reproduce ultrasonic bands so cannot be used for this paper.].
Replica contains a total of 18 indoor scenes covering hotels, apartments, office rooms, etc., and has been used in machine learning research on egocentric computer vision, 2D and 3D semantic segmentation, and navigation. For training and testing, we follow the official training-test split provided by the dataset publisher. The original Replica dataset publishes ground truth depth maps and binaural echoes for various spatial locations and orientations within each scene. However, the provided echoes are limited to the audible range up to 16,000 Hz, so experiments with ultrasonic echoes cannot be conducted.
We therefore synthesize ultrasonic echoes by using the room impulse responses (RIRs) associated with the Replica dataset.
The RIR is the echo observed at a certain location when an impulse signal is emitted at (another) location.
Hence, by convolving the RIR with the input sound emitted at the location of the sound source, the echo observed at a given location can be simulated.
More specifically, let h(t) and x(t) denote the RIR and the sound emitted at the location of the sound source, respectively. The echo y(t) is simulated by the following equation.
y(t) = ∑^t_k=0 h(t-k)x(k)
We use a chirp sweeping from 1 Hz to 22,050 Hz in 0.05 seconds as the sound source and apply a high-pass filter to it to limit the frequency range. To account for higher-order reverberation components, we use a sampling frequency of 44,100 Hz and a sufficiently long recording time (0.12 seconds).
The synthesized echoes are used as input for estimation, and the depth map provided by the original Replica dataset is used as the ground truth output.
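For illustration, a minimal sketch of this echo synthesis step is given below. It assumes a NumPy/SciPy environment and a hypothetical RIR file name; it is a sketch of the procedure described above, not the authors' implementation.

```python
import numpy as np
from scipy.signal import chirp, fftconvolve, butter, sosfilt

FS = 44_100          # sampling frequency [Hz]
CHIRP_DUR = 0.05     # duration of the chirp sweep [s]
REC_DUR = 0.12       # recording time, long enough for higher-order reverberation [s]

def synthesize_echo(rir, cutoff_hz):
    """Convolve a band-limited chirp with a room impulse response (RIR).

    rir: 1-D array, impulse response from the source to one microphone.
    cutoff_hz: high-pass cutoff used to restrict the frequency band of the source.
    """
    t = np.arange(0, CHIRP_DUR, 1.0 / FS)
    x = chirp(t, f0=1, f1=FS / 2, t1=CHIRP_DUR)          # 1 Hz -> 22,050 Hz sweep
    if cutoff_hz > 1:                                     # band-limit the source
        sos = butter(8, cutoff_hz, btype="highpass", fs=FS, output="sos")
        x = sosfilt(sos, x)
    y = fftconvolve(x, rir)                               # y(t) = sum_k h(t-k) x(k)
    return y[: int(REC_DUR * FS)]                         # truncate to 0.12 s

# Usage (hypothetical file holding one of the Replica RIRs):
# rir_left = np.load("rir_left.npy")
# echo_left = synthesize_echo(rir_left, cutoff_hz=20_000)
```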
§.§ Depth Estimation Method
We design our depth estimation framework shown in Fig. <ref> by following the state-of-the-art echo-based depth estimation method <cit.>. Note that several more recent echo-based depth estimation methods have been proposed <cit.>, but these methods are not applicable to our problem because they assume that special images other than echoes are available as input (stereo images <cit.>, RGB-D images <cit.>, and spherical images <cit.>).
First, the short-time Fourier transform (STFT) is applied to the binaural echoes generated by the procedure described in Sec. <ref> to get the spectrograms of the observed echoes.
The upper limit of the effective frequency is 22,050 Hz according to the sampling theorem. The depth map is estimated from the obtained spectrograms by a depth estimation network. For the network architecture, we use exactly the same architecture as EchoNet, used in a recent echo-based depth estimation method <cit.>, which is an encoder-decoder CNN consisting of three convolution layers and seven deconvolution layers. We train it with Adam for 300 epochs with a batch size of 8 and a learning rate of 0.0001.
The CNN is trained to recover the ground truth depth map from the spectrograms by minimizing the RMSE between the estimated and the ground truth depth maps.
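As a rough sketch (not the authors' implementation), the spectrogram extraction and training loop could look as follows; the `EchoNet` model and the data loader are assumed placeholders, and the STFT parameters are illustrative.

```python
import torch
import torch.nn.functional as F

def echo_to_spectrogram(echo, n_fft=512, hop_length=128):
    """STFT magnitude spectrograms of one binaural echo, shape (2, freq, time)."""
    spec = torch.stft(echo, n_fft=n_fft, hop_length=hop_length,
                      window=torch.hann_window(n_fft), return_complex=True)
    return spec.abs()

def train(model, loader, epochs=300, lr=1e-4, device="cpu"):
    """Minimize the RMSE between predicted and ground-truth depth maps."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for echo, depth_gt in loader:                      # echo: (batch, 2, samples)
            spec = torch.stack([echo_to_spectrogram(e) for e in echo]).to(device)
            depth_pred = model(spec)                       # e.g., an EchoNet-style CNN
            loss = torch.sqrt(F.mse_loss(depth_pred, depth_gt.to(device)))
            opt.zero_grad()
            loss.backward()
            opt.step()
```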
§.§ Results
We evaluate the depth estimation accuracy in terms of RMSE between the estimated and ground truth depth maps. We basically follow <cit.> on the training protocols.
We report the average of five runs with different random seeds.
To evaluate the estimation accuracy with echoes in different frequency bands, we use the following eight cutoff settings of the high-pass filter: 1 Hz, 15,000 Hz, 17,500 Hz, 19,000 Hz, 19,500 Hz, 20,000 Hz, 21,000 Hz, 22,000 Hz. 20,000 Hz and above are considered ultrasonic, so three out of the eight settings of the bands correspond to the ultrasonic cases.
The results are shown in Fig. <ref>. Up to 19,500 Hz, the accuracy tends to be improved (i.e., RMSE decreases) as the frequency band is limited.
This may be due to the dominance of the high-frequency band, which has high measurement accuracy in theory.
However, as the frequency band is further restricted above 20,000 Hz, the accuracy is observed to decrease.
The reason may be due to a decrease in power in the ultrasonic band due to the effect of attenuation and a decrease in the amount of information due to the limitation of the frequency band.
To conclude, we observed that the depth estimation accuracy decreased with the ultrasonic band only, and improved with a slightly lower frequency band included.
§ METHOD
Based on the results of the preliminary experiments reported above, we propose a novel deep learning method to improve ultrasonic echo-based depth estimation.
The overview of the proposed method is illustrated in Fig. <ref>.
The key idea is to use audible echoes obtained from lower-frequency audible sounds only during training, with the aim of obtaining a depth estimation network robust to missing lower-frequency bands.
§.§ Auxiliary Echo Generation
The core of the proposed method is to generate an “augmented echo" synthesized by combining the spectrograms of the ultrasonic echo obtained from an ultrasonic source and of an auxiliary lower-frequency echo obtained from an audible source.
The combination is performed in the Mixup data augmentation manner <cit.> as follows.
X_a = α X_u + (1 - α) X_l,
Y_a = α Y_u + (1 - α) Y_l.
X_u, X_l are the two spectrograms to be mixed; in our method, X_u and X_l are the spectrograms of the ultrasonic echo and auxiliary lower-frequency echo, respectively.
Y_u, Y_l are the corresponding ground truth depth maps.
X_a, Y_a are the synthesized spectrogram of the augmented echo and the ground truth depth map, respectively.
α is the mixing ratio drawn from the uniform distribution on [0, 1].
Note that our method always mixes two echoes observed at the same location and orientation, so the ground truth depth maps to be mixed are identical, i.e., Y_a = Y_u = Y_l. Hence, it is not necessary to explicitly mix the depth maps.
Since there is a concern that mixing two spectrograms with significantly different frequency bands may not provide effective augmented echoes for training, the proposed method limits the bandwidth difference between X_u and X_l to 1,000 Hz or less.
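A minimal sketch of this augmentation step (in our notation, not the authors' code) is given below; the band-limit check mirrors the 1,000 Hz constraint mentioned above.

```python
import torch

def mix_echoes(spec_ultra, spec_low, cutoff_ultra_hz, cutoff_low_hz,
               max_band_gap_hz=1_000):
    """Mixup-style blending of ultrasonic and auxiliary lower-frequency echo
    spectrograms. Returns the augmented spectrogram X_a and the mixing ratio
    alpha; the depth target is unchanged because Y_a = Y_u = Y_l."""
    assert cutoff_ultra_hz - cutoff_low_hz <= max_band_gap_hz, \
        "bandwidth difference between X_u and X_l must stay within 1,000 Hz"
    alpha = torch.rand(1).item()                 # alpha ~ Uniform[0, 1]
    spec_aug = alpha * spec_ultra + (1.0 - alpha) * spec_low
    return spec_aug, alpha
```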
§.§ Loss Configuration
Our depth estimation network is trained with the RMSE loss. However, training the network with only the augmented echoes is not desirable.
The final goal is to improve the depth estimation accuracy when only the ultrasound echo is fed into the network.
Meanwhile, the synthesized augmented echoes always contain audible band spectra (except for the rare case where α=1), which are useful in the early stages of training but not suitable for its later stages.
Based on this idea, the proposed method uses a total loss function defined as a weighted combination of two (sub-)loss functions, one for ultrasonic echoes and the other for augmented echoes, and schedules the weight as the learning proceeds.
Let ℒ_u(X_u, Y_u) be the loss function for ultrasonic echoes and ℒ_a(X_a, Y_a) be that for augmented echoes. Our total loss function ℒ(X_u, X_a, Y_a) is defined as:
ℒ(X_u, X_a, Y_a) = λℒ_a(X_a, Y_a) + (1-λ) ℒ_u(X_u, Y_a),
where λ is a hyperparameter to control the balance between the two loss functions.
This configuration allows us to always evaluate the loss when only ultrasonic echoes are used. λ is scheduled as the learning progresses by changing from λ=1 to 0 in a linear scheduling manner.
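The scheduled total loss could be sketched as follows, assuming an epoch-based linear schedule for λ and an RMSE sub-loss; this is an illustration of the equation above rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def rmse(pred, target):
    return torch.sqrt(F.mse_loss(pred, target))

def total_loss(model, spec_ultra, spec_aug, depth_gt, epoch, num_epochs):
    """L = lambda * L_a(X_a, Y) + (1 - lambda) * L_u(X_u, Y),
    with lambda scheduled linearly from 1 to 0 over training."""
    lam = 1.0 - epoch / max(num_epochs - 1, 1)        # linear schedule: 1 -> 0
    loss_aug = rmse(model(spec_aug), depth_gt)        # augmented-echo branch
    loss_ultra = rmse(model(spec_ultra), depth_gt)    # ultrasonic-only branch
    return lam * loss_aug + (1.0 - lam) * loss_ultra
```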
§ EXPERIMENTS
We evaluate the depth estimation accuracy of our method for three frequency settings of the ultrasound band, i.e., when the cutoff frequency of the high-pass filter is set to 20,000 Hz, 21,000 Hz, and 22,000 Hz. The cutoff frequencies of the corresponding auxiliary lower-frequency echoes are set to 19,500 Hz, 20,000 Hz, and 21,000 Hz.
For comparison, we evaluate two baselines: the method in <cit.> applied to ultrasonic and augmented echoes, respectively.
Other experimental conditions are the same as those used in our preliminary experiments (see Sec. <ref>).
§.§ Quantitative Results
The quantitative results are shown in Fig. <ref>.
Our method achieves the best accuracy for all the settings.
First, <cit.> with augmented echoes surpasses that with ultrasonic echoes, which demonstrates the effectiveness of using augmented echoes during training.
Second, our method is better than <cit.> with augmented echoes. This is because low-frequency components are always mixed into the training data in <cit.> with augmented echoes, resulting in overfitting to low-frequency bands that are not actually used for estimation.
These results verify the effectiveness of the proposed method.
§.§ Qualitative Results
A few examples of depth maps estimated by the proposed method are shown in Fig. <ref>. The depth maps estimated by our method are closer to the ground truth than those estimated by <cit.> with ultrasonic echoes. This result further emphasizes the superiority of the proposed method.
§ CONCLUSION
We explored ultrasonic echo-based depth estimation. We proposed a novel method of transferring knowledge of audible sound to ultrasound, based on the solid support of our analysis (Fig. <ref>). As the first paper addressing ultrasound-based depth estimation, we believe that this paper can provide a new direction to the community. Performance evaluation on real datasets based on our proposed method is a future work.
Acknowledgment. This work was partially supported by JSPS KAKENHI Grant Number 23K11154.
|
http://arxiv.org/abs/2409.03629v1 | 20240905154151 | Critical transition between intensive and extensive active droplets | [
"Jonathan Bauermann",
"Giacomo Bartolucci",
"Job Boekhoven",
"Frank Jülicher",
"Christoph A. Weber"
] | cond-mat.soft | [
"cond-mat.soft",
"physics.bio-ph"
] |
Department of Physics, Harvard University, Cambridge, MA 02138, USA
Department of Physics Universitat de Barcelona, Carrer de Martí i Franquès 1-11, 08028 Barcelona, Spain
School of Natural Sciences, Department of Bioscience,
Technical University of Munich, Lichtenbergstraße 4, 85748 Garching, Germany
Max Planck Institute for the Physics of Complex Systems,
Nöthnitzer Straße 38, 01187 Dresden, Germany
Center for Systems Biology Dresden, Pfotenhauerstraße 108, 01307 Dresden, Germany
Cluster of Excellence Physics of Life, TU Dresden, 01062 Dresden, Germany
Faculty of Mathematics, Natural Sciences, and Materials Engineering: Institute of Physics, University of Augsburg, Universitätsstraße 1, 86159 Augsburg, Germany
§ ABSTRACT
Emulsions ripen with an average droplet size
increasing in time.
In chemically active emulsions,
coarsening can be absent, leading to a non-equilibrium steady state with mono-disperse droplet sizes.
By considering a minimal model for phase separation and chemical reactions maintained away from equilibrium, we show that there is a critical transition in the conserved quantity between two classes of chemically active droplets:
intensive and extensive ones.
Single intensive active droplets reach a stationary size mainly controlled by the reaction-diffusion length scales.
Intensive droplets in an emulsion interact only weakly, and the stationary size of a single droplet approximately sets the size of each droplet.
On the contrary, the size of a single extensive active droplet scales with the system size, similar to passive phases. In an emulsion of many extensive droplets, their sizes become stationary only due to interactions among them.
We discuss how the critical transition between intensive and extensive active droplets affects shape instabilities, including the division of active droplets, paving the way for the observation of successive division events in chemically active emulsions.
Critical transition
between intensive and extensive active droplets
Christoph A. Weber
September 9, 2024
===========================================================================
§ INTRODUCTION
Coarsening or ripening refers to the growth of larger domains at the expense of smaller domains that eventually shrink. For passive systems, the kinetics of coarsening stops when the system reaches equilibrium, corresponding to a single domain in a finite system. Coarsening occurs in various systems, ranging from spin systems <cit.> and liquid emulsions <cit.> to crystallized precipitates <cit.>. The kinetics of coarsening is universal and determined by conservation laws, symmetries, and the dimension of the system <cit.>.
For active systems persistently maintained away from equilibrium <cit.>, the kinetics of coarsening is altered and can even be suppressed <cit.>. The paradigm is reaction-diffusion systems that give rise to non-equilibrium steady-state patterns with various spatial morphologies <cit.>. Another example is Model B+, which, in contrast to the classical Model B, also accounts for contributions to the diffusive fluxes that do not arise from a free energy <cit.>. These fluxes give rise to anti-coarsening with a condensed phase that stops growing and a “bubbly” morphology of material-poor domains <cit.>. Finally, suppressed ripening was also observed in liquid-liquid phase-separated systems with chemical reactions maintained away from equilibrium <cit.>. These systems are also called chemically active emulsions <cit.>.
The formation of steady-state patterns in reaction-diffusion systems relies on the reaction flux that breaks the detailed balance of the rates. Together with diffusion, this gives rise to various reaction-diffusion length scales that are crucial but not exclusively responsible for pattern morphology. Chemical processes generically come with conservation laws for mass, and if incompressible, also for volume. It has been shown that conservation laws are key determinants for the emerging patterns in mass-conserving reaction-diffusion systems <cit.>. Key implications are that the pattern-forming transition is typically sub-critical, reminiscent of discontinuous phase transitions <cit.>.
Active emulsions also give rise to reaction-diffusion length scales. These length scales are crucial for various non-equilibrium phenomena in active emulsions, such as dividing droplets <cit.>, the formation of steady liquid shells <cit.>, and the suppression of Ostwald ripening <cit.>. Despite such interesting phenomena, a limitation of some minimal models <cit.> is that only two components were considered, which lack a non-trivial conservation law. Similar to reaction-diffusion systems <cit.>, conservation laws can qualitatively alter the nature of the transition and instabilities in active emulsions.
In passive, phase-separated systems with chemical reactions, the reaction-diffusion length scales do not determine the equilibrium state.
In this passive case, conservation laws, i.e., so-called lever rules for quantities conserved by the chemical reactions, determine the volume of the condensed phase(s) at thermodynamic equilibrium. When the chemical reactions are maintained away from equilibrium (active emulsion), the emerging reaction-diffusion length scales compete with the conservation laws. It remains unclear whether such reaction-diffusion scales or the conservation law dominate the selection of pattern length scales in active emulsions.
Here, we study the role of conservation laws in multi-component mixtures for the droplet size and the droplet number in monodisperse active emulsions that do not undergo coarsening.
Our key finding is that the conserved quantity controls a critical transition between
intensive and extensive chemically active droplets that differ in their physical behavior when increasing the system size; see Fig. <ref> for an illustration. In the case of intensive droplets, single droplets in large systems are stationary, with droplet sizes independent of system size.
For extensive droplets, the stationary droplet size increases with the system size.
We study the consequences of this transition for the collective dynamics of many droplets in emulsions and show how monodispersity can arise for both classes of chemically active droplets.
§ MINIMAL MODEL FOR AN ACTIVE EMULSION WITH A CONSERVED QUANTITY
§.§ Dynamics of the concentration fields
An incompressible binary mixture with two molecules A and B converting into each other via a chemical reaction has a conserved quantity, the total mass of A and B. However, in a binary mixture, this conserved quantity is trivial as it is constant in space and thus does not change dynamically <cit.>.
A mixture containing chemical reactions must comprise at least three different molecules to have a conserved quantity that can vary in space.
In the following, we introduce a minimal model for such a ternary mixture that is incompressible and composed of a non-reacting solvent S and molecules of types A and B, which can react via the reaction scheme
A ⟶ B with rate coefficient k_BA ,
B ⟶ A with rate coefficient k_AB .
For this chemical reaction,
the dynamics of the average concentrations ϕ̅_i(t)=V^-1∫_V d^3x ϕ_i(x,t) (i=A,B,S) is constrained to a specific conserved line in the ϕ̅_A-ϕ̅_B-plane, which can be expressed as
ψ̅ = (ϕ̅_A+ϕ̅_B)/2
where V denotes the volume of the system and ϕ_i(x,t) are the concentration fields that depend on position x and time t. The conserved quantity is denoted as ψ̅.
The dynamical equations for the fields ϕ_A(x,t) and ϕ_B(x,t) are
∂_t ϕ_A = ∇·( Γ_A ∇μ_A )+ k_ABϕ_B - k_BAϕ_A
∂_t ϕ_B = ∇·( Γ_B ∇μ_B ) - k_ABϕ_B + k_BAϕ_A
where μ_i = δ F /δϕ_i
are the chemical potentials of i=A,B, derived from the Helmholtz free energy F=∫_V d^3x ( f_0+ κ_A (∇ϕ_A)^2/2 + κ_B (∇ϕ_B)^2/2 ) <cit.>. Here, Γ_i denote the mobility coefficients.
The first terms of Eqs. (<ref>) are diffusive fluxes driven by local gradients of the chemical potentials, while the remaining terms, k_ABϕ_B - k_BAϕ_A, are chemical fluxes. For passive systems,
the reaction rate coefficients k_ij are concentration dependent such that the free energy F determines thermodynamic equilibrium <cit.>. If, however, the k_ij also depend on some external reservoir or are chosen independently of F, the system is inherently chemically active <cit.>.
Without chemical reactions, the dynamic system governed by Eqs. (<ref>) relaxes to phase equilibrium, characterized by equal chemical potentials μ_i^I=μ_i^II, and equal osmotic pressures
Π^I=Π^II, where Π= -f+ ∑_i=A,Bϕ_i μ_i, see Ref. <cit.> for a general introduction.
Such phase equilibria are also relevant for the dynamic system with chemical reactions since
they locally govern the dynamics of the interface.
The phase equilibria of the mixture are portrayed by the phase diagram.
In ph_d(a), we show sketches for three different mixtures where
A always phase separates from S when B is absent.
The molecular interactions of B with A and S, shape the phase diagram.
In the left panel of ph_d(a), we show a sketch of a phase diagram where B molecules are similar to S and phase separate from A; in the central panel, B molecules are neutral and do not interact differently with A or S; in the right panel, B molecules are identical to A molecules and therefore phase separate from the solvent.
To mimic these interaction characteristics and to avoid tying our conclusions to specific free energy models, in the following, we take a geometrical view on phase diagrams.
Phase equilibria are characterized by the slope of the tie lines in the phase diagram.
The relevant geometrical property of the phase diagrams that we consider is the local angle α between the tie lines formed with ϕ_A-axes of the phase diagram (ph_d(b)).
This local, geometrical view on a phase diagram allows us to consider free energies that are mathematically simpler even than mean field theories such as the Flory-Huggins theory, allowing us to obtain analytical results for stationary droplet states.
For ease of presentation, we consider cases where tie lines are locally orthogonal to both the dense and dilute branches of the binodal line.
These geometrical properties can be captured by introducing the shifted concentration field ϕ_i →ϕ_i-ϕ̅_i and the following Ginzburg-Landau-like free energy density f_0:
f_0(ϕ_A, ϕ_B; α) = b_1/2 (ϕ_1 + ϕ_0)^2 (ϕ_1 - ϕ_0)^2 + b_2/2 ϕ_2^2 ,
with the rotated fields
ϕ_1 = cos(α) ϕ_A + sin(α) ϕ_B ,  ϕ_2 = -sin(α) ϕ_A + cos(α) ϕ_B ,
where b_1 and b_2 are parameters characterizing interactions and entropic contributions.
The qualitative cases shown in ph_d(a) can be described
by varying the angle α that parameterizes the rotations of the energy density f_0.
Specifically, different values of α correspond to different types of interactions of the B molecules with A and S.
Without loss of generality, we restrict ourselves to the domain -π/4 ≤α≤π/4. Indeed, due to the symmetry of the free energy, the transformation α' = α + π/2 is equivalent to re-labeling A to B and B to A.
For α = -π/4, B and A interact equally with the solvent, for α = 0, B does not interact with A and solvent, and for α = π/4, B and S interact equally with A.
As a consequence, in the case α = -π/4, the two reactants A and B localize in two distinct phases, i.e., they segregate. In contrast, for α = +π/4, molecules of type A and B phase separate together from the solvent S. Following Refs. <cit.>, we refer to α = -π/4 as the segregative case and α = +π/4 as associative case. In the literature on mixtures composed of a large number of components, these two cases are often called demixing and condensation, respectively <cit.>.
In the literature of biomolecular condensates, the shape of the phase diagrams is often related to the underlying molecular interactions.
According to Ref. <cit.>, heterotypic interactions correspond to the segregative case and homotypic interactions to the associative case.
Since the molecular interactions determine the composition of the coexisting phases, we refer to α as the compositional angle.
Varying α in the range -π/4<α<π/4 interpolates between the limits of the segregative and associative case.
In ph_d(b), we show the phase diagram associated with the free energy density f_0 in f.
Note that the binodal lines (dark blue) separating the mixing from the demixing regimes (white and blue shaded area, respectively) are straight lines tilted by the compositional angle α from the vertical.
The tie lines (represented as dashed blue lines) are parallel to each other, perpendicular to the binodal, and thus tilted by the compositional angle α from the horizontal.
Finally, we comment on the role of ϕ_0 in f (ph_d(b)):
the value ϕ_0 sets the scale of the concentration axis since for cos(α)ϕ_A + sin(α) ϕ_B inside the interval [-ϕ_0, ϕ_0], the passive system can phase separate.
In the following, we consider constant chemical rate coefficients k_ij that are independent of the free energy F. As a result, the system is chemically active since in general no phase equilibrium of coexisting phases exists where the chemical reactions are at a steady state in all phases.
These homogeneous steady states of chemical reactions are governed by the condition
k_ABϕ_B = k_BAϕ_A
defining the reaction nullcline shown as a solid green line in ph_d(b).
We refer to the angle β between tie lines and the reaction nullcline with
β = arctan(k_BA/k_AB) - α
as the activity parameter. This is because it
characterizes the strength of non-equilibrium driving.
Indeed, only for the special case β = 0,
the system is passive and can settle to thermodynamic equilibrium because the coexisting concentrations (phase equilibrium) are also connected by the
reaction nullcline (Eq. (<ref>)). Therefore, for β=0, the phase equilibrium and chemical equilibrium coincide.
For β≠ 0, coexisting concentrations connected by a tie line do not lie on the reaction nullcline.
Thus, no stationary state of reactions (reaction_nullcline) simultaneously fulfils phase equilibrium. Instead, reaction fluxes and diffusive flux between the phases balance each other leading to a non-equilibrium steady state.
The activity parameter is restricted to the domain 0 ≤β < π/4 because for β = π/4 the reaction nullcline becomes parallel to the tie lines.
In summary, the active emulsions with the chemical reaction (<ref>)
is characterized, when using a geometrical representation of the phase diagram (f), by the
compositional angle α, the activity parameter β, and the conserved quantity ψ̅ (Eq. (<ref>)).
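To make the model concrete, the following is a minimal 1D numerical sketch of Eqs. (dyn_phi) with an explicit Euler scheme; all parameter values, the discretization, and the initial condition are illustrative assumptions and not taken from this work.

```python
import numpy as np

# Illustrative parameters (assumptions for demonstration only)
L, N, dt = 60.0, 128, 5e-4
dx = L / N
b1, b2, phi0, kappa = 1.0, 1.0, 1.0, 2.0
Gamma, K, alpha, beta = 1.0, 0.1, 0.0, np.pi / 8

# One consistent choice of rate coefficients from K, alpha, beta
k_BA = K * np.tan(alpha + beta) / (1.0 + np.tan(alpha + beta))
k_AB = K - k_BA

def lap(f):
    """Periodic finite-difference Laplacian in 1D."""
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx**2

def chemical_potentials(phiA, phiB):
    """mu_i = df_0/dphi_i - kappa * laplacian(phi_i) for the rotated free energy."""
    p1 = np.cos(alpha) * phiA + np.sin(alpha) * phiB
    p2 = -np.sin(alpha) * phiA + np.cos(alpha) * phiB
    df1 = 2.0 * b1 * p1 * (p1**2 - phi0**2)   # df_0/dphi_1 (double well)
    df2 = b2 * p2                             # df_0/dphi_2 (quadratic)
    muA = np.cos(alpha) * df1 - np.sin(alpha) * df2 - kappa * lap(phiA)
    muB = np.sin(alpha) * df1 + np.cos(alpha) * df2 - kappa * lap(phiB)
    return muA, muB

# Initial condition: a single droplet-like bump in phi_A (shifted fields)
x = np.linspace(0, L, N, endpoint=False)
phiA = -0.8 + 1.6 * np.exp(-((x - L / 2) / 5.0) ** 2)
phiB = np.zeros_like(phiA)

for _ in range(20_000):
    muA, muB = chemical_potentials(phiA, phiB)
    reaction = k_AB * phiB - k_BA * phiA
    phiA += dt * (Gamma * lap(muA) + reaction)
    phiB += dt * (Gamma * lap(muB) - reaction)
```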
§.§ Dynamics of the conserved and non-conserved fields
To understand the role of conserved quantities in the ripening kinetics, we introduce the conserved field ψ(x,t) and the reaction extent field ξ(x,t), which are defined as:
ψ(x,t) =(ϕ_A(x,t) + ϕ_B(x,t))/2
ξ(x,t) = (ϕ_A(x,t) - ϕ_B(x,t))/2
For the Ginzburg-Landau type of free energy
(f), we consider a constant mobility for simplicity and also write Γ_i = Γ, leading to:
∂_t ψ = Γ∇^2 μ_ψ (ψ,ξ; α)
∂_t ξ = Γ∇^2 μ_ξ (ψ,ξ; α) - K ξ + K cot(π/4 + α + β) ψ
where we have introduced the chemical potentials of the conserved quantity, μ_ψ =(μ_A +μ_B )/2, and the one of the non-conserved quantity, μ_ξ = (μ_A-μ_B)/2.
When recasting the free energy f_0(ψ,ξ) in terms of the conserved and non-conserved fields, these chemical potentials can be obtained from functional derivatives, μ_ψ=δ F/δψ and μ_ξ=δ F/δξ.
Furthermore, we define the overall reaction rate as
K= k_AB + k_BA
In Appendix <ref>, we show how
the rate coefficients k_AB and k_BA can be expressed in terms of the overall rate K, the activity parameter β and the compositional angle α.
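For completeness, these relations follow directly from the definitions of K (overall_reaction_rate) and β given above; the short derivation below is our own reconstruction, and the Appendix should be consulted for the authoritative form. From β = arctan(k_BA/k_AB) - α and K = k_AB + k_BA, one obtains
k_AB = K/(1 + tan(α+β)) ,  k_BA = K tan(α+β)/(1 + tan(α+β)) ,
so that the reaction contribution to ∂_t ξ reads
k_AB ϕ_B - k_BA ϕ_A = -K ξ + K (1 - tan(α+β))/(1 + tan(α+β)) ψ = -K ξ + K cot(π/4 + α + β) ψ .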
We can identify two special cases where the chemical potentials of the conserved field and of the reaction extent field decouple, i.e., the free energy density is devoid of coupling terms proportional to (ψ ξ):
(i) For the segregative case α=-π/4, we find f_0(ψ,ξ) = b_2 ψ^2/4 + b_1(ξ-√(2)ϕ_0)^2(ξ+√(2)ϕ_0)^2/8. Thus, the free energy density contribution in ξ is a classical double-well potential, which leads to a Cahn-Hilliard dynamics with reactions for the reaction extent field ξ( x, t) (dyn_xi). The quadratic term of ψ to the free energy density gives rise to Fick's law of diffusion for the conserved field ψ( x,t) ((dyn_psi)).
(ii) On the contrary, in the associative case, α=π/4 gives a free energy density f_0(ψ,ξ) = b_1(ψ-√(2)ϕ_0)^2(ψ+√(2)ϕ_0)^2/8 + b_2 ξ^2/4. In this case, the free energy density leads to a Cahn-Hilliard dynamics for the conserved field ψ( x,t) (dyn_psi), while the reaction extent field ξ( x, t) follows Fick's law of diffusion with linear reactions ((dyn_xi)).
Though the dynamics of the conserved field ψ evolves independently from the reaction extent field in both cases (i) and (ii),
the dynamics of the reaction extent is affected by the conserved field ψ( x,t) via a source or a sink term in the reaction dynamics (dyn_xi).
The ripening dynamics of these two special cases α=±π/4 is known and has been extensively studied in the past:
(i) For the segregative case α = -π/4 (A and B segregate in different phases), the conserved variable ψ follows a simple diffusion dynamics that is agnostic to the chemical reactions and the dynamics of the reaction extent ξ.
Thus, the field ψ settles in a homogeneous state on long-time scales.
Moreover, the dynamics of ξ is the dynamics of a binary mixture composed of A and B with active chemical reactions between these two components. This leads to the suppression of Ostwald ripening, i.e., a mono-disperse emulsion with a stationary droplet size <cit.>.
Once the droplet size exceeds this stationary value,
droplets shrink or can undergo shape instabilities, giving rise to droplet division <cit.>, or form stationary, active shells <cit.>.
(ii) For the associative case α = π/4 (A and B phase separate together), the conserved variable ψ follows a classical Cahn-Hilliard dynamics. Thus, droplet-like domains, e.g., rich in ψ, undergo Ostwald ripening in a sea of low ψ, where bigger droplets grow at the expense of smaller shrinking ones that eventually disappear.
On large time scales and in a finite system, the system evolves toward a single droplet with a volume that scales with the system size V.
The dynamics of the reaction extent ξ
is affected by the ψ field without disturbing it.
A similar decoupling occurs in models for reacting diluted “client” particles corresponding to ξ and phase separating scaffold components corresponding to ψ <cit.>.
In the remaining general cases -π/4<α<π/4, the dynamics of the conserved field ψ and the reaction extent field ξ are coupled in the free energy.
The key property as compared to the special cases (i,ii) is that the dynamics of the conserved field ψ is influenced by the non-conserved field ξ. This coupling is a generic consequence of the different interactions among the molecular constituents of the mixture (see ph_d(a,b) and related discussions).
Therefore, the active chemical reactions in the dynamic equation for the reaction extent ξ (dyn_xi) also affect the dynamics of the conserved field ψ (dyn_psi).
In the next section, we will show that, strikingly, this remains relevant in the thermodynamic limit of large systems (V →∞), i.e., on length scales much larger than the reaction-diffusion length scale.
Note that solely the non-conserved field ξ gives rise to a reaction-diffusion length scale λ = √(D/K), which can be obtained by linearizing near phase equilibrium. Here, D = Γ∂^2 f_0(ψ^±, ξ^±)/ ∂ξ^2, which can be calculated in our model:
D = √((Γ[ b_2+ 4 b_1ϕ_0^2 +(b_2- 4 b_1ϕ_0^2)sin(2α)])/4).
The reaction-diffusion length scale λ is finite and real (for Γ>0, K >0) for all values -π/4 ≤α≤π/4 and independent of β and the conserved quantity ψ̅.
§ STATIONARY SINGLE DROPLETS
To study the effects of the conserved quantity ψ̅ on the droplet size, we first consider a single spherical droplet in a finite, spherically-symmetric system of radius R_sys=( 3V/(4π))^1/3, where V is the system volume.
We consider a sharp interface limit, which is valid when the droplet size is large compared to the width of the interface <cit.>.
Phase equilibrium is imposed at the position of this sharp interface. This boundary condition couples the reaction-diffusion equations in each phase. These reaction-diffusion equations can be linearized near phase equilibrium at the interface; for details, see Appendix <ref> and Ref. <cit.>.
In the following, we discuss the stationary solutions for a single droplet, i.e., ∂_t ψ(x,t) =0 and ∂_t ξ(x,t)=0, and a stationary interface position R_stat.
For large enough system sizes R_sys, we always find two solutions for R_stat (solid and dashed lines in system_size(a)) at which the total reaction flux of every component in one phase is perfectly balanced by the diffusive flux of the same component over the interface.
The lower branch (dashed) of these two solutions is the critical nucleation radius corresponding to an unstable fixed point. The upper branch (solid) of R_stat is a stable fixed point corresponding to a non-equilibrium steady state.
See Appendix <ref>, analytics_Rstat, for an analytical expression of the stationary radius as a function of the activity parameter β and the compositional angle α for large system sizes.
The conserved quantity ψ̅ (cons_quantity) affects the non-equilibrium steady state and leads to a changed behavior of the stationary radius R_stat as a function of the system size R_sys.
This changed behavior is depicted in system_size(a) which shows R_stat(R_sys) for two different values of the conserved quantity ψ̅ = -0.61 (blue) and ψ̅=-0.425 (red) in a system with α=0 and β=π/8.
Interestingly, in the case of ψ̅= -0.61 (blue), the stable stationary solution converges to a finite value in the limit of large systems, i.e., R_sys→∞, while for ψ̅= -0.425 (red), the solution scales linearly with the system size, for large systems. In other words, there are two different cases:
(i) Intensive active droplets (blue) for which the stationary droplet size R_stat quickly saturates with system size R_sys implying that R_stat
is set by molecular and kinetic parameters for large system size.
(ii) Extensive active droplets (red) where the stationary droplet size R_stat increases linearly with system size R_sys implying that R_stat is set by R_sys.
To explore these two behaviors, we numerically solved the dynamic equations with a continuous interface (Eqs. (<ref>)) in three dimensions.
system_size(b) shows the field ϕ_A(x) in four different stationary states, corresponding to the two different cases (i,ii) and two different system sizes (L=600 and L=150 in inset).
We consider the same values of the conserved quantity as in system_size(a).
In the case of ψ̅=-0.61 (blue), the radius of the stationary active droplet for the larger system is almost identical to that of the smaller system in the inset.
In the case of ψ̅=-0.425 (red), however, the stationary droplet radius in the larger system is much larger than in the smaller box.
We conclude that the solutions of the continuous dynamic Eqs. (<ref>) confirm the trend obtained from our single droplet study in the sharp interface limit discussed in system_size(a).
The qualitative change in the scaling of the stationary radius R_stat with system size R_sys
upon changing the conserved quantity ψ̅ suggests that there is a bifurcation between the regimes of intensive (blue) and extensive (red) active droplets.
To determine the nature of this bifurcation, we use the sharp interface model of a single droplet and consider the limit of an infinite system; details see Appendix <ref>.
We calculate the stable (solid) and unstable (dashed) stationary radii R_stat as a function of the conserved quantity ψ̅ (system_size(c)).
We find that R_stat diverges at the threshold values (see Appendix <ref> for details on the derivation)
ψ̅_crit^± = ± (ϕ_0/2) ( cos(α) + sin(α) )
In the vicinity of these critical values, the stationary droplet radius diverges with R_stat∝ |ψ̅-ψ̅_crit^±|^-1 within the blue domain, see App. <ref>.
Furthermore, stationary active droplets can only be found within a certain range of the conserved quantity ψ̅∈ [-ψ̅_bin,ψ̅_bin] (dashed-dotted black line in system_size(c)) with
ψ̅_bin^± = ± (ϕ_0/2) [ cos(α+β) + sin(α+β) ] / cos(β)
This range in the conserved quantity graphically corresponds to steady states for which the reaction nullcline lies within the binodal domain of the phase diagram (ph_d).
Moreover, for -ψ̅_bin<ψ̅<0, we find A-rich droplets in a solvent-rich phase, while for 0<ψ̅<ψ̅_bin, we find solvent-rich droplets in an A-rich phase.
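As a small numerical illustration of psi_crit and psi_bin (a sketch in our notation, with ϕ_0 = 1 as an assumption), the classification of a single droplet can be evaluated as follows.

```python
import numpy as np

def psi_crit(alpha, phi0=1.0):
    """Critical conserved quantity separating intensive from extensive droplets."""
    return 0.5 * phi0 * (np.cos(alpha) + np.sin(alpha))

def psi_bin(alpha, beta, phi0=1.0):
    """Bound on the conserved quantity for the existence of stationary droplets."""
    return 0.5 * phi0 * (np.cos(alpha + beta) + np.sin(alpha + beta)) / np.cos(beta)

def classify(psi_bar, alpha, beta, phi0=1.0):
    """Classify a single active droplet for a given conserved quantity psi_bar."""
    pc, pb = psi_crit(alpha, phi0), psi_bin(alpha, beta, phi0)
    if abs(psi_bar) >= pb:
        return "no stationary droplet"
    return "extensive" if abs(psi_bar) < pc else "intensive"

# Example for alpha = 0, beta = pi/8 (the values used in the text):
# classify(-0.61, 0.0, np.pi / 8)   -> 'intensive'
# classify(-0.425, 0.0, np.pi / 8)  -> 'extensive'
```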
By means of Eq. (<ref>) and Eq. (<ref>), we can study how the underlying molecular interactions (parameterized by the compositional angle α) and the activity parameter β affect the existence of droplets and the critical bifurcation; see Fig. <ref>(a,b).
Neglecting the finite size effects,
these results are independent of the specific values of kinetic parameters such as Γ and K, and otherwise only depend on thermodynamic parameters in the free energy (f).
We find that for β>0, the active driving leads to a regime of intensive droplets for all values -π/4 ≤α< π/4, see Fig. <ref>(a). Recall that in the segregative case, α = -π/4 corresponds to an effective binary model of components A and B, which phase separate from each other and are converted into each other via chemical reactions; this case was studied in Ref. <cit.>. Only in this limiting case are droplets always intensive, and the conserved quantity does not affect the stationary radii of chemically active droplets. For all values -π/4<α< π/4, there is an additional domain with extensive droplets. The transition between these two regimes occurs at a value of the conserved quantity expressed in Eq. (<ref>).
In the associative case α=π/4, ψ̅_crit^± = ψ̅_bin^±, i.e., no intensive droplets can be found, independently of the value of the activity parameter β. In this case, droplets behave like droplets in passive systems despite the presence of an active chemical reaction. Chemical reactions still drive diffusive fluxes of A and B between the phases. However, these fluxes do not affect the spatial distribution of the stationary conserved field ψ(x).
In the passive case with β=0, we find that ψ̅_bin = ψ̅_crit independent of the compositional angle α, see Fig. <ref>(b) leftmost panel (green domain). Indeed, passive droplets are always extensive since they behave as thermodynamic phases that scale with the system size.
However, for β>0, ψ̅_crit^±≶ψ_bin^±. Therefore, intensive droplets of finite size exist inside a domain that increases for larger activity parameter β (see Fig. <ref>(b) from left to right).
In summary, our analysis of single active droplets shows that there are two different classes of stationary states in the limit of an infinitely large system:
intensive active droplets and extensive active droplets. While an intensive active droplet adopts a finite size in a large system, an extensive active droplet scales with the system size.
There is a critical transition between both types of non-equilibrium steady states.
This transition is controlled by the interactions among the components, characterized in our model by the compositional angle α, the conserved quantity ψ̅, and the activity parameter β.
In the next section, we study emulsions composed of many active droplets and explore their behavior in the parameter regimes corresponding to intensive and extensive active droplets.
§ INTERACTIONS OF MANY DROPLETS
In this section, we study the dynamics in active emulsions by numerically solving the dynamic equations (dyn_phi) for the continuous fields ϕ_A(x,t) and ϕ_B(x,t) with a continuous interface in three dimensions.
We initialize the system in a homogeneous state at local chemical equilibrium, with the average concentration values ϕ̅_A and ϕ̅_B inside the binodal domain but outside of the spinodal domain.
Moreover, N_init small spherical droplets above the critical nucleation radius are randomly positioned in the system with ϕ_A=ϕ_0 inside (Eq. (<ref>)).
To avoid the fusion of such initially placed droplets right after initialization, we only accept random configurations with inter-droplet distances above a threshold value.
We now discuss representative results corresponding to the intermediate values of the compositional angle α=0 and the activity parameter β=π/8.
With this choice, psi_bin and psi_crit imply that the bounds on the conserved quantity ψ̅ for the existence of droplets are given by ψ̅_bin^± = ±ϕ_0/√(2) and the critical bifurcation between intensive and extensive droplets is ψ̅_crit^± = ± 0.5 ϕ_0.
The initial concentrations outside are chosen such that the total conserved quantity in the system is either ψ̅=-0.6 ϕ_0, or ψ̅=-0.4 ϕ_0.
These two values lie, respectively, in the regime where chemically active droplets are intensive (ψ̅<ψ̅_crit^-), or extensive (ψ̅_crit^-<ψ̅<ψ̅_crit^+).
The left-most panels of many_d(a) (grey) depicts the initial state of the active emulsion corresponding to an initial droplet number of N_init = 8 and N_init = 96, respectively.
In the remaining columns of many_d(a) (blue and red), we show the stationary states of the active emulsion for these two initial conditions and the aforementioned conserved quantities ψ̅=-0.6 ϕ_0 (blue) and ψ̅=-0.4 ϕ_0 (red).
We find that for ψ̅=-0.6 ϕ_0 (intensive regime), all droplets grow to the same size, i.e., a mono-disperse active emulsion emerges.
During this kinetics to the stationary state, a few droplets vanish compared to the initial condition.
Interestingly, for ψ̅=-0.6 ϕ_0, the stationary droplet radius changes only slightly for the two initial conditions.
In the right column of many_d(a) (red), we show the stationary states for ψ=-0.4 ϕ_0 (above the critical transition, extensive regime) for the same initial conditions as above. For the initial state composed of eight droplets only (first row), droplets elongate and sometimes branch, thereby forming tube-like structures. However, some of these branched structures can separate during the dynamics such that, in the end, N=17 different tubes of different lengths exist.
When initializing 96 droplets (second row), some of them elongate, while others stay almost spherical, depending on the initial distance to other droplets. However, none of them dissolve, such that we find N=96 objects in the stationary state. When even more droplets are initialized, some of the initial droplets dissolve, while the rest stay spherical and maximize their distance from each other, see many_d(d).
Despite these complex changes in droplet morphologies, there is a common principle in the regime where single active droplets are extensive (red):
even though the initial conditions and the number of droplet-like domains in the stationary state vary, the total phase volume is almost constant (many_d(b)).
In other words, when there are more droplet-like domains, they are smaller, such that the total phase volume is approximately conserved.
In contrast, in the regime where active droplets are intensive (blue), the total volume increases with the amount of initialized droplets. Most importantly, in this regime, the average droplet volume is roughly constant (many_d(c)) and approximately equal to the volume of one single droplet up to finite size effects.
Monodispersity in chemically active emulsions can be explained as follows:
Intensive droplets (blue) adopt a fixed size independent of the system size. When just a few droplets are initialized in a large system, they can be far apart, and whenever they are separated by a distance much longer than the reaction-diffusion length scale in the outside phase, they hardly interact with each other. As a consequence, their dynamics becomes stationary and their average radius takes a value similar to the stationary radius R_stat obtained from the single droplet analysis (see Sect. <ref>).
When more droplets are initialized, such that they are closer than this reaction-diffusion length scale, they weakly interact and get stationary at smaller sizes. This leads to the slight decrease of average droplet volume for large N, seen in many_d(c).
In all cases, droplets reach identical radii.
For extensive droplets (red), monodispersity is only found when many droplets are initialized. Indeed, when only a few droplets are initialized, they tend to grow until they feel the presence of their neighbors. The more droplets are initialized, the sooner this arrest occurs, decreasing the radii of droplets.
If the number of initialized droplets is too small, droplets grow to sizes where shape instabilities can occur.
Tubes form via an elongation instability, discussed in the next section, or spherical shells form via a spinodal instability at the center of large droplets <cit.>.
Crucially, when enough droplets are present, the arrest of growth happens earlier than shape instabilities can occur. As a consequence, droplets are spherical and reach identical radii (many_d(d)).
In summary, for extensive droplets in an emulsion (red),
the history of the emulsion matters for the final stationary mono-dispersed state or the emerging shape instabilities.
To confirm the validity of these arguments relying on a single droplet analysis, we compare the phase volume and the average droplet size with the analytic results obtained from the single droplet case in the sharp interface limit.
To this end, we considered N identical subsystems arranged in a hexagonal close-packed lattice that fills the total system volume V_sys, and calculated phase volumes and average sizes.
These analytically derived results (solid lines in many_d(b,c)) are in good agreement with the measured values in the numerical simulations of the active emulsions.
In the case of extensive droplets (dashed lines in many_d(b,c)), the total volumes of the phases are well captured by considering the limit of fast diffusion <cit.>.
§ SHAPE INSTABILITIES AND DROPLET DIVISION
In the previous section, we have seen that for extensive active droplets in an emulsion, an elongation instability can occur.
This instability is reminiscent of the Mullins-Sekerka instability that occurs in the diffusive growth of a single droplet without reactions in a large system.
It requires a (weak) deformation of a droplet's spherical shape and relies on the material depositing at interfacial domains of larger mean curvature.
As a result, these domains grow faster, further deforming the droplet shape.
Deformations are counteracted by surface tension, which tends to flatten the mean curvature of the interface. However, this effect weakens as the droplet becomes larger. Thus, there is a critical radius above which droplets constantly grow and deform (Mullins-Sekerka instability).
A similar instability can occur for both extensive and intensive active droplets.
Indeed, in a binary mixture where droplets are always intensive, such instability was shown to lead to droplet division <cit.>.
Here, we show that droplet dynamics after elongation vastly differ between intensive and extensive active droplets.
In the upper row of division (blue), we show snapshots of the dynamics of two intensive droplets (ψ̅=-0.6 ϕ_0) and compare it to the dynamics of two extensive droplets (ψ̅=-0.4 ϕ_0) in the lower row (red). For both cases, we choose identical kinetic parameters.
However, we have chosen the overall reaction rate K (overall_reaction_rate) such that the stationary droplet radius in the intensive regime exceeds the critical radius for the onset of the elongation instability.
In both cases, the two droplets are initially separated by a distance smaller than the reaction-diffusion length in the outside phase. Thus, the presence of the neighboring droplets leads to asymmetries that induce initial deformations, which get amplified by the elongation instability.
However, the division predominantly happens for intensive droplets; see division (upper row). Extensive droplets elongate (division lower row) and form tubes, as seen before.
This difference can be explained by focusing on the neck region. For intensive droplets, there is almost no growth in this region, such that a Plateau-Rayleigh instability can lead to a pinch-off. However, extensive droplets constantly grow, including in the neck region, thereby inhibiting the Plateau-Rayleigh instability.
Interestingly, with such cycles of growth and division of intensive droplets, the total phase volume can trivially grow just by increasing the number of droplets. As a result, these shape instabilities allow intensive droplets to fill the space. Thus, macroscopic phase volumes are reached, even in infinite systems, similar to the regime of extensive droplets.
§ MOLECULAR INTERACTIONS AFFECT THE ARREST OF RIPENING
In section <ref>, we have shown that extensive droplets grow until the interaction with neighboring droplets arrests their growth. However, in section <ref>, we discussed that in the associative case (α = π/4) where A and B have identical interactions with the solvent, the conserved field decouples from the active driving, leading to passive phase-separating dynamics.
In this section, we study how the emulsion dynamics is affected as molecular interactions approach the limit of the associative case (α→π/4). We find that extensive droplets can initially ripen like in the passive case until the effects of the active driving manifest and ripening arrests.
In ripe(a), we show snapshots of the dynamics of concentration field ϕ_A for four different values of the compositional angle α; see Supplementary Material for videos on the ripening dynamics. All numerical calculations are conducted with the same kinetic parameters, an activity parameter β=π/4, and a conserved quantity of ψ̅ = -0.35 ϕ_0. For these settings, all systems are populated by extensive droplets.
At t=0 (not shown in the figure), we initialize the concentrations homogeneously at the chemical steady state with small fluctuations (around 0.1% of ψ̅).
For the values of the compositional angle α considered, the system is within the spinodal regime of a non-reacting phase-separating mixture. To trigger the nucleation of many droplets, seen in the first row of ripe, we did not allow for chemical reactions at the early times of the spinodal instability (t≈ 2). Insights on the change of the nucleation dynamics in chemically active systems can be found in Ref. <cit.>.
While the nucleation process is similar for all the compositional angles α considered here, the long-time ripening dynamics depend strongly on α.
For α=0.19 π (red/first row in ripe(a)), some of the initial droplets dissolve, but the interaction between droplets causes the ripening to arrest.
As the interactions of A and B with the solvent become more similar, i.e., α→π/4 (associative case), this arrest of ripening occurs at a later stage of the dynamics. Prior to the arrest of ripening, smaller droplets dissolve while larger ones grow. Thus, once the arrest occurs and the emulsion becomes monodisperse, the stationary droplet radii are larger the closer α is to π/4 (associative case); see the second (α = 0.23 π) and third (α = 0.24 π) rows of ripe(a), in red. Only in the associative case α = π/4 is the arrest absent, and chemically active droplets ripen like passive emulsions (green/last row in ripe(a)). This means that the classical power laws of Ostwald ripening hold in the associative case (α = π/4), with ⟨ R ⟩∝ t^1/3 and N∝ t^-2/3 <cit.>,
while for decreasing values of α < π/4, the transition to such classical ripening laws happens at later stages of the dynamics (see ripe(b,c)).
§ DISCUSSION
We have introduced and studied a model of a minimal ternary mixture of the components A, B, and solvent that can phase separate while A and B convert into each other.
This conversion is maintained away from chemical equilibrium
by choosing reaction fluxes independent of the chemical potentials.
This choice prevents the system from simultaneously reaching chemical and phase equilibrium and allows it to adopt a non-equilibrium steady state.
The model is minimal because it exhibits a non-trivial conserved quantity of the reaction with an associated conserved field that is generally position-dependent, also in the non-equilibrium steady state.
Within this minimal model for active chemical reactions in phase-separating mixtures, we found a critical transition, determining whether the size of a single droplet scales with system size (extensive active droplet), like in passive systems,
or whether it reaches a finite, characteristic size (intensive active droplet).
The transition found in the single droplet analysis has crucial consequences for the behavior of emulsions composed of many droplets. For a value of the conserved quantity in the regime of an intensive droplet, a mono-disperse emulsion forms where the average size is approximately given by the stationary size of a single droplet. The reason for this approximate agreement is that droplets interact only weakly in the intensive droplet regime.
In contrast, extensive droplets in an emulsion constantly grow until they come close enough to interact strongly with each other. Once they interact, there can be initial ripening, but growth arrests.
The morphology of the resulting structures depends crucially on the initial conditions, i.e., on the history of the emulsion: emulsions containing only a few droplets deform via shape instabilities, while emulsions containing many droplets remain spherical and monodisperse. However, the total phase volume is independent of the number and sizes of droplets and scales with the system volume, similar to passive thermodynamic phases.
Our findings highlight the importance of several components in chemically active systems.
For example, accounting for a solvent component fundamentally alters the dynamics of chemically active droplets, reflected in the appearance of the critical transition when varying the conserved quantity.
We propose that understanding the effects of the conserved quantity(ies) in reacting systems away from equilibrium is crucial to correctly interpret emulsion dynamics in biological and chemical systems <cit.>.
Specifically, our theory can unravel the conserved quantity's effects in synthetic chemical systems, such as droplet size control in multi-component mixtures leading to chemically active coacervates <cit.>.
According to our theory, droplet division is more likely to occur in the regime α≃π/4 corresponding to intensive active droplets and in the case where A and B segregate from each other (segregative case).
We propose studying a single active droplet in a reaction container for different container sizes <cit.> and testing whether or not
the size of the active droplet scales with the container size (it does not for intensive active droplets, see Fig. <ref>(a)).
In the regime of intensive active droplets, division is more likely to be obtained. This procedure could be used to test synthesized actively reacting components for their division propensity and may thus pave the way to observe successive division events in larger, mass-supplied containers containing many active droplets.
§ APPENDIX
§ CHEMICAL REACTION RATES
In the following, we will use the overall rate K = k_AB + k_BA and the activity parameter β as parameters for the chemical reactions. These parameters are related to the forward rate k_BA and backward rate k_AB, appearing in Eqs. (<ref>) via the following relationships: k_BA = K sin(α+β)/[cos(α+β)+ sin(α+β)] and k_AB = K cos(α+β)/[cos(α+β)+ sin(α+β)], where α is the compositional angle. Using this parameterization, the contour lines of ψ̅, where the conserved quantity is constant, have a fixed slope of -1 that is
independent of the compositional angle α. However, note that the slope of the chemical steady state changes when varying α. Moreover, changes in α do not affect the activity in the system, in the sense that they do not alter the angle between the reaction nullcline and the tie lines. Thus, with this new parameterization, we can vary the compositional angle α for studying how different interactions affect active droplets while keeping the activity parameter β fixed. The rate K sets only a time scale.
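For concreteness, the relations above can be evaluated directly; the short Python sketch below (all names and example values are ours, not taken from any released code) computes k_AB and k_BA from K, α and β and checks that they sum to the overall rate K.

```python
import numpy as np

def reaction_rates(K, alpha, beta):
    """Forward rate k_BA and backward rate k_AB from the overall rate K and the angles."""
    denom = np.cos(alpha + beta) + np.sin(alpha + beta)
    k_BA = K * np.sin(alpha + beta) / denom
    k_AB = K * np.cos(alpha + beta) / denom
    return k_AB, k_BA

K, alpha, beta = 1.0, 0.19 * np.pi, 0.12 * np.pi   # illustrative values
k_AB, k_BA = reaction_rates(K, alpha, beta)
print(k_AB, k_BA, np.isclose(k_AB + k_BA, K))      # the two rates always sum to K
```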
§ THE SHARP-INTERFACE LIMIT
Similar to the droplet dynamics of passive phases (see Ref. <cit.> for example), we can study the dynamics of a single active droplet in the limit of a sharp interface.
This approach for general multicomponent mixtures is discussed in detail in Ref. <cit.>. In this section, we derive the stationary state conditions of a chemically active droplet in the sharp-interface limit for a ternary mixture described by the free energy given in Eqs. (<ref>). With this choice of free energy, analytical results for the stationary radius and the critical value for the transition between intensive and extensive active droplets can be determined.
For a single droplet with radial symmetry sitting at the origin (r being the radial coordinate), we split the field into two domains (inside/outside, abbreviated in/out in the following) and couple these domains via boundary conditions at an interface at position r=R.
Furthermore, we linearize the dynamics of the fields ϕ_A and ϕ_B in dyn_phi around the corresponding ϕ-values at the interface.
In this approximation, the stationary state of a chemically active droplet in our ternary mixture is determined by the ϕ-values of A/B on both sides of the interface, named Φ_A^, Φ_A^, Φ_B^, and Φ_B^, and the interface position R. In the following, we explain how these five values are determined.
Locally, at the interface, we assume an equilibrium of phase separation, i.e., identical chemical potentials and osmotic pressure across the interface.
Furthermore, global conservation laws and vanishing flux differences across the interface fix a unique phase equilibrium locally at the interface and, therefore, a unique non-equilibrium steady state of the droplet, see Ref. <cit.>.
For our simple free-energy model (<ref>), we can obtain the four values of Φ_i^in/out generally as a function of R and α.
The determination of the stationary position of the interface R_stat additionally requires flux relations (see below). Assuming local phase equilibrium at the interface constrains its concentrations to the two straight binodal lines shown in ph_d(b) for an infinitely large system. For finite-sized droplets, Laplace pressure effects of an interface with surface tension γ are corrected up to linear order via the so-called Gibbs-Thomson coefficients c <cit.>. We parameterize this phase equilibrium by the intersection of its tie line with the ϕ_2-axis in the ϕ_1 and ϕ_2 representation (as defined in f). We call this intersection ϕ_2^inter, and write
[ Φ^in/out_A; Φ^in/out_B ] =
[ cosα sinα; -sinα cosα ][ ±ϕ_0 + γ c H; ϕ_2^inter ]
where we assumed that the conserved quantity is enriched inside and diluted outside. For such a spherical symmetric droplet, the mean curvature H is given by H = 2/R. If the dilute phase builds up the spherical droplet, the two labels in/out in interface_cond_app have to be swapped and H=-2/R.
Finally, we have to determine the intersection ϕ_2^inter via a global conservation law of the conserved quantity. Therefore, we note that while ϕ_A and ϕ_B have gradients in space, the conserved quantity is constant in space but jumps at the interface.
Thus, for a finite radial-symmetric system (system size R_sys) with an average amount of the conserved quantity ψ̅, we know
1/2(Φ^in_A+Φ^in_B) R^3 + 1/2(Φ^out_A+Φ^out_B) (R_sys^3 - R^3) = ψ̅ R_sys^3
Using interface_cond_app, we find
ϕ_2^inter =
2 R R_sys^3 ψ̅ + (2 R^4 - (2 c γ+ R) R_sys^3) (cos(α) + sin(α))/R R_sys^3 (cos(α) - sin(α)).
In an infinite system, however, the finite-sized droplet does not contribute to the average. Thus, the outside concentrations must fulfill
ψ̅ = 1/2(Φ^out_A+Φ^out_B)
and therefore, again by using interface_cond_app, we find
ϕ_2^inter = 2 R ψ̅ - (2 c γ + R) (cos(α) + sin(α))/R (cos(α) - sin(α))
We note in passing that it is only possible to derive the interface concentrations independently of the diffusivities and kinetic reaction rates due to the very symmetric form of the free energy density. Thus, these kinetic coefficients have to be taken into account only for the derivation of the stationary radius. In general, all five values have to be determined in parallel, a step that typically requires numerical solving schemes, see Ref. <cit.>.
An interface can only be stationary when the diffusive fluxes j_i^in/out are balanced across the interface, i.e. j_A^in(R)= j_A^out(R), and j_B^in(R)= j_B^out(R).
These diffusive fluxes in the stationary state arise from the stationary concentration profiles ϕ_i^in/out(r) via j_i^in/out= - D_i^in/out∂_r ϕ_i^in/out(r), where D_i^in/out is the diffusion coefficient of component i in the corresponding domain obtained by linearizing. By construction, it is guaranteed that j_A^in = - j_B^in and j_A^out = - j_B^out, such that once one of the flux balances is fulfilled, the second follows trivially.
Therefore, we derive only j_A^in/out in the following.
After linearizing, we can determine these fluxes through the analytical stationary ϕ_A-profiles in the two domains (solving the corresponding inhomogeneous Laplace equation in radial symmetry). From here, we can derive the total flux profiles, including the fluxes at the interface. We find
j_A^in(R) = (Φ_B^in(R) k_AB^in - Φ_A^in(R) k_BA^in)
×( coth(λ^in R)/λ^in - 1/((λ^in)^2 R))
where the rates k_ij^in are the linearized reaction rates from dyn_phi and λ^in = √((D_A^in k_AB^in + D_B^in k_BA^in)/(D_A^in D_B^in)). Note that we use the dependency of the interface concentrations Φ_i^in(R) on the position of the interface R derived above.
In the outside domain (r>R), the solution reads
j_A^out(R) = (Φ_B^out(R) k_AB^out - Φ_A^out(R) k_BA^out)
× [λ^out (R - R_sys) cosh(λ^out (R - R_sys)) + ((λ^out)^2 R R_sys-1) sinh(λ^out (R - R_sys))]
/ [(λ^out)^2 R (λ^out R_sys cosh(λ^out (R - R_sys)) + sinh(λ^out (R - R_sys)))]
for a finite radial symmetric system with a no-flux boundary condition at the system size R_sys, or
j_A^out(R) = -(Φ_B^out(R) k_AB^out - Φ_A^out(R) k_BA^out)
× [k_BA^out + (D_A^out/D_B^out) k_AB^out + (k_AB^out + k_BA^out) λ^out R]
/ [(k_AB^out + k_BA^out)(λ^out)^2 R]
for infinitely large systems, where again k_ij^out are the linearized reaction rates from dyn_phi and λ^out = √((D_A^out k_AB^out + D_B^out k_BA^out)/(D_A^out D_B^out)).
The stationary droplet radii can now be found by using the interface concentrations stated in interface_cond_app and Bintersec_finit_app or Bintersec_infinit_app, and numerically searching for the positions R at which jAin_app equals jAout_finite_app or jAout_infinite_app, depending on the system size.
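As a schematic illustration of this last step, the sketch below balances the two flux expressions for an infinitely large system, assuming equal diffusivities and rates inside and outside. It relies on the expressions reconstructed above (including the coth factor and the in/out labels) and on illustrative parameter values, so it should be read as a sketch of the procedure rather than as the authors' solver.

```python
import numpy as np
from scipy.optimize import brentq

# illustrative parameters (equal kinetic coefficients inside and outside; K = 1 sets the time scale)
phi0, gamma_c = 1.0, 1.0 / 6.0
D_A = D_B = 714.0
alpha, beta = 0.0, 0.12 * np.pi
psi_bar = -0.55 * phi0

denom = np.cos(alpha + beta) + np.sin(alpha + beta)
k_BA, k_AB = np.sin(alpha + beta) / denom, np.cos(alpha + beta) / denom
lam = np.sqrt((D_A * k_AB + D_B * k_BA) / (D_A * D_B))

def interface_concentrations(R):
    """Phi_A, Phi_B on both sides of the interface (rotation of the phi_1, phi_2 values)."""
    H = 2.0 / R
    phi2 = (2 * R * psi_bar - (2 * gamma_c + R) * (np.cos(alpha) + np.sin(alpha))) \
        / (R * (np.cos(alpha) - np.sin(alpha)))
    rot = np.array([[np.cos(alpha), np.sin(alpha)], [-np.sin(alpha), np.cos(alpha)]])
    inside = rot @ np.array([phi0 + gamma_c * H, phi2])
    outside = rot @ np.array([-phi0 + gamma_c * H, phi2])
    return inside, outside

def flux_difference(R):
    (pa_in, pb_in), (pa_out, pb_out) = interface_concentrations(R)
    j_in = (pb_in * k_AB - pa_in * k_BA) * (1.0 / (lam * np.tanh(lam * R)) - 1.0 / (lam**2 * R))
    j_out = -(pb_out * k_AB - pa_out * k_BA) * (
        k_BA + (D_A / D_B) * k_AB + (k_AB + k_BA) * lam * R) / ((k_AB + k_BA) * lam**2 * R)
    return j_in - j_out

radii = np.linspace(1.0, 500.0, 2000)
values = np.array([flux_difference(R) for R in radii])
sign_changes = np.where(np.sign(values[:-1]) != np.sign(values[1:]))[0]
if sign_changes.size:
    i = sign_changes[0]
    print("R_stat ≈", brentq(flux_difference, radii[i], radii[i + 1]))
else:
    print("no flux balance in the scanned range for these illustrative parameters")
```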
§ THE CRITICAL VALUE Ψ_CRIT AND THE SCALING OF THE STATIONARY RADIUS IN ITS VICINITY
In App. <ref>, we explained how the stationary droplet radii can be obtained in the sharp-interface limit. Here, we first argue how we can obtain the value of the conserved quantity at the transition in an infinite system and, second, show how the stationary radius scales in the vicinity of this transition.
To find the critical value of the conserved quantity, we balance the fluxes across the interface for an infinitely large droplet,
lim_R→∞ j_A^in(R) = lim_R→∞ j_A^out(R)
from jAin_app and jAout_infinite_app. When interface_cond_app and Bintersec_infinit_app are applied in these expressions, a dependency on the average conserved quantity ψ̅ follows.
We find that there is only one value of this conserved quantity for which crit_trans_con_app is true. This value reads
ψ̅_crit =
-ϕ_0/2 [ (D_A^in k_AB^in + D_B^in k_BA^in)(k_AB^out + k_BA^out) + D_A^in D_B^in λ^in λ^out ((k_AB^in - k_BA^in) cos(2α) + (k_AB^in + k_BA^in) sin(2α)) ]
/ { sin(α) [ (D_A^in k_AB^in + D_B^in k_BA^in)(k_BA^out + k_AB^out cot(α)) + D_A^in D_B^in λ^in λ^out (k_BA^in + k_AB^in cot(α)) ] }
When we apply our parameterization of the reaction rates, i.e.,
k_AB^in/out = K^in/outcos(α+β)/[cos(α+β)+ sin(α+β)] and k_BA^in/out= K^in/outsin(α+β)/[cos(α+β)+ sin(α+β)], see App. <ref>, we find
ψ̅_crit as a function of the kinetic rates K^in/out, the diffusivities D_i^in/out, and the angles α and β. With the assumptions D_A^in = D_A^out, D_B^in = D_B^out and K^in = K^out, this expression simplifies and becomes the solution given in psi_crit.
Furthermore, we considered a droplet of the dense phase in a dilute environment, which is the case for ψ̅<0 (- branch). For ψ̅>0, the interface concentrations are swapped, resulting in the + branch of the solution of psi_crit.
Furthermore, we can check the scaling of the stationary radius of intensive active droplets in the vicinity of the critical transition. For this, we expand j_A^in(R)= j_A^out(R) for large R. We find the stationary radius R_stat as a function of ψ̅ that scales like R_stat∝ |ψ̅ - ψ̅_crit|^-1. The general solution can be obtained from the equations above straightforwardly. Due to its length, however, in this appendix we restrict ourselves to the special case of D_A^in = D_A^out = D_B^in = D_B^out = D and K^in = K^out =K. We find for the case of a droplet of the dense phase in a dilute environment (ψ̅<ψ̅_crit<0)
R_stat = - Λϕ_0 tan(β) (cos(α) - sin(α)) / (ψ̅- ψ̅_crit)
where we defined the length scale Λ = √(D/K) and used the fact that for large droplets, the Laplace pressure becomes negligible.
§ PARAMETER CHOICES AND METHODS USED FOR FIGURES
To numerically solve the dynamics equations in this work, we rescale time t · K with K denoting the overall rate (overall_reaction_rate),
and position x/ℓ, where ℓ=√(ϕ_0^2 κ_A/b_1) is half of the interface width in the continuous model.
For simplicity, we consider the overall rate equal inside and outside for all studies (K^in=K^out=K).
These rescalings yield the following non-dimensional parameters in our numerical studies:
D_i^in/out /(ℓ^2K), γ c/ ℓ, and Γ/(ℓ^2K b_1 ).
§.§ Figures showing results obtained in the sharp interface model
In the previous sections, we used the sharp interface limit to calculate the results shown in
Fig. <ref>(a,b).
Using a standard root-finding scheme, we can numerically solve for the specific values of the stationary interfaces shown in Fig. <ref> in the main text. The parameter values used to produce these figures are D_A^in/out/(ℓ^2K)=0.0014^-1, D_B^in/out/(ℓ^2K)=0.0014^-1, α = 0, β = 0.12 π, γ c/ℓ = 1/6, and ϕ_0 = 1.
§.§ Figures showing results obtained in the continuous model
All the results obtained in the continuous model were obtained by using an implicit-explicit Runge-Kutta solver of the second-third order. We considered periodic boundary conditions and approximated the higher-order derivatives using pseudo-spectral methods on a regular lattice.
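For orientation, the following minimal sketch shows the basic structure of such a pseudo-spectral update on a periodic grid. It uses a first-order semi-implicit (IMEX) step and a simplified two-field double-well model with A⇌B conversion as a stand-in; the actual calculations use a second/third-order IMEX Runge–Kutta scheme and the full ternary free energy, so all parameter values and the local potential below are purely illustrative.

```python
import numpy as np

N, L = 128, 50.0
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2

Gamma, kappa, b1, phi0 = 1.0, 1.0, 1.0, 1.0      # illustrative, not the values used in the paper
k_AB, k_BA, dt = 0.01, 0.02, 0.1

rng = np.random.default_rng(0)
phiA = -0.35 + 0.01 * rng.standard_normal((N, N))
phiB = -0.35 + 0.01 * rng.standard_normal((N, N))

def mu_local(phi):
    # local part of a double-well chemical potential (toy stand-in for the ternary free energy)
    return b1 * (phi**3 - phi0**2 * phi)

denom = 1.0 + dt * Gamma * kappa * k2**2         # stiff interfacial term treated implicitly
for step in range(200):
    rA = k_AB * phiB - k_BA * phiA               # net chemical production of A
    phiA_hat = (np.fft.fft2(phiA) + dt * (-Gamma * k2 * np.fft.fft2(mu_local(phiA))
                + np.fft.fft2(rA))) / denom
    phiB_hat = (np.fft.fft2(phiB) + dt * (-Gamma * k2 * np.fft.fft2(mu_local(phiB))
                + np.fft.fft2(-rA))) / denom
    phiA, phiB = np.fft.ifft2(phiA_hat).real, np.fft.ifft2(phiB_hat).real

print(phiA.mean(), phiA.std())                   # phase separation raises the standard deviation
```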
For simplicity, we used b_1/ϕ_0^2=b_2=ϕ_0=1, α = 0, κ_A = κ_B and β=0.12π, if not otherwise explicitly mentioned in the figure caption.
The mobilities were set to Γ/(ℓ^2K b_1)=0.0014^-1 in Fig. <ref>, Γ/(ℓ^2K b_1 )=0.0028^-1 in Fig. <ref>, Γ/(ℓ^2K b_1 )=0.00112^-1 in Fig. <ref>, and Γ/(ℓ^2K b_1 ϕ_0^2)=0.004^-1 in Fig. <ref>.
Furthermore, we used N_1 × N_2 or N_1 × N_2 × N_3 grid points for two-dimensional or three-dimensional systems of sizes of L_1 × L_2 or L_1 × L_2 × L_3, respectively.
The specific values for the corresponding figures are listed in Table <ref>.
|
http://arxiv.org/abs/2409.02889v1 | 20240904172521 | LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture | [
"Xidong Wang",
"Dingjie Song",
"Shunian Chen",
"Chen Zhang",
"Benyou Wang"
] | cs.CL | [
"cs.CL",
"cs.AI",
"cs.CV",
"cs.MM"
] |
§ ABSTRACT
Expanding the long-context capabilities of Multi-modal Large Language Models (MLLMs) is crucial for video understanding, high-resolution image understanding, and multi-modal agents. This involves a series of systematic optimizations, including model architecture, data construction and training strategy, particularly addressing challenges such as degraded performance with more images and high computational costs. In this paper, we adapt the model architecture to a hybrid of Mamba and Transformer blocks, approach data construction with both temporal and spatial dependencies among multiple images and employ a progressive training strategy. The released model LongLLaVA (Long-Context Large Language and Vision Assistant) is the first hybrid MLLM, which achieved a better balance between efficiency and effectiveness. LongLLaVA not only achieves competitive results across various benchmarks, but also maintains high throughput and low memory consumption. Especially, it could process nearly a thousand images on a single A100 80GB GPU, showing promising application prospects for a wide range of tasks.
§ INTRODUCTION
The rapid advancement of MLLMs <cit.> has demonstrated their remarkable capabilities across various applications <cit.>. However, multi-image scenarios remain an important yet under-explored aspect. In particular, expanding the context of MLLMs to understand longer videos <cit.> and higher-resolution images <cit.>, and to make decisions based on more historical messages <cit.>, is crucial for enhancing user experience <cit.> and further broadening MLLMs' application scope <cit.>.
However, extending the context length of MLLMs to improve their usability poses challenges related to degraded performance and high computational costs when processing more images.
To maintain the performance in longer context, some studies <cit.> have concentrated on curating long-context training data involving multiple images to enhance performance. Additionally, other research efforts have explored innovative training strategies <cit.> to mitigate performance declines.
Regarding the issue of high computational costs, <cit.> have made strides in improving multi-node efficiency by reducing communication costs. However, there remains a gap in solutions for accelerating the computation itself when managing longer contexts.
To address the challenges mentioned above, we propose a systematic solution called LongLLaVA, especially using a hybrid architecture for acceleration.
This solution comprehensively optimizes across three dimensions: Multi-modal architecture, Data construction, and Training strategy.
* For Multi-modal architecture, we adopt a hybrid architecture combining Transformer with Mamba and propose an efficient image representation method that applies 2D pooling to compress image tokens, significantly reducing computational costs while maintaining performance.
* For Data construction, we have designed unique formats for different tasks, enabling the model to distinguish between temporal and spatial dependencies among images.
* In terms of Training strategy, we employ a three-phase method for multi-modal adaptation—Single-image Alignment, Single-image Instruction-tuning, and Multi-image Instruction-tuning—to incrementally enhance the model’s ability to handle multi-modal long contexts.
Experimental results show that LongLLaVA excels in understanding multi-modal long contexts with high efficiency. It leads in retrieval, counting, and ordering tasks in VNBench <cit.> and achieves nearly 100% accuracy with 1,000 images on a single 80GB GPU in the Needle-In-A-Haystack evaluation <cit.>.
Our summarized contributions are as follows:
* We introduce LongLLaVA, a solution optimized through data construction, training strategies, and multi-modal architecture, effectively balancing performance and efficiency. To the best of our knowledge, this is the first hybrid architecture for MLLMs.
* LongLLaVA demonstrates exceptional performance in multi-modal long-context understanding, excelling in retrieval, counting, and ordering tasks.
In our commitment to transparency and community research, we will open source all models, codes, and datasets associated with LongLLaVA.
§ TOWARDS SCALING UP THE IMAGE NUMBER IN MLLMS
§.§ The Curse of Image Numbers
The model's input length increases rapidly with the number of images, leading to issues such as degraded model performance and high inference costs.
With the evolution of MLLM technology, many existing open-source MLLMs have demonstrated capabilities on par with closed-source models in single-image tasks <cit.>.
However, when the task expands to multiple images, the capabilities of these models degrade significantly. Whether it is a time-related or semantically-related multi-image task, the models struggle considerably, lagging far behind the performance of closed-source models <cit.>. Such capability degradation severely limits the application scenarios of open-source MLLMs, and the open-source community urgently needs a systematic solution to address this issue.
Excessive Input Length
Aside from the degraded performance, another challenge is managing the excessive input length when processing a larger number of images.
The visual encoder components of MLLMs typically transform each image into a large number of tokens. For instance, the CLIP encoder[openai/clip-vit-base-patch32] <cit.> generates 576 image tokens for an image with a resolution of 336 pixels. Representing a three-minute video at 1 FPS would require 3×60×576=103,680 tokens.
This results in a rapid increase in computational demand and memory usage, significantly degrading user experience and escalating service costs.
Although some works have proposed compressing image tokens in 2D or 3D formats <cit.>, this often comes at the expense of performance. Solely relying on compression methods to accommodate multiple images without substantial performance degradation is insufficient.
High Computational and Memory Complexity
The excessive input length results in high computational and memory complexity.
The LLM part of existing MLLMs primarily relies on Transformer architectures, whose computational complexity grows quadratically with increases in sequence length. As the number of images (or sequence length) increases, memory usage becomes exceptionally high due to the need to store KV-Cache.
As shown in Figure <ref>, current models on a single 80GB GPU can theoretically handle up to 384 images using quantization.
Current efforts mitigate this through techniques like ring attention <cit.> or sequence parallelism <cit.>, but these solutions introduce greater time overhead; others switch to the Mamba architecture <cit.> to address these challenges. However, the capability for In-Context Learning (ICL) in multi-image scenarios is indispensable, posing a challenge for Mamba models <cit.>. Therefore, there is a need for a balanced approach to model architecture in multimodal contexts that both handles the high number of image tokens and reduces computational complexity.
§.§ Motivation for Hybrid Architecture
Effectiveness-efficiency Balance
To balance effectiveness and efficiency in multimodal long-context scenarios, we propose to use a Mamba-Transformer hybrid architecture.
As shown in Table <ref>, the Mamba-Transformer hybrid architecture <cit.> is distinguished by its superior performance on ICL task and other benchmark tests, matching the performance of Mixtral-8x7B <cit.>.
In terms of efficiency, the Mamba-Transformer hybrid architecture benefits from Mamba's linear computational complexity. For example, the throughput of Jamba, a popular hybrid architecture, is typically three times that of Mixtral, which already utilizes sliding-window techniques to reduce inference complexity. Jamba only requires of KV-Cache memory to process a 256K token sequence under activation parameters with precision, substantially less than comparable models. Moreover, in scenarios using a single A100 80GB GPU with precision, Jamba can handle up to 140K tokens in context length, significantly surpassing the 80K of LLaMA2-7B <cit.> and the 60K of Mixtral 8×7B.
§.§ The Benefit of Scaling Up the Image Number
Adopting more images significantly broadens the application scenarios for current MLLMs. We will explore this from two dimensions: Temporal Expansion and Spatial Expansion.
Temporal Expansion.
Understanding the temporal dependencies between images is crucial for a variety of applications. In multi-modal assistants, it enhances real-time recall capabilities, which is particularly beneficial for the elderly <cit.>. For mobile agents, it enables more personalized services and improves task planning <cit.>. In the healthcare sector, it assists in anomaly detection in 3D medical videos, thereby reducing diagnostic errors <cit.>.
Spatial Expansion.
When dealing with high-resolution images <cit.> or when detailed understanding of images <cit.> is required, images are often decomposed into sub-images. This process highlights the importance of grasping spatial dependencies among these sub-images. In remote sensing, an increased number of images enhances both coverage and granularity <cit.>. In pathology, it minimizes information loss and improves diagnostic accuracy <cit.>. In the field of Molecular Learning, it facilitates the processing of complex reactions and the analysis of larger molecular graphs <cit.>.
§ LONGLLAVA: SCALING LLAVA TO LONGER CONTEXT
To address the aforementioned challenges and enhance the model's adaptability to long-context, multi-image scenarios, we introduce improvements from three perspectives: multi-modal model architecture (Sec. <ref>), data processing protocol (Sec. <ref>), and training strategy (Sec. <ref>).
§.§ Multi-modal Architecture
Our multimodal architecture is constructed around three core components inspired by LLaVA <cit.>: the Vision Encoder, the Projector, and the LLM.
Vision Information Processing.
We employ CLIP[] as the vision encoder to encode visual information and a two-layer MLP as the projector to map vision features into the text embedding space suitable for the LLM. Prior to projection, bilinear pooling is applied, reducing the token representation of an image from 576 to 144 by aggregating 2×2 patch units into a single token. This approach effectively conserves training and inference time while maintaining essential spatial relationships between patches. Further details on the effectiveness of this strategy are provided in Appendix <ref>.
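The pooling step itself is a small amount of shape bookkeeping; the NumPy sketch below assumes a 24×24 patch grid and simple averaging of each 2×2 patch unit (the exact pooling operator used by the model is not spelled out here, so averaging is a stand-in), reducing 576 tokens to 144 while preserving the spatial layout.

```python
import numpy as np

def pool_2x2(tokens, grid=24):
    """tokens: (grid*grid, d) patch features; returns (grid//2 * grid//2, d)."""
    d = tokens.shape[-1]
    x = tokens.reshape(grid, grid, d)                  # restore the 2D patch layout
    x = x.reshape(grid // 2, 2, grid // 2, 2, d)       # group 2x2 patch units
    return x.mean(axis=(1, 3)).reshape(-1, d)          # average each unit -> 144 tokens

image_tokens = np.random.randn(576, 1024)              # 24x24 patches, hypothetical feature dim
print(pool_2x2(image_tokens).shape)                    # (144, 1024)
```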
Hybrid LLM Architecture.
Our model employs a hybrid LLM architecture that integrates Transformer and Mamba layers in a 7:1 ratio, as depicted in Figure <ref>. It also features a Mixture of Experts (MoE) approach in every other layer, utilizing 16 experts and selecting the top-2 experts for each token. RMSNorm <cit.> is used between layers to enhance normalization, although positional embeddings are omitted. The model incorporates Grouped Query Attention (GQA) <cit.> and SwiGLU activation functions <cit.>, similar to other large language models. The total parameter count of the model is , with activation parameters during inference totaling .
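To make the layer arrangement concrete, the toy sketch below generates a hybrid layer schedule with a 7:1 Mamba-to-Transformer ratio and an MoE feed-forward block in every other layer (16 experts, top-2 routing); the block count and naming are illustrative assumptions, not the released configuration.

```python
def hybrid_schedule(num_blocks=4, mamba_per_block=7, attn_per_block=1, moe_every=2):
    """Return a list of (mixer, ffn) layer descriptors for a Mamba/Transformer hybrid."""
    layers = []
    for _ in range(num_blocks):
        mixers = ["mamba"] * mamba_per_block + ["attention"] * attn_per_block
        for mixer in mixers:
            idx = len(layers)
            ffn = "moe(16 experts, top-2)" if idx % moe_every == 1 else "dense mlp"
            layers.append((mixer, ffn))
    return layers

for i, (mixer, ffn) in enumerate(hybrid_schedule()):
    print(f"layer {i:02d}: {mixer:9s} + {ffn}")
```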
§.§ Data Processing Protocol
To ensure that the model effectively distinguishes between temporal and spatial dependencies among images in multi-image scenarios and performs well across various tasks, we meticulously differentiated special characters in different scenarios. As shown in Figure <ref>, these special characters comprehensively address the various relationships between images in different contexts, thereby enhancing the model's adaptability to diverse tasks.
Regular Single and Multiple Images: For regular single and multiple image inputs, we enclose the image tokens with a pair of special boundary tokens, helping the model differentiate between image and text tokens.
Video: For video inputs, to enable the model to understand the temporal relationship between frames, we first enclose the image tokens of each frame with the same pair of boundary tokens. Additionally, we add a special separator symbol between different frames to represent the temporal dependency between them.
High Resolution Image: For complex single-image understanding that requires dividing the image into multiple sub-images, we use a dedicated separator token to separate the main image from its sub-images. For the arrangement of sub-images, we traverse from the top-left to the bottom-right, adding a split-line symbol between rows of sub-images to preserve their relative spatial positions.
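A minimal formatting helper illustrating these conventions is sketched below; the literal token strings (IMG_BEGIN, FRAME_SEP, etc.) are hypothetical placeholders, since the exact special tokens are not reproduced in this text.

```python
# Hypothetical token names -- the exact special-token strings are not reproduced here.
IMG_BEGIN, IMG_END = "<img>", "</img>"
FRAME_SEP, SUBIMG_SEP, ROW_SEP = "<t>", "<sub>", "<row>"

def format_video(frame_tokens):
    """Join per-frame token strings with a temporal separator between frames."""
    frames = [f"{IMG_BEGIN}{frame}{IMG_END}" for frame in frame_tokens]
    return FRAME_SEP.join(frames)

def format_high_res(main_tokens, subimage_rows):
    """Main image first, then sub-images row by row (top-left to bottom-right)."""
    parts = [f"{IMG_BEGIN}{main_tokens}{IMG_END}", SUBIMG_SEP]
    for row in subimage_rows:
        parts.append("".join(f"{IMG_BEGIN}{s}{IMG_END}" for s in row))
        parts.append(ROW_SEP)
    return "".join(parts[:-1])   # drop the trailing row separator

print(format_video(["frame0_tokens", "frame1_tokens"]))
print(format_high_res("main_tokens", [["s00", "s01"], ["s10", "s11"]]))
```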
§.§ Training Strategy
In our training strategy, we implement single-modal and multi-modal adaptations to transform a pre-trained language model into a multimodal long-context model.
Pure-text Instruction-tuning.
We initially enhance the pre-trained language model's ability to follow instructions of varying lengths in pure-text contexts. This is achieved using a comprehensive dataset totaling 278k pure text entries from Evol-instruct-GPT4 <cit.>, WildChat <cit.>, and LongAlign <cit.>.
For multi-modal adaptation, following the Single-image Alignment and Single-image Instruction-tuning stages in LLaVA <cit.>, we introduce a Multi-image Instruction-tuning stage to progressively enhance the model's long-context capabilities.
We adopt progressive training not only for better control of variables but also to increase model reusability <cit.>. The specific dataset usage is detailed in Figure <ref>.
Stage I: Single-image Alignment.
This stage is to align visual modal features with textual modality. We utilize datasets such as ALLaVA-Caption <cit.> and ShareGPT4V <cit.>, which comprise approximately 600K high-quality image-caption pairs. During this phase, only the projector is trained while freezing the parameters of the Visual Encoder and LLM.
Stage II: Single-image Instruction-tuning.
This stage aims to endow the model with multimodal instruction-following capabilities. We use datasets like LLaVA-1.5 <cit.> and Mantis-Single <cit.>, totaling around 932K high-quality question-answer pairs. Here, only the Visual Encoder is frozen, and the projector and LLM parts are trained. This process ultimately results in the development of LongLLaVA (single image).
Stage III: Multi-image Instruction-tuning.
In this stage, the model is trained to follow instructions in multimodal long-context scenarios. We sample 200K, 200K and 50K data items from Mantis <cit.>, VideoChat2 <cit.> and ShareGPT4Video <cit.>, respectively. To preserve the model's single-image comprehension and pure-text dialogue capabilities, we include an additional 200K and 50K data items from the Single-image Instruction-tuning and Pure-text Instruction-tuning phases as the component. Furthermore, to enhance the model's ability to interpret complex single images segmented into multiple sub-images, we extract 50K data items from the Single-image Instruction-tuning phase, perform padding and segmentation, and divide the original images into sub-images of size 336×336 as the component. This culminates in the development of LongLLaVA.
§ EXPERIMENTS
§.§ Training Details.
For training, we utilize random sampling to concatenate data items into a token length of 40,960, separated by the token. This approach helps in managing extensive datasets and ensuring diverse coverage of different data segments. Training is executed across three compute nodes, each equipped with eight A800 GPUs, leveraging DeepSpeed Zero-3 as the distributed strategy to enhance scalability and efficiency. We employ a cosine learning rate scheduler with a warm-up rate of , set the training epoch to , and the learning rate to . This configuration is designed to balance learning speed and model convergence effectively.
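The packing of variable-length samples into fixed 40,960-token training sequences can be sketched as follows; the greedy strategy and the placeholder separator id are our assumptions for illustration.

```python
import random

MAX_LEN, SEP = 40960, -1   # SEP stands in for the (unspecified) separator token id

def pack_samples(samples, max_len=MAX_LEN):
    """Greedily concatenate randomly ordered token lists into sequences of at most max_len."""
    random.shuffle(samples)
    packed, current = [], []
    for tokens in samples:
        extra = len(tokens) + (1 if current else 0)
        if current and len(current) + extra > max_len:
            packed.append(current)
            current = []
        if current:
            current.append(SEP)
        current.extend(tokens)
    if current:
        packed.append(current)
    return packed

samples = [[1] * random.randint(500, 20000) for _ in range(20)]
print([len(seq) for seq in pack_samples(samples)])
```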
§.§ Evaluation Setup
We evaluate the model's multimodal long-context understanding capabilities using the multi-image evaluation and conduct single-image evaluations to explore basic functionalities. For details on the single-image evaluation, please refer to Appendix <ref>. Both LongLLaVA (single image) and LongLLaVA are evaluated using quantization with the temperature set to zero to ensure consistency in performance evaluation. The use of quantization helps in reducing the model size and computational load while maintaining performance accuracy.
Benchmarks.
Our evaluation utilizes three multi-image benchmarks: MileBench <cit.> for assessing multimodal long-context scenario performance, and Video-MME <cit.> along with MVBench <cit.> for video analysis capabilities. Detailed descriptions of these benchmarks are available in Appendix <ref>.
Baselines.
We compare our model against four commercial models: GPT-4V[] <cit.>, GPT-4o[], Gemini-1.5-Pro[] <cit.>, and Claude3-Opus[],
as well as five open-source models: Phi-3-Vision[], OmChat <cit.>, LongVILA <cit.>, Video-LLaMA-2 <cit.> and VideoChat2 <cit.>.
§.§ Main Results
As shown in Table <ref>, LongLLaVA demonstrates superior performance among open-source models on MileBench, even surpassing Claude3-Opus, and particularly excels in retrieval tasks. This highlights LongLLaVA's impressive capabilities in handling multi-image tasks. Notably, LongLLaVA's effectiveness is further underscored by its performance on video benchmarks such as Video-MME and MVBench. It shows exceptional results, especially in tasks involving medium to long-length videos, outperforming traditional video models like Video-LLaMA2 and VideoChat2.
Remarkably, despite achieving these impressive results, LongLLaVA operates with an order of magnitude fewer FLOPs compared to other models. This efficiency in computational resources not only underscores LongLLaVA's advanced performance but also its optimization in resource management. These results reflect a significant advancement in the research community's efforts to close the performance gap with commercial models.
§.§ Diagnostic Evaluation of Long-Context MLLMs
Considering that former evaluations cannot adequately capture the abilities of MLLMs over long contexts, we employ a new diagnostic evaluation set, VNBench <cit.>, to further analyze the atomic capabilities of models in long contexts. VNBench is a benchmark construction framework based on synthetic video generation, encompassing tasks such as retrieval, ordering, and counting.
The results, as presented in Table <ref>, indicate that LongLLaVA exhibits performance that is on par with leading closed-source models in tasks such as cross-context retrieval, ordering, and technical capabilities, even outperforms GPT-4V. Among open-source models, LongLLaVA also shows its superior performance. This positions LongLLaVA as a prominent contender in the field, demonstrating its advanced capabilities in managing and interpreting long contexts.
§.§ Ablation Study
As shown in Table <ref>, significant improvements were observed across all evaluation sets when using the hybrid LLM architecture, Jamba, with identical data and model parameters, demonstrating its potential in multimodal scenarios. For token compression, we choose the 2D pooling, which significantly reduces computational load while keeping performance degradation within acceptable limits (less than 2.2%). Compared to 1D pooling, the 2D pooling method, which pools along the height and width directions to obtain a 12x12 token arrangement, yields better results (0.1∼1.5 improvement). Regarding data construction, after training on our single-image data, the model achieved a 1.5% accuracy improvement on SEEDBench and 12.3% on MileBench. Subsequent multi-image training led to a further 7.4% increase on MileBench, validating the dataset construction's effectiveness.
§ MORE ANALYSIS
In this section, we conduct further analysis to understand the inner workings and multimodal long-context capability of LongLLaVA.
§.§ On the Motivation for the Hybrid Architecture.
We explore the strengths and weaknesses of different architectures in terms of ICL capabilities and inference efficiency, highlighting the balanced advantages of multimodal hybrid architectures that combine the strengths of both. For Mamba, we select Cobra <cit.>, which, to the best of our knowledge, is the first and only multimodal architecture to use Mamba as its LLM within the LLaVA framework. For Transformer, we choose the 13B parameter LLaVA-1.6, which has inference parameters consistent with LongLLaVA, to enable a more accurate efficiency comparison.
ICL Analysis. We evaluate the performance on the Matching Image task from VL-ICL benchmark <cit.> for multi-modal in-context learning. This task's inputs contain an image pair x={x1, x2}, and output y
indicates whether a specific relation r holds between them. MLLMs are required to learn the relation
from examples. As shown in the Table <ref>, both Hybrid and Transformer architectures exhibit rapid performance improvements with the increase in examples, whereas the Mamba architecture shows a slower improvement, confirming its ICL shortcomings.
Efficiency Analysis. We focus on three aspects: Prefill Time (first inference latency), Throughput (next tokens per second), and Memory Usage. We fix the input text length at 100K and measure the time and maximum memory usage for generating outputs of 1 token and 1000 tokens. Throughput is calculated as (1000-1)/(time_1000-time_1). To better simulate real application scenarios, the Transformer and Hybrid architectures are evaluated using the vLLM framework <cit.> with quantization <cit.>. As shown in Table <ref>, the Mamba architecture has the fastest Prefill Time and the highest Throughput. The Hybrid architecture achieves 2.5 times the Throughput, 75% of the Prefill Time, and reduced memory usage compared to the Transformer architecture with similar inference parameters.
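The measurement protocol reduces to two timed generation calls; a small harness of the kind sketched below (with a dummy `generate` stand-in so it runs as-is) implements the quoted formula.

```python
import time

def measure(generate, prompt, long_len=1000):
    """Prefill time from a 1-token call; throughput from the remaining long_len-1 tokens."""
    t0 = time.perf_counter()
    generate(prompt, max_new_tokens=1)
    time_1 = time.perf_counter() - t0
    t0 = time.perf_counter()
    generate(prompt, max_new_tokens=long_len)
    time_long = time.perf_counter() - t0
    return time_1, (long_len - 1) / (time_long - time_1)

# dummy model call so the sketch is runnable; replace with the real inference call
prefill, throughput = measure(lambda p, max_new_tokens: time.sleep(0.001 * max_new_tokens),
                              "dummy prompt")
print(f"prefill {prefill:.3f} s, throughput {throughput:.0f} tokens/s")
```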
§.§ Scaling Law of the Image Number
With more images processed, it could support more image patches for high-resolution image understanding and more video frames for video understanding. To explore the impact of increasing the number of sub-images and video frames, we evaluate LongLLaVA on the benchmarks V* Bench <cit.> and Video-MME <cit.> respectively.
Scale up Number of SubImages. V* Bench evaluates a model's ability to locate small objects within large images. As shown in Figure <ref>, increasing the number of sub-images initially improves model performance significantly, indicating better understanding of image details. However, we also find that further increasing the number of sub-images slightly degraded the performance, suggesting that an excessive number of sub-images may interfere with performance on this task.
Scale up Number of Frames. Video-MME <cit.> is a benchmark that tests a model's ability to extract information from videos. We can see from Figure <ref> that as the number of sampled frames increases, the model's performance on the benchmark improves significantly, reaching its peak when 256 frames are extracted. This indicates that the model can effectively understand and utilize the information contained in the additionally sampled frames to provide better responses.
§ FURTHER SCALING UP THE IMAGE NUMBER TO 1000
Using the V-NIAH evaluation framework proposed in LongVA <cit.>, we conduct a needle-in-the-haystack test to evaluate the model’s performance. Given the model’s training sequence length limit of 40,960 tokens, we apply token pooling techniques to reduce the original token count from 144 to 36. This adjustment allows us to test the model’s ability to retrieve relevant information from a large dataset efficiently. As shown in Figure <ref>, LongLLaVA achieves nearly 100% retrieval accuracy on a set of 1000 images without requiring additional training.
However, when we increase the number of images in the test beyond 1,000, we observe a decline in retrieval accuracy. This drop in performance may be due to exceeding the model's training sequence length, which potentially affects its ability to maintain accuracy with more images. In future work, we will extend the training sequence length to 140,000 tokens, the limit for single-card inference with LongLLaVA, to further unlock the model's potential.
§ CONCLUSION
In this study, we introduce LongLLaVA (Long Context Large Language and Visual Assistant), an innovative hybrid architecture model that excels in long-context multi-modal understanding. The model integrates Mamba and Transformer blocks, leveraging temporal and spatial dependencies between multiple images to construct data, and employs a progressive training strategy. LongLLaVA demonstrates competitive performance across various benchmarks while ensuring efficiency, setting a new standard for long-context Multi-modal Large Language Models (MLLMs).
§ APPENDIX
§ DETAILS OF BENCHMARKS
Single-image Benchmarks.
We select seven commonly used evaluations to assess the model's single-image understanding capabilities. These include:
* GQA <cit.>: A benchmark for real-world visual reasoning and compositional question answering.
* MME <cit.>: A comprehensive benchmark focused on perception and cognition, from which we use the perception component.
* MM-Vet <cit.>: Examines six core visual-linguistic (VL) capabilities and sixteen integrations derived from these capabilities.
* ScienceQA <cit.>: Consists of 4,210 questions across various science topics, with detailed annotations.
* SEED-Bench-v1 <cit.>: Evaluates comprehension across twelve dimensions in both image and video modalities; we use the image set.
* MMBench <cit.>: A systematically-designed benchmark across twenty ability dimensions.
* MMMU <cit.>: Tests multi-modal models on multidisciplinary tasks requiring university-level knowledge, covering 183 subfields and 30 types of images.
Multi-image Benchmarks.
To explore multi-image capabilities, we utilized:
* MileBench <cit.>: Assesses long-context scenario performance, focusing on Temporal, Semantic, and Information Retrieval (IR) components.
* Video-MME <cit.>: Covers 30 sub-fields to evaluate video analysis capabilities. We analyze 128 frames extracted uniformly from each video, independent of subtitles.
* MVBench <cit.>: Addresses 20 challenging video tasks that are not effectively solved with a single frame.
§ DETAILS OF SINGLE-IMAGE EVALUATION
The single-image evaluation aims to explore the model's fundamental capabilities and the impact of extended long-context training on single-image understanding.
§.§ Benchmarks
We utilized a series of benchmarks, including GQA <cit.>, MME <cit.>, MM-Vet <cit.>, ScienceQA <cit.>, SEED-Bench-v1 <cit.>, MMBench <cit.>, and MMMU <cit.>. These benchmarks assess various aspects of visual understanding and cognitive processing within a single-image context.
§.§ Comparison Models
For a comprehensive comparison, we selected three commercial models for the single-image evaluation: GPT-4V[] <cit.>, Gemini-1.5[] <cit.>, and Claude3-Opus[].
Additionally, we included five open-source models to broaden the scope of evaluation: LLaVA-1.5-13B <cit.>, LLaVA-1.6-13B <cit.>, Phi-3-Vision-4.2B[] and OmChat-8B <cit.>.
§.§ Results Analysis
As shown in Table <ref>, LongLLaVA (single image) generally outperforms LLaVA-1.6-13B, despite both models having the same inference parameter size. This advantage is particularly notable in the MMMU benchmarks, highlighting LongLLaVA (single image)'s strengths in handling comprehensive knowledge-based questions. Although LongLLaVA (single image)'s performance is slightly lower compared to some recently emerged high-performing models, it still demonstrates the potential of hybrid architectures in multi-model scenarios. To ensure complete reproducibility of our results, we only focus on four representative public datasets. Additionally, we find that LongLLaVA tends to underperform relative to LongLLaVA (single image). Addressing this issue may require incorporating more single-image data during the Multi-image Instruction-tuning phase.
|
http://arxiv.org/abs/2409.03493v1 | 20240905130025 | Coupled unidirectional chaotic microwave graphs | [
"Omer Farooq",
"Afshin Akhshani",
"Michał Ławniczak",
"Małgorzata Białous",
"Leszek Sirko"
] | quant-ph | [
"quant-ph"
] |
Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warszawa, Poland
§ ABSTRACT
We investigate experimentally the undirected open microwave network Γ with internal absorption composed of two coupled directed halves, unidirectional networks Γ_+ and Γ_-, corresponding to two possible directions of motion on their edges. The two-port scattering matrix of the network Γ is measured, and the spectral statistics and the elastic enhancement factor of the network are evaluated. The comparison of the number of experimental resonances with the theoretical one predicted by Weyl's law shows that, within the experimental resolution, the resonances are doubly degenerate. This conclusion was also corroborated by numerical calculations. Though the network is characterized by time-reversal symmetry, the missing-level spectral statistics and the elastic enhancement factor are rather close to the Gaussian unitary ensemble predictions of random matrix theory. We used numerical calculations for the open non-dissipative quantum graph possessing the same structure as the microwave network Γ to investigate the doublet structures in the spectrum which otherwise would not be experimentally resolved. We show that the doublet size distribution is close to the Poisson distribution.
03.65.Nk,05.45.Mt
Coupled unidirectional chaotic microwave graphs
Omer Farooq, Afshin Akhshani, Michał Ławniczak, Małgorzata Białous, and Leszek Sirko
September 9, 2024
========================================================================================
§ INTRODUCTION
From a mathematical point of view, a quantum graph is a one-dimensional complex system with the Laplace operator L(Γ) = -d^2/dx^2 defined in the Hilbert space of square-integrable functions <cit.>. The concept of quantum graphs was introduced by Pauling <cit.> to model organic molecules. Later, quantum graphs were applied in modelling and studying a large variety of different systems and theories, such as quantum chaos, dynamical systems theory, photonic crystals, superconductivity theory, microelectronics, etc. <cit.>.
A quantum graph consists of one-dimensional edges e_i which are connected at the vertices v_i. The propagation of a wave along an edge of the graph is described by the one-dimensional Schrödinger equation. The boundary conditions are implemented on the wave functions entering and leaving the vertices. Commonly the Neumann (N) and Dirichlet (D) vertex boundary conditions are applied. The Neumann boundary condition imposes the continuity of waves propagating in the edges meeting at the vertex v_i and vanishing of the sum of outgoing derivatives at v_i. The Dirichlet boundary condition demands vanishing of the waves at the vertex.
According to the Bohigas-Giannoni-Schmit conjecture <cit.>, the spectral properties of quantum systems with classically chaotic dynamics can be modelled by appropriate Gaussian ensembles of random matrix theory (RMT). In this approach three main symmetry classes are distinguished: the Gaussian orthogonal ensemble (GOE) and the Gaussian symplectic ensemble (GSE) with time-reversal invariance (𝒯-invariance), characterized respectively by the symmetry indices β=1 and β=4, and the Gaussian unitary ensemble (GUE) with broken time-reversal invariance, β=2. The Gaussian unitary ensemble characterizes chaotic systems with any spin, while the Gaussian orthogonal and symplectic ensembles describe quantum and wave-dynamical chaos in systems with integer and half-integer spins, respectively.
The experimental studies of complex quantum systems are in general very complicated and challenging. This problem, for a wide class of such systems, has been effectively resolved with the help of microwave networks. The one-to-one equivalence of the stationary Schrödinger equation describing quantum graphs and the telegraph equation describing microwave networks allows to simulate quantum graphs through the use of microwave networks <cit.>.
A unique versatility of microwave networks as wave simulators stems from their being the only systems which allow for simulation of quantum graphs whose properties are described by all three symmetry classes GOE <cit.>, GUE <cit.> and GSE <cit.> in the framework of RMT. The GSE systems can only be experimentally investigated using microwave networks.
The other complex quantum systems can be simulated by microwave flat billiards <cit.> and atoms excited in strong microwave fields <cit.>.
Quantum graphs with GUE properties can be simulated experimentally by microwave networks with microwave circulators <cit.>.
Recently, Akila and Gutkin <cit.> have theoretically and numerically considered an undirected quantum graph Γ composed of two unidirectional ones Γ_+ and Γ_- in which the nearest-neighbor spacing distribution of the eigenvalues is close to GUE statistics. An experimental realization of a single unidirectional graph was recently presented in Ref. <cit.>. The direction of the wave propagating through the unidirectional network was controlled by applying microwave hybrid couplers and isolators <cit.>. Though that paper was mainly focused on the spectral statistics of the unidirectional network, some other characteristics of the network, such as correlation functions and the distribution of the reflection amplitude, were also analyzed.
In this article we present the results of an experimental study of the coupled unidirectional systems Γ_+ and Γ_- corresponding to the networks with two opposite directions of wave motion on their edges which together form the undirected microwave network Γ. The network Γ is open and characterized by internal absorption. The two-port scattering matrices of different realizations of the network are measured to evaluate the spectral statistics, the reflection coefficient, the imaginary part of the Wigner reaction matrix, and the elastic enhancement factor of the network.
The comparison of the number of experimentally observed resonances with the theoretical one predicted by Weyl's law shows that only approximately half of the resonances have been experimentally identified. Because, for symmetry reasons, the spectra of the graphs Γ_+ and Γ_- are nearly degenerate, the resonances should be doubly degenerate within the spectral resolution of the measurements. This conclusion was also corroborated by the numerical calculations which will be discussed in detail in Subsection C of the article.
Though the networks are characterized by time-reversal symmetry, their missing-level spectral statistics and the enhancement factor follow the Gaussian unitary ensemble predictions more closely than the GOE ones. We performed numerical calculations for open quantum graphs simulating microwave networks with no internal absorption to investigate the doublets which were not experimentally resolved. We show that the doublet size distribution is close to the Poisson distribution.
One should point out that previous experimental studies in which microwave simulators were applied were devoted to chaotic networks whose level-spacing statistics belong to either the GOE, GUE, or GSE class. In this article, networks and graphs characterized by a structure of nearly degenerate doublets are experimentally and numerically studied for the first time.
§ UNIDIRECTIONAL QUANTUM GRAPHS
In Ref. <cit.> the undirected quantum graph Γ composed of two unidirectional graphs Γ_+ and Γ_- has been theoretically and numerically considered.
In this realization of the unidirectional graphs the following structure of the vertex scattering matrices σ̂_i has been proposed
σ̂_i=[[ 0̂ Û_i; Û_i^† 0̂; ]] Û_i Û_i^†= Û_i^†Û_i=1,
where 0̂ is the zero matrix with all the entries equal zero and Û_i is a square unitary matrix.
Due to the off-diagonal structure of σ̂_i the transition from the graph Γ_+ to Γ_- and vice versa is impossible and dynamics on Γ_+ and Γ_- are completely decoupled. The splitting of Γ into Γ_+ and Γ_- is only possible if the vertices have even degree, e.g., in Ref. <cit.> the vertices σ̂_i with the valency v_i=4 have been considered.
The theoretical investigations of unidirectional quantum graphs <cit.> dealt only with closed non-dissipative systems. However, real experimental systems are open and characterized by internal absorption. In this work we use microwave networks simulating quantum graphs to investigate properties of the coupled unidirectional graphs Γ_+ and Γ_-. The internal coupling of the unidirectional graphs was enforced by the T-junctions, vertices with the valency v_T=3, which were introduced to couple the microwave analyzer via the external leads to the investigated network. Details of the experimental setup will be given in the next section.
§ COUPLED UNIDIRECTIONAL MICROWAVE NETWORKS
The properties of the coupled unidirectional quantum graphs were investigated experimentally using microwave networks <cit.>. The scheme of an undirected microwave network simulating an undirected quantum graph Γ composed of two coupled unidirectional graphs Γ_+ and Γ_- is shown in Fig. 1.
The network is constructed of SMA microwave cables and microwave joints that act as edges and vertices of the simulated quantum graph. A microwave cable consists of outer and inner conductors of radius
r_1=0.15 cm and r_2=0.05 cm, respectively. The separation between two conductors is filled with Teflon having
the dielectric constant ϵ= 2.06. Five microwave hybrid couplers (RF-Lambda RFHB02G08GPI), vertices σ̂_i with v_i=4, are used to obtain the undirected network Γ
with the coupled unidirectional networks Γ_+ and Γ_-, denoted in Fig. 1 by red and blue arrows, respectively.
The T-junctions play a double role. They couple the network to the measuring system via HP 85133-616 and HP 85133-617 flexible microwave cables (leads L^∞_1 and L^∞_2) and additionally, because they are undirected, they couple the unidirectional networks Γ_+ and Γ_- with each other.
The properties of the T-junction with Neumann boundary conditions are described by its scattering matrix
σ̂_T= 1/3[[ -1 2 2; 2 -1 2; 2 2 -1; ]].
The coupling between the unidirectional networks Γ_+ and Γ_- is possible because of backscattering, represented by the diagonal elements in σ̂_T scattering matrix.
In the case of unidirectional vertices σ̂_i (couplers RF-Lambda RFHB02G08GPI) their scattering matrices in the operating frequency range ν∈ [2,8] GHz are the following
σ̂_i=1/√(2)[[ 0 0 1 1; 0 0 -1 1; 1 -1 0 0; 1 1 0 0; ]].
The diagonal elements of σ̂_i matrices are zeros preventing from backscattering and therefore also from coupling of the unidirectional networks Γ_+ and Γ_-. The scattering matrix σ̂_i has a form of the scattering matrix presented in Definition (<ref>) with the unitary matrix Û_i defined as follows
Û_i= 1/√(2)[[ 1 1; -1 1; ]].
In order to keep a one-to-one quantum-microwave vertex analogy the microwave vertex scattering matrices σ̂_v(ν) and σ̂_v(ν_0) at frequencies ν and ν_0 for Neumann boundary conditions <cit.> should be related by the equation <cit.>:
σ̂_v(ν) = [(ν +ν_0) σ̂_v(ν_0) + (ν-ν_0)Î][(ν+ν_0)Î + (ν-ν_0) σ̂_v(ν_0)]^-1.
Here, the matrix Î denotes the identity matrix of the dimension of the vertex scattering matrices σ̂_v(ν) and σ̂_v(ν_0).
It can be easily checked that for the components of the microwave network presented in Fig. 1, namely microwave T-junctions and couplers, the scattering matrices σ̂_T and σ̂_i are unitary and Hermitian, fulfilling Eq. (<ref>).
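Both properties are easy to check numerically; the short sketch below builds the two matrices, verifies unitarity and Hermiticity, and confirms that Eq. (<ref>) then leaves the vertex scattering matrix frequency independent.

```python
import numpy as np

sigma_T = (1 / 3) * np.array([[-1, 2, 2],
                              [2, -1, 2],
                              [2, 2, -1]], dtype=complex)
sigma_c = (1 / np.sqrt(2)) * np.array([[0, 0, 1, 1],
                                       [0, 0, -1, 1],
                                       [1, -1, 0, 0],
                                       [1, 1, 0, 0]], dtype=complex)

def sigma_of_nu(sigma0, nu, nu0):
    """Frequency dependence of a Neumann-type vertex scattering matrix."""
    I = np.eye(sigma0.shape[0], dtype=complex)
    num = (nu + nu0) * sigma0 + (nu - nu0) * I
    den = (nu + nu0) * I + (nu - nu0) * sigma0
    return num @ np.linalg.inv(den)

for name, s in [("T-junction", sigma_T), ("coupler", sigma_c)]:
    unitary = np.allclose(s @ s.conj().T, np.eye(s.shape[0]))
    hermitian = np.allclose(s, s.conj().T)
    frozen = np.allclose(sigma_of_nu(s, 8.0, 2.0), s)   # unchanged between 2 and 8 GHz
    print(name, unitary, hermitian, frozen)
```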
The lengths of edges of the quantum graph are equivalent to the optical lengths of the edges of the microwave network, i.e., l_opt = √(ϵ)l_ph, where l_ph is the physical length of a network edge. The total optical length L_tot of the network was 7.955 ± 0.012 m. The optical lengths of the edges of the network are the following: l_1 = 0.649 ± 0.001 m, l_2 = 0.788 ± 0.001 m, l_3 = 1.142 ± 0.001 m, l_4 = 0.382 ± 0.001 m, l_5 = 0.513 ± 0.001 m, l_6 = 0.435 ± 0.001 m, l_7 = 0.787 ± 0.001 m, l_8 = 0.480 ± 0.001 m, l_9 = 0.760 ± 0.001 m, l_10 = 0.897 ± 0.001 m, l_11 = 0.657 ± 0.001 m, l_12 = 0.465 ± 0.001 m.
In order to obtain an ensemble of coupled unidirectional networks Γ_+ and Γ_- the lengths of two bonds of the undirected Γ network were changed by using the phase shifters PS1 and PS2 in such a way that the total optical length L_tot of the network was kept constant. Due to the couplers' frequency characteristics, the experiment was performed within the frequency range ν∈ [2,8] GHz. In this interval, according to Weyl's law, one should expect ∼318 resonances in each spectrum; however, due to the doublet structure of very nearly degenerate resonances induced by the coupled unidirectional networks Γ_+ and Γ_-, in the experiment we observed at most half of them. In practice, in each of the applied 50 network realizations about 4% of the resonances (not resolved doublets) were not detected.
In Fig. 2 we show a photograph of the experimental setup. It consists of the undirected microwave network Γ connected via HP 85133-616 and HP 85133-617 flexible microwave cables to a vector network analyzer (VNA), Agilent E8364B, used to measure the two-port scattering matrix Ŝ(ν) of the network. The inset shows an example of the modulus of the diagonal scattering matrix element |S_11(ν)| of the network measured in the frequency range 5.24-5.74 GHz. Despite the coupling of the unidirectional networks Γ_+ and Γ_-, the experimental resonances (local minima in |S_11(ν)|, marked by vertical lines) remain doubly degenerate within the experimental resolution.
§.§ Spectral statistics of the undirected microwave network Γ
The spectral properties of the undirected microwave network Γ were investigated using the most common measures of the short- and long-range spectral correlations: the nearest-neighbor spacing distribution P(s) and the spectral rigidity Δ_3(L). In order to perform these analyses the resonance frequencies ν_i of the network were rescaled (unfolded) to eliminate system specific properties. Since experimental resonances are doubly degenerate this can be done using the Weyl's formula for the network with the total optical length L_tot'=L_tot/2. Then, the unfolded eigenvalues determined from the resonance frequencies ν_i are given by ϵ_i=L_totν_i/c, where c is the speed of light in the vacuum.
The nearest-neighbor spacing distribution (NNSD) P(s) describes the distribution of the spacings between adjacent eigenvalues s_i=ϵ_i+1-ϵ_i in terms of their mean value ⟨ s⟩, while the spectral rigidity Δ_3(L) corresponds to the least square deviation of the integrated spectral density of the unfolded ϵ_i from the straight line best fitting it in an interval of length L <cit.>.
The nearest-neighbor spacing distribution P(s) which takes into account the incompleteness of a level sequence (missing levels) is given by <cit.>
P(s)= ∑_n=0^∞(1-ϕ)^np(n,s/ϕ).
For complete sequences, ϕ = 1, P(s) = p(0,s), which for GUE systems is well approximated by the Wigner surmise:
P(s) = (32/π^2) s^2 exp(-(4/π) s^2).
For the fraction of observed levels ϕ< 1 the following expression was used
P(s) ≃ p(0,s/ϕ)+(1-ϕ)p(1,s/ϕ) + (1-ϕ)^2p(2,s/ϕ), where <cit.>
p(n,s/ϕ)= γ (s/ϕ)^μexp(-κ (s/ϕ)^2),
with μ =7, 14 for n= 1, 2, respectively, and γ and κ determined from the normalization conditions:
∫ p(n,s/ϕ) ds = ϕ, ∫ sp(n,s/ϕ) ds = ϕ^2 (n+1).
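For reference, the truncated missing-level NNSD can be evaluated numerically as in the sketch below (our own illustrative code, not from the paper). The choice μ = 2 for n = 0 reproduces the GUE Wigner surmise quoted above, and the constants γ and κ follow in closed form from the two normalization conditions.

import numpy as np
from scipy.special import gamma as Gamma

def p_n(x, n):
    # p(n, x) = gam * x^mu * exp(-kap * x^2), with mu = 2, 7, 14 for n = 0, 1, 2
    mu = {0: 2, 1: 7, 2: 14}[n]
    # fix kap and gam so that integral p = 1 and integral x p = n + 1
    kap = (Gamma((mu + 2) / 2) / (Gamma((mu + 1) / 2) * (n + 1))) ** 2
    gam = 2 * kap ** ((mu + 1) / 2) / Gamma((mu + 1) / 2)
    return gam * x ** mu * np.exp(-kap * x ** 2)

def P_missing(s, phi):
    # truncated missing-level NNSD for the fraction phi of observed levels
    return sum((1 - phi) ** n * p_n(s / phi, n) for n in range(3))

s = np.linspace(0.001, 6.0, 6000)
print(float(np.sum(P_missing(s, 0.96)) * (s[1] - s[0])))   # ~ 1 (truncation at n = 2)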
The spectral rigidity δ_3(L) in the case of ϕ< 1 <cit.> is given by
δ_3(L)= (1-ϕ)L/15+ ϕ^2Δ_3(L/ϕ),
where for ϕ = 1 the spectral rigidity Δ_3(L) is defined by
Δ_3(L)= L/15 - 1/(15L^4)∫_0^L(L-x)^3(2L^2-9xL-3x^2)Y_2(x)dx.
For GUE systems the two-point cluster function Y_2(x)=(sinπ x/π x)^2 <cit.>.
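A corresponding numerical sketch (again with illustrative names of our own) evaluates Δ_3(L) by quadrature and the missing-level δ_3(L) from it:

import numpy as np
from scipy.integrate import quad

def Y2_gue(x):
    # GUE two-point cluster function; np.sinc(x) = sin(pi x)/(pi x)
    return np.sinc(x) ** 2

def Delta3(L):
    integrand = lambda x: (L - x) ** 3 * (2 * L ** 2 - 9 * x * L - 3 * x ** 2) * Y2_gue(x)
    val, _ = quad(integrand, 0.0, L)
    return L / 15.0 - val / (15.0 * L ** 4)

def delta3(L, phi):
    # missing-level spectral rigidity
    return (1 - phi) * L / 15.0 + phi ** 2 * Delta3(L / phi)

for L in (5, 10, 20):
    print(L, Delta3(L), delta3(L, 0.96))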
The results for the discussed spectral measures are presented in Fig. 3. The NNSD and the spectral rigidity Δ_3(L) for the undirected microwave network Γ are displayed in the panels (a) and (b), respectively. In Fig. 3(a) the experimental NNSD obtained using 7488 level spacings is presented by the green histogram. The experimental results are compared with the theoretical ones based on random matrix theory (RMT) for complete series of resonances ϕ = 1 (GOE - black solid line, GUE - blue solid line) and the incomplete GUE one, with the fraction of observed levels ϕ=0.96, i.e., 4% of missing resonances (red broken line), respectively. Fig. 3(a) shows that the experimental NNSD is shifted towards larger values of the parameter s in relation to the GUE distributions. The numerical analysis of a single unidirectional graph presented in Ref. <cit.> showed that the departure of its spectral characteristics from the GUE predictions was caused by not sufficiently complex wave dynamics in this graph. Therefore, also in our case the observed spectral deviation may be attributed to not sufficiently complex wave dynamics in the coupled unidirectional graphs Γ_+ and Γ_-. In Fig. 3(b) the spectral rigidity Δ_3(L) for the microwave network Γ is presented by green circles. The experimental results are compared with the theoretical ones based on RMT for complete series of resonances ϕ = 1, GOE - black solid line, GUE - blue solid line, and the incomplete series ϕ=0.96 for GUE, red broken line, respectively. The inspection of the results reveals that the experimental results are close to the GUE missing level statistics with the fraction ϕ=0.96 of observed levels.
§.§ The elastic enhancement factor of the microwave network Γ
The measurement of the two-port scattering matrix Ŝ of the undirected network Γ allows for the evaluation of the elastic enhancement factor <cit.>
W_S=√(var(S_11) var(S_22))/var(S_12),
where, e.g., var(S_12) ≡⟨ |S_12|^2⟩ -|⟨ S_12⟩ |^2 stands for the variance of the matrix element S_12.
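In practice W_S is estimated from an ensemble of measured scattering matrices. A schematic implementation is sketched below; the complex arrays stand in for measured S-matrix elements and are purely synthetic.

import numpy as np

def enhancement_factor(S11, S12, S22):
    # W_S = sqrt(var(S11) var(S22)) / var(S12) over the ensemble
    var = lambda s: np.mean(np.abs(s) ** 2) - np.abs(np.mean(s)) ** 2
    return np.sqrt(var(S11) * var(S22)) / var(S12)

# toy stand-in data for measured S-matrix elements over 1000 realizations
rng = np.random.default_rng(0)
S11 = 0.5 + 0.1 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
S22 = 0.5 + 0.1 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
S12 = 0.1 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(enhancement_factor(S11, S12, S22))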
The diagonal elements of the scattering matrix Ŝ can be parameterized as
S_ii=√(R_i)e^iθ_i,
where R_i and θ_i are the reflection coefficient and the phase measured at the i^th port of the network.
The elastic enhancement factor W_S is parametrized by the dimensionless parameter γ=2πΓ_W /Δ_S, characterizing the absorption strength <cit.>, where Γ_W and Δ_S are the width of resonances and the mean level spacing, respectively. It is important to point out that system characteristics defined by the scattering matrix are not sensitive to missing levels.
It was established for GOE (β=1) and GUE (β=2) systems that the elastic enhancement factor for weak absorption γ≪ 1 approaches the limit of W_S=2/β +1, while in the case of strong absorption γ≫ 1 the limit is W_S=2/β.
Because the properties of the elastic enhancement factor W_S strongly depend on 𝒯-symmetry of the system it can be used as a sensitive measure of time invariance violation.
In the presence of absorption, the effective parameter γ can be evaluated using the distribution P(R) of the reflection coefficient R.
For systems without 𝒯-invariance (β=2), the
analytic expression for the distribution of the reflection
coefficient R is given by <cit.>
P(R)=[2/(1-R)^2] P_0((1+R)/(1-R)),
where P_0(x) is the probability distribution defined by
P_0(x) = (1/2)[A(α(x+1)/2)^(β/2) + B]exp(-α(x+1)/2),
where α=γβ/2, A=e^α-1 and B=1+α-e^α.
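The theoretical curves P(R) can be generated directly from these two expressions; the following sketch (β = 2 for GUE, variable names of our own) also checks that P(R) is normalized to unity on [0,1].

import numpy as np

def P0(x, gamma, beta=2):
    alpha = gamma * beta / 2.0
    A, B = np.exp(alpha) - 1.0, 1.0 + alpha - np.exp(alpha)
    return 0.5 * (A * (alpha * (x + 1) / 2.0) ** (beta / 2) + B) * np.exp(-alpha * (x + 1) / 2.0)

def P_R(R, gamma):
    # distribution of the reflection coefficient for a system without T-invariance
    return 2.0 / (1.0 - R) ** 2 * P0((1.0 + R) / (1.0 - R), gamma)

R = np.linspace(0.0, 0.995, 2000)
dR = R[1] - R[0]
for g in (3.6, 4.6, 6.4):
    print(g, float(np.sum(P_R(R, g)) * dR))   # normalization check, close to 1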
The probability distribution P_0(x) can also be applied for calculating the distribution P(v) of the imaginary part of the diagonal elements of the Wigner's K̂ matrix <cit.>
P(v)= [√(2)/(π v^3/2)] ∫^∞_0 dq P_0[q^2+1/2(v+1/v)].
The distribution P(v) is known in solid-state physics as the local density of states (LDoS) <cit.>.
For each realization of the network Γ the absorption strength γ=1/2∑_i=1^2γ_i was experimentally
evaluated by adjusting the theoretical mean reflection coefficient
⟨ R⟩ ^th = ∫ _0^1dRRP(R),
to the experimental one ⟨ R_i⟩
obtained after eliminating the direct
processes <cit.>. Here the index i=1,2 denotes the
port 1 or 2.
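A possible numerical realization of this fitting step, matching ⟨R⟩^th to a measured mean reflection coefficient by a root search in γ, is sketched below; the value 0.35 is a made-up placeholder for the experimental ⟨R_i⟩.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def P0(x, gamma, beta=2):
    alpha = gamma * beta / 2.0
    A, B = np.exp(alpha) - 1.0, 1.0 + alpha - np.exp(alpha)
    return 0.5 * (A * (alpha * (x + 1) / 2.0) ** (beta / 2) + B) * np.exp(-alpha * (x + 1) / 2.0)

def mean_R_theory(gamma):
    # <R>_th = integral_0^1 R P(R) dR with P(R) expressed through P_0
    integrand = lambda R: R * 2.0 / (1.0 - R) ** 2 * P0((1.0 + R) / (1.0 - R), gamma)
    val, _ = quad(integrand, 0.0, 1.0)
    return val

gamma_fit = brentq(lambda g: mean_R_theory(g) - 0.35, 0.1, 50.0)
print(f"fitted absorption strength gamma = {gamma_fit:.2f}")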
In Fig. 4 we show the experimental distributions P(R) of the reflection
coefficient R for the microwave network Γ at three values of the absorption strength γ =3.6 ± 0.6, 4.6 ± 0.3, and 6.4 ± 0.3. They are marked by black, red, and green open circles, respectively. The measurements were done in the frequency ranges ν∈ [2,4], [4,6], and [6,8] GHz, respectively, and were averaged over 500 microwave network realizations. The values of the absorption strength γ were assigned to the experimental curves by fitting the theoretical distributions P(R)
calculated from Eq. (<ref>) with the absorption coefficients
γ =3.6, 4.6, and 6.4, marked by black, red, and green solid lines,
respectively.
Fig. 5 shows the elastic enhancement factor W_S (black open circles) evaluated experimentally for the undirected microwave network Γ in the frequency range ν∈ [2,8] GHz. The experimental results were averaged in a 1 GHz window over 300 microwave network realizations. In this frequency range the averaged absorption strength parameter γ=4.9± 0.4.
The experimental results are compared to the expected theoretical values of W_S for GUE systems, which are marked by the red solid line. The experimental elastic enhancement factor is on average slightly higher than the theoretical one predicted for GUE systems. This discrepancy, too, is probably caused by not sufficiently complex wave dynamics in the microwave network Γ. The black broken lines in Fig. 5 show the lowest W_S=1 and the highest W_S=2 theoretical limits of the elastic enhancement factor predicted for GUE systems.
In Fig. 6 we show the experimental distribution P(v) of the imaginary part
of the diagonal elements of the Wigner's K̂
matrix for the microwave undirected network Γ at γ =4.9 (black open circles), averaged over 500 microwave network realizations, for the frequency range ν∈ [2,8] GHz. The experimental results are compared with the theoretical
distribution P(v) evaluated from Eq. (<ref>) for γ =4.9 (red solid line). The agreement between the experimental and theoretical results is good.
§.§ Numerical analysis of doublets properties
To investigate numerically the properties of doublets, which were not experimentally resolved, the microwave undirected network Γ was simulated in the calculations by the open, dissipationless quantum graph Γ.
The secular function ξ, whose zeros define the spectrum of the graph, was expressed by using the method of pseudo-orbits <cit.>
ξ= det[ Î_2N-L̂Ŝ_G],
where Î_2N is the 2N × 2N identity matrix, N is the number of internal edges of the graph, and L̂=diag[exp(ikl_1),...,exp(ikl_N),exp(ikl_1),...,exp(ikl_N)], where l_1,...,l_N are the lengths of the respective edges of the graph. The Ŝ_G matrix, called the bond-scattering matrix <cit.>, contains the scattering conditions at the graph vertices.
The full form of the Ŝ_G matrix is specified in the Appendix.
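Numerically, the procedure amounts to scanning the real wave number k for zeros of det[Î_2N − L̂(k)Ŝ_G]. The sketch below illustrates this with a small random unitary matrix and three placeholder edge lengths; for the actual graph the 24×24 bond-scattering matrix of the Appendix and the twelve optical lengths listed above would be used instead.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
lengths = np.array([0.649, 0.788, 1.142])                 # placeholder optical lengths (m)
dim = 2 * len(lengths)
A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
S_G, _ = np.linalg.qr(A)                                  # random unitary stand-in for S_G

def secular(k):
    # det[I - L(k) S_G] with L(k) = diag(exp(i k l_1), ..., exp(i k l_N), exp(i k l_1), ...)
    L = np.diag(np.exp(1j * k * np.concatenate([lengths, lengths])))
    return np.linalg.det(np.eye(dim) - L @ S_G)

ks = np.linspace(0.1, 60.0, 20000)
vals = np.abs([secular(k) for k in ks])
resonances = []
for i in range(1, len(ks) - 1):
    if vals[i] < vals[i - 1] and vals[i] < vals[i + 1]:   # local minimum on the grid
        res = minimize_scalar(lambda k: abs(secular(k)), bounds=(ks[i - 1], ks[i + 1]), method="bounded")
        if res.fun < 1e-2:                                # keep only true zeros of the secular function
            resonances.append(res.x)
print(f"{len(resonances)} resonances found for k < 60, first few: {np.round(resonances[:5], 3)}")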
The nearest-neighbor spacing distribution P(s) and the spectral rigidity Δ_3(L) of the graph Γ consisting of the coupled unidirectional graphs Γ_+ and Γ_- are shown in Fig. 7(a) and Fig. 7(b), respectively. In this case the unfolded eigenvalues were determined from the resonance frequencies ν_i applying the Weyl's formula ϵ_i=2L_totν_i/c, where L_tot is the total optical length of the graph. The numerical calculations were performed in the frequency range ν=[2,8] GHz and were averaged over 50 configurations of the graph Γ.
Because of the doublet structure of the spectra the nearest-neighbor spacing distribution P(s), prepared using 15850 level spacings (green histogram), displays a large peak at small values of the parameter s and is significantly different from the Poisson (red dotted-dashed line), GOE (black dotted line), and GUE (blue dashed line) distributions, respectively.
The inset in Fig. 7(a) shows an example of the spectrum of the undirected graph Γ (coupled unidirectional graphs Γ_+ and Γ_-) calculated in the frequency range 3.0-3.3 GHz. Because the sizes of the doublets are very small, between 0.3 and 3.9 MHz, they are not resolved within the experimental resolution (15 MHz) and are not observed experimentally.
The spectral rigidity Δ_3(L) of the graph Γ (green circles) presented in Fig. 7(b) due to the presence of doublets increases faster for small L than the Poisson one (red dotted-dashed line) and only for L>8 slowly saturates at the value Δ_3(L) ≈ 0.5 which is significantly higher than the GUE (blue dashed line) and GOE (black dotted line) predictions for the presented range of L ≤ 20.
In Fig. 8(a) and Fig. 8(b) we show the nearest-neighbor spacing distribution P(s) and the spectral rigidity Δ_3(L) of the graph Γ obtained under the assumption that the doublets are not resolved and are treated as singlet states. In this case, similarly to the experimental situation, we assumed that the unfolded resonances were determined from the Weyl's formula ϵ_i=L_totν_i/c.
In Fig. 8(a) the numerical distribution P(s) obtained for the graph Γ (green histogram) is compared with the distribution P(s) evaluated for the simplified, closed graph Γ', composed of 10 edges and 5 couplers (red histogram). Both distributions were made using 7900 level spacings. The graph Γ' was obtained from the graph Γ, presented in Fig. 1, by removing the two T-junctions. The quantum graph Γ' is characterized by a spectrum of exactly doubly degenerate states. The two above-mentioned distributions P(s) are compared with the Poisson (red dotted-dashed line), GOE (black dotted line), and GUE (blue dashed line) distributions, respectively. Both numerical distributions P(s) are close to the GUE distribution; however, they are slightly shifted towards larger values of the level spacing s. Moreover, the distribution P(s) of the simplified graph Γ' is more localized around the center of the GUE distribution than the one for the graph Γ, suggesting that the backscattering present at the T-junctions of the graph Γ causes some additional deviations from the GUE distribution.
The spectral rigidity Δ_3(L) presented in Fig. 8(b) shows a significant deviation from the GUE prediction. Our results corroborate the observation reported in Ref. <cit.> that the unidirectional graphs may not generate a wave dynamics of sufficient complexity to accurately reproduce RMT predictions.
In Fig. 9 we show the doublet size distribution P(Δ) (blue histogram) of the graph Γ consisting of the coupled unidirectional graphs Γ_+ and Γ_- (see Fig. 1). The doublet size was normalized to the mean value ⟨Δ⟩ =1. In the calculation of the distribution P(Δ) 7900 doublets were used.
The distribution P(Δ) is compared to the Poisson distribution P_Poisson = exp( -Δ) (red solid line). Fig. 9 demonstrates that the distribution P(Δ) is close to the Poisson one.
§ SUMMARY AND CONCLUSIONS
We investigated experimentally an undirected open microwave network Γ with internal absorption, composed of two coupled unidirectional networks Γ_+ and Γ_- corresponding to the two possible directions of motion on their edges. The two-port scattering matrices of the network were measured. The comparison of the number of experimental resonances with the theoretical one predicted by Weyl's law showed that the resonances are doubly degenerate. Though the networks are characterized by time reversal symmetry, their missing-level nearest-neighbor spacing distribution P(s) and spectral rigidity Δ_3(L) (ϕ=0.96) do not obey the GOE predictions. The missing-level NNSD P(s) resembles the GUE distribution shifted towards larger values of the mean level spacing s, while the missing-level spectral rigidity Δ_3(L) is in good agreement with the missing-level prediction for GUE. Furthermore, the distributions of the reflection coefficient and the imaginary part of the Wigner's reaction matrix as well as the enhancement factor of the networks were evaluated. The aforementioned characteristics of chaotic systems are defined by the scattering matrix of the network; therefore, they are not sensitive to missing levels. The obtained results are close to the GUE prediction, though the experimental enhancement factor appears to be slightly above it. We used numerical calculations for open quantum graphs simulating microwave networks with no internal absorption to investigate their spectral statistics and the doublets which were not experimentally resolved. The numerically obtained spectral characteristics show significant deviations from the GUE predictions. We show that the doublet size distribution is close to the Poisson distribution. The discrepancies reported in this paper between the experimental results and the GUE ones, as well as between the numerical results simulating the experimental ones and the GUE predictions, may be associated with the presence of backscattering and insufficiently complex wave dynamics on the coupled unidirectional networks and graphs Γ_+ and Γ_-.
§ ACKNOWLEDGMENTS
This work was supported in part by the National Science Centre, Poland, Grant No. 2018/30/Q/ST2/00324. We are grateful to S. Bauch for critical reading of the manuscript.
§ APPENDIX
Numerical calculations
The resonance conditions for the unidirectional graphs can be expressed using the method of pseudo-orbits <cit.>, by solving the equation:
det[ Î_2N-L̂Ŝ_G]=0,
where Î_2N is the 2N × 2N identity matrix, N is the number of internal edges of the graph (N=12), and L̂=diag[exp(ikl_1),...,exp(ikl_12),exp(ikl_1),...,exp(ikl_12)], where l_1,...,l_12 are the lengths of the respective arms, according to Fig. 1. The Ŝ_G matrix, called the bond-scattering matrix <cit.>, contains the scattering conditions at the network vertices (see Fig. 1), i.e., five vertices with the valency v_i=4 and the Neumann boundary conditions, ensuring that the graph Γ contains two unidirectional graphs Γ_+ and Γ_-,
σ̂_i= 1/√(2)[[ 0 0 1 1; 0 0 -1 1; 1 -1 0 0; 1 1 0 0 ]],
and two T-junction vertices with the valency v_T=3 and with the Neumann boundary conditions:
σ̂_T= 1/3[[ -1 2 2; 2 -1 2; 2 2 -1; ]].
The backscattering introduced by the T-junctions (the presence of diagonal elements in σ̂_T) causes the resonances of the graphs Γ_+ and Γ_- to be not exactly doubly degenerate.
The matrix Ŝ_G has a form:
Ŝ_G=[[ 0 0 0 0 0 0 0 0 0 0 2/3 0 -1/3 0 0 0 0 0 0 0 0 0 0 0; 1/√(2) 0 0 0 0 0 0 1/√(2) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 1/√(2) 0 0 0 0 0 0 1/√(2) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 2/3 0 0 0 -1/3 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 1/√(2) 0 0 1/√(2) 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 -1/√(2) 1/√(2) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 1/√(2) 0 0 0 0 0 0 -1/√(2) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 1/√(2) 1/√(2) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 1/√(2) 1/√(2) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 -1/√(2) 0 0 0 0 0 0 1/√(2) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 1/√(2) -1/√(2) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 1/√(2) 0 0 1/√(2) 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 1/√(2) 0 0 0 0 1/√(2) 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1/√(2) 0 0 0 0 0 0 -1/√(2) 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1/√(2) 0 1/√(2) 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1/√(2) 0 1/√(2) 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1/√(2) 0 1/√(2) 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1/√(2) 0 -1/√(2) 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1/√(2) 0 0 0 0 0 0 1/√(2); 0 0 0 0 0 0 0 0 0 0 0 0 0 1/√(2) 0 0 0 0 -1/√(2) 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1/√(2) 0 0 0 0 0 0 1/√(2) 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1/√(2) 0 0 0 0 0 0 1/√(2); 0 0 0 0 0 0 0 0 0 0 -1/3 0 2/3 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 -1/3 0 0 0 2/3 0 0 0 0 0 0 0 0 ]]
§ REFERENCES
|
http://arxiv.org/abs/2409.03297v1 | 20240905070659 | Alpha helices are more evolutionarily robust to environmental perturbations than beta sheets: Bayesian learning and statistical mechanics to protein evolution | [
"Tomoei Takahashi",
"George Chikenji",
"Kei Tokita",
"Yoshiyuki Kabashima"
] | physics.bio-ph | [
"physics.bio-ph"
] |
[email protected]
Institute for Physics of Intelligence, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan.
Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8603, Japan.
Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan.
Institute for Physics of Intelligence, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan.
Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Trans-Scale Quantum Science Institute, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
§ ABSTRACT
How typical elements that shape organisms, such as protein secondary structures, have evolved, or how evolutionarily susceptible/resistant they are to environmental changes, are significant issues in evolutionary biology, structural biology, and biophysics. According to Darwinian evolution, natural selection and genetic mutations are the primary drivers of biological evolution. However, the concept of “robustness of the phenotype to environmental perturbations across successive generations," which seems crucial from the perspective of natural selection, has not been formalized or analyzed. In this study, through Bayesian learning and statistical mechanics, we formalize such robustness as the stability of the free energy of the space of amino acid sequences that can design a particular protein structure against perturbations of the chemical potential of the water surrounding the protein. This evolutionary stability is defined as a decreasing function of a quantity, analogous to the susceptibility in the statistical mechanics of magnetic bodies, that is specific to the amino acid sequence of a protein. Consequently, in a two-dimensional square lattice protein model composed of 36 residues, we found that as we increase the stability of the free energy against perturbations in environmental conditions, the structural space shows a steep step-like reduction. Furthermore, lattice protein structures with higher stability against perturbations in environmental conditions tend to have a higher proportion of α-helices and a lower proportion of β-sheets. The latter result shows that protein structures rich in α-helices are more robust to environmental perturbations through successive generations than those rich in β-sheets.
Alpha helices are more evolutionarily robust to environmental perturbations
than beta sheets: Bayesian learning and statistical mechanics to protein evolution
Yoshiyuki Kabashima
September 9, 2024
§ INTRODUCTION
Understanding whether fundamental elements that shape organisms are evolutionarily robust or prone to change is crucial for addressing why the phenotypes of existing organisms are so limited compared to the physically possible patterns, or for resolving significant issues such as predicting evolution.
Many studies on protein evolution suggest that mutational robustness may have driven the evolution of characteristic protein structures<cit.>. It has been shown that there is a correlation between mutational robustness and the geometric symmetry of protein structures<cit.>, that secondary structures are more robust to mutations than intrinsic disorder structures<cit.>, and that mutational robustness and the structural modularity of proteins (i.e., the proportion of amino acid residues forming secondary structures) contribute to the evolvability of proteins<cit.>. Furthermore, the low algorithmic complexity of gene sequences achieves symmetric protein structures<cit.>. It is also demonstrated that mutational robustness is well-compatible with the functional sensitivity of proteins<cit.>, and there is a correlation between the dynamics of protein structures and their evolvability<cit.>. These various lines of research strongly suggest that mutational robustness (and additionally, the low algorithmic complexity of genes) drives the formation of protein secondary structures and the evolvability of proteins.
In addressing the evolution problem, a statistical mechanics approach to abstract models has also produced significant results. A study using the spin glass model has shown that, in evolved organisms, plasticity due to environmental fluctuations and plasticity due to mutations are strongly correlated, leading to a dimensional reduction in the phenotypic space<cit.>. Studies on gene regulatory network (GRN) models have revealed that GRNs with high fitness exhibit high mutational robustness. This tendency is stronger in GRNs obtained through evolutionary simulations than those obtained through efficient sampling methods for exploring high-fitness GRNs<cit.>. It has also been found that both mutational robustness and developmental robustness drive the evolution of GRNs<cit.>. These studies suggest that biological evolution possesses mechanisms different from typical optimization processes, enabling the selection of GRNs (phenotypes) with high mutational robustness and noise during development. These findings imply the reduction of phenotypic space.
The above results have significantly advanced our understanding of the relationship between mutational robustness, developmental robustness, and evolvability in proteins (or life in general). However, these concepts of robustness pertain solely to the stability of an individual (or phenotype) against various perturbations over its lifetime. Since evolution involves changes in genetic information and associated traits over generations, considering the stability in terms of how well a trait (phenotype) adapts to or is maladapted to its environment, how this influences the genotype that produces the phenotype, and how these influences alter traits in subsequent generations, helps understand evolution.
In this study, we define the free energy of the space of amino acid sequences (i.e., genetic information) that can design a particular protein structure, utilizing the framework of Bayesian learning. We discuss the stability of this free energy against perturbations in environmental conditions surrounding the protein. The stability of the free energy is determined for a randomly generated two-dimensional (2D) lattice protein model, and we elucidate the relationship between the structural features of the protein and the stability of its free energy.
We use lattice proteins as model proteins<cit.>. Random structural patterns that have not evolved do not exist in protein structure databases such as the Protein Data Bank (PDB)<cit.>. Therefore, artificial models like lattice proteins are more suitable for this study. We show the definition of secondary structures in the 2D lattice proteins of this study in <ref>. In the analysis of secondary structures of 2D lattice proteins, β-sheets decrease as the designability (the number of amino acid sequences that fold into a given protein structure) increases<cit.>. Lattice proteins are also effective for analyzing the free energy landscape of protein folding<cit.>, the phenomenon of cold denaturation where proteins denature at low temperatures<cit.>, and the impact of amino acid residue mutations on the native structure of proteins<cit.>. Additionally, lattice models are used to analyze the folding energy landscape of RNA<cit.>. Therefore, given a valid theory for protein evolution, lattice proteins can be expected to reveal qualitatively accurate behavior in analyzing the environmental robustness of protein secondary structures across successive generations, which is our objective in this study.
§ MODEL AND METHOD
§.§ Hamiltonian of lattice HP model with the water chemical potential
The lattice HP model places amino acids at lattice points, representing the protein structure as a self-avoiding walk on the lattice. A self-avoiding walk is a path that does not pass through the same point more than once on a lattice (or graph). The naturally occurring 20 types of amino acids are simplified into two types: hydrophobic (H), which repels water molecules, and polar (P), which attracts water molecules.
The structure of a protein, denoted as R, is represented by the set of coordinates r_i for each amino acid. For a protein with N amino acids, R = r_1, r_2, ⋯, r_N. The state of the i-th amino acid is denoted by σ_i, with σ_i = 1 (for hydrophobic) and 0 (for polar). The lattice HP model typically considers only the attractive interactions between hydrophobic amino acids. However, since proteins also interact with water molecules surrounding them, in this study, the Hamiltonian of a protein with structure R and sequence σ is expressed as follows, with μ representing the chemical potential of water near the protein surface:
H( R , σ; μ) = -∑_i<jσ_iσ_jΔ( r_i - r_j) - μ∑_i = 1^N (1 - σ_i),
where Δ( r_i - r_j) is a “contact function" that equals 1 when the i-th and j-th amino acids are spatially nearest neighbors but not consecutive in the sequence, and 0 otherwise. The second term represents the interaction of polar amino acids with surrounding water molecules.
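For concreteness, the Hamiltonian of Eq. (<ref>) can be evaluated directly from a list of lattice coordinates. The toy six-residue walk and sequence in the sketch below are illustrative and are not structures used in the paper.

import numpy as np

def hamiltonian(coords, sigma, mu):
    # H(R, sigma; mu) = - sum_{i<j} sigma_i sigma_j Delta(r_i - r_j) - mu * sum_i (1 - sigma_i)
    coords = [tuple(c) for c in coords]
    E = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 2, n):          # skip chain neighbours (Delta = 0 for consecutive residues)
            if abs(coords[i][0] - coords[j][0]) + abs(coords[i][1] - coords[j][1]) == 1:
                E -= sigma[i] * sigma[j]    # hydrophobic-hydrophobic contact
    E -= mu * sum(1 - s for s in sigma)     # hydration of polar residues
    return E

R = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2), (1, 2)]   # toy self-avoiding walk
sigma = [1, 0, 1, 1, 0, 1]                             # 1 = hydrophobic, 0 = polar
print(hamiltonian(R, sigma, mu=0.5))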
This Hamiltonian, proposed for the first time in our previous study<cit.>, includes a hydration term, as shown in Eq. (<ref>), which is crucial for exploring the environmental robustness of protein structures<cit.>. Additionally, Eq. (<ref>) can be seen as a simplified version of the protein energy in water, excluding interactions between water around the protein and bulk water, as shown in studies like<cit.>. It is important to note that μ is not a parameter representing the overall environment within an organism but rather the environment surrounding a specific protein.
The chemical potential of water surrounding a protein, μ, can be considered a parameter representing environmental conditions, as it depends on the state of bulk water (such as pH and pressure) and temperature. The strength of hydrogen bonds between hydrophilic amino acid residues and water molecules is generally influenced by temperature. Consequently, μ can be regarded as an environmental condition specific to the protein structure.
§.§ Bayesian learning framework
Bayesian learning is a framework for machine learning based on Bayesian statistics, in which one updates the prior probability (prior) of an event to a posterior probability (posterior) in light of observed data. Statistical models derived from Bayesian learning are often interpreted as probabilistic generative models of observational data. We utilize the properties of Bayesian learning to construct a probabilistic generative model for the phenotype of proteins, namely their native structures. The probabilistic generative model of protein structures described below was devised in our previous work as a method for protein design and has achieved a certain level of success in the context of lattice protein models<cit.>. The protein design problem is the inverse problem of protein structure prediction: it consists in determining the sequence of amino acids that will fold into a given protein structure <cit.>.
In our model, we consider the native structure R as the observed variable, the amino acid sequence σ as the latent variable, and the chemical potential of the surrounding water μ as the hyperparameter. In this context, Bayes' theorem is expressed as follows:
p(σ | R, μ) = p( R|σ, μ) p(σ|μ)/∑_σp( R|σ, μ) p(σ|μ)
Let β be the inverse temperature of the environment; the likelihood function p( R|σ, μ), the prior p(σ|μ), and the posterior p(σ | R, μ) are then respectively given as
p(R | σ, μ) = e^-β H(R , σ ; μ)/Z(σ ; β, μ),
p(σ | μ) = Z(σ; β_p, μ_p)/Ξ(β_p, μ_p),
p(σ| R, μ) = e^-β H(R, σ ; μ) /Y( R;β, μ).
The normalization constants of Eqs. (<ref>), (<ref>), and (<ref>) are as follows:
Z(σ ; β, μ) = ∑_ R e^-β H(R , σ ; μ),
Ξ(β_p, μ_p) = ∑_σ∑_ R e ^ - β_p H(R, σ; μ_p),
Y( R;β, μ) = ∑_σ e^-β H(R, σ ; μ).
We refer to Eqs. (<ref>), (<ref>), and (<ref>) as the structural partition function, grand partition function, and sequence partition function, respectively. Additionally, the inverse temperature and chemical potential in the prior Eq. (<ref>) may differ from those of protein folding and are thus denoted as β_p and μ_p, respectively.
An important point is that the posterior Eq.(<ref>) does not include the structural partition function Z(σ ; β, μ). This is because Z(σ ; β, μ) requires an exhaustive structural search of the Boltzmann factor, which is infeasible considering the infinite degrees of freedom of protein structures. The sum over sequences ∑_σ is much more manageable than the sum over structures ∑_ R.
The likelihood function p( R | σ, μ) represents the probability of a given structure R occurring for a given amino acid sequence σ. This is the probability that σ folds into structure R. More generally, it is the probability that a given genotype results in a given phenotype. Eq. (<ref>) asserts that the likelihood function p( R | σ, μ) is the Boltzmann distribution of structure R conditional on the amino acid sequence σ. This setting of the likelihood functions p( R | σ, μ) is based on Anfinsen's dogma<cit.>, which states that the state in which a protein adopts its native structure is a thermodynamic equilibrium determined by its amino acid sequence under physiological conditions.
The prior in Eq. (<ref>) is highly non-trivial. To briefly explain the background of our setting of the prior in Eq. (<ref>), it is based on the free energy of a protein with amino acid sequence σ,
F(σ; β_p, μ_p) = -1/β_plog Z(σ; β_p, μ_p)
which is proportional to the structural partition function Z(σ ; β_p, μ_p) included in it. Since low free energy implies a large partition function, the prior in Eq. (<ref>) can be interpreted as the hypothesis that "amino acid sequences with lower free energy under specific temperature β_p and chemical potential μ_p have evolved." We call this the hypothesis of sequence weights (HSW). HSW was first proposed in <cit.>. Amino acid sequences with high free energy Eq.(<ref>) tend to increase the Hamiltonian Eq. (<ref>) for many structures. The probability that such amino acid sequences form specific compact three-dimensional structures is extremely low. HSW is a hypothesis that preemptively excludes such sequences.
HSW is still an unverified hypothesis, but our previous studies<cit.> have shown that a protein design method assuming HSW exhibits high performance for the 2D lattice HP model with N ≤ 36, which we analyze in this study. Additionally, using the prior Eq. (<ref>) based on HSW allows us to cancel out the two partition functions Z(σ ; β, μ) and Ξ(β, μ) required for deriving the posterior, thereby avoiding the computational explosion associated with structural searches.
If β = β_p, μ = μ_p holds, the derivation of the posterior p(σ | R, μ) is then,
p(σ| R, μ) = p( R|σ, μ)p(σ | μ)/∑_σp( R|σ, μ)p(σ|μ)
= [e^-β H(R , σ ; μ)/Z(σ;β, μ)·Z(σ;β, μ)/Ξ(β, μ)] / ∑_σ[e^-β H(R , σ ; μ)/Z(σ;β, μ)·Z(σ;β, μ)/Ξ(β, μ)]
= [e^-β H(R , σ ; μ)/Ξ(β, μ)] / ∑_σ[e^-β H(R , σ ; μ)/Ξ(β, μ)]
= e^-β H(R , σ ; μ)/∑_σ e^-β H(R , σ;μ).
In going from Eq. (<ref>) to Eq. (<ref>), we used the fact that the grand partition function Ξ(β, μ) does not depend on the amino acid sequence σ, allowing it to be factored out of the sum ∑_σ. In the posterior p(σ | R, μ), the sequence space considered can be regarded, from an evolutionary perspective, as the set of typical sequences that realize a given structure R.
Eq. (<ref>) represents the free energy of a protein with a given amino acid sequence σ. Hence, it is natural to assume that the inverse temperature and chemical potential in this equation are equal to those in the likelihood function Eq. (<ref>), which represents the probability that the protein folds into a given structure R.
§.§ A free energy depends on a protein structure and its stability to the environmental perturbation
In this subsection, we define the free energy for a sequence space dependent on a particular protein structure R, and derive expressions demonstrating its stability. To facilitate this, we present the expression for the marginal likelihood p( R | μ) here. In Bayesian learning, the marginal likelihood represents the probability of observing the data given a value for the hyperparameter. For the lattice protein case under discussion, the marginal likelihood is
p( R | μ) = ∑_σp( R|σ, μ)p(σ | μ)
= ∑_σ[e^-β H(R , σ ; μ)/Z(σ; β, μ)]·[Z(σ; β_p, μ_p)/Ξ(β_p, μ_p)]
= Y( R; β, μ)/∑_ RY( R; β, μ),
where β = β_p and μ = μ_p hold. From Eq. (<ref>), the marginal likelihood, i.e., the probability of observing a protein structure R given the environmental conditions represented by the chemical potential of the surrounding water μ, is proportional to the sequence partition function Y( R; β, μ). Furthermore, the marginal likelihood expressed in Eq. (<ref>) is identical to the denominator of the posterior distribution p(σ| R, μ) when Ξ(β, μ) is not canceled out in the transition from Eq. (<ref>) to Eq. (<ref>). Therefore, the marginal likelihood p( R | μ) serves as the partition function for the posterior distribution p(σ| R, μ) as described in Eq. (<ref>). Consequently, we can consider the corresponding free energy as follows:
F( R, μ) = -1/βlog p( R | μ).
When a specific structure R is determined at a given inverse temperature β, the free energy F( R, μ) becomes a function solely of the environmental conditions μ. When F( R, μ) is minimized at a particular environmental condition μ = μ_EB—a condition equivalent to maximizing the marginal likelihood, which is referred to as Empirical Bayes estimation in the field of Bayesian learning—the stability of F( R, μ) around μ = μ_EB requires
∂^2/∂μ^2 F( R, μ) |_μ=μ_EB > 0.
Before we manipulate Eq. (<ref>), we define the two types of expected values appearing in the transformation of Eq. (<ref>). These are the average taken over the posterior distribution when β = β_p and μ = μ_p hold, and the average taken over the joint distribution, which is the product of the likelihood function and the prior. The joint distribution is given as follows:
p( R, σ | μ) = p( R | σ, μ)p(σ | μ)
= e^-β H( R, σ;μ)/∑_ R∑_σ e^-β H( R , σ;μ).
Thus, the joint distribution p( R, σ | μ) forms a Boltzmann distribution where both the structure R and the sequence σ serve as thermal variables. For any physical quantity X( R, σ) that is a function of the structure R and the sequence σ, the average taken over the posterior distribution p(σ | R, μ) and the average taken over the joint distribution p( R, σ | μ) are given as
⟨ X( R, σ) ⟩_| R := ∑_σ X( R, σ) p(σ| R, μ)
= ∑_σ X( R, σ) e^-β H( R , σ;μ)/∑_σ e^-β H( R , σ;μ),
⟨ X( R, σ) ⟩ := ∑_ R∑_σ X( R, σ) p( R, σ| μ)
= ∑_ R∑_σ X( R, σ) e^-β H( R , σ;μ)/∑_ R∑_σ e^-β H( R , σ;μ).
The notation used on the far left side of Eq. (<ref>) explicitly indicates that the quantity depends on a specific structure R. It is important to note that this does not represent an average over all possible R patterns.
To manipulate inequality (<ref>), we substitute Eq. (<ref>) into Eq. (<ref>), and then transform Eq. (<ref>) as
< (β∑_i=1^N(1-σ_i))^2>_| R - (β< ∑_i=1^N( 1 - σ_i)>_| R)^2 <
< ( β∑_i=1^N(1-σ_i))^2> - (< β∑_i=1^N( 1 - σ_i)>)^2.
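For completeness, the intermediate step is the standard cumulant relation obtained by differentiating the log-partition functions with respect to μ,
∂/∂μ log Y( R;β,μ) = β< ∑_i=1^N(1-σ_i)>_| R, ∂^2/∂μ^2 log Y( R;β,μ) = β^2[ < (∑_i=1^N(1-σ_i))^2>_| R - (< ∑_i=1^N(1-σ_i)>_| R)^2],
with the analogous relations holding for log∑_ R Y( R;β,μ) and the joint averages ⟨⋯⟩. Inserting these relations into ∂^2/∂μ^2 F( R,μ) = -(1/β)[∂^2/∂μ^2 log Y( R;β,μ) - ∂^2/∂μ^2 log∑_ R Y( R;β,μ)] > 0 reproduces the inequality above.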
Thus, Eq. (<ref>) results in an inequality between the variance of the number of hydrophilic amino acid residues calculated from the posterior and the variance calculated from the joint distribution. Furthermore, we obtain the following expression of Eq. (<ref>):
β∑_i,j[ < σ_iσ_j >_| R - <σ_i >_| R< σ_j >_| R] < β∑_i,j[ < σ_iσ_j > - <σ_i > < σ_j > ],
where we denote ∑_i,j = ∑_i = 1^N∑_j=1^N. The definition of the magnetic susceptibility χ at the equilibrium state using the spin correlation function is given by χ = β∑_i,j[ <σ_iσ_j> - <σ_i> <σ_j>], where σ_i denotes the i-th spin variable, similar to the current context. Thus, under the posterior Eq. (<ref>), which is the Boltzmann distribution of sequences conditioned on the structure R, so to speak the “hydrophobic susceptibility" can be defined as χ_ R := β∑_i,j[ < σ_iσ_j >_| R - <σ_i >_| R< σ_j >_| R]. For the joint distribution, which is the Boltzmann distribution over both spaces of R and σ, the hydrophobic susceptibility is denoted as χ := β∑_i,j[ < σ_iσ_j > - <σ_i > < σ_j > ]. Hence, Eq. (<ref>) becomes the inequality between those two hydrophobic susceptibilities:
χ_ R < χ.
We here consider the evolutionary meaning of χ_ R. It represents the susceptibility of the hydrophobicity of the structure R to perturbations in μ, within the posterior p(σ | R, μ). Given that changes in the hydrophobic/hydrophilic composition (hydrophobicity) of amino acid residues can lead to alterations in protein structures <cit.>, χ_ R reflects the structural plasticity of protein R to environmental changes via macroscopic shifts in the gene space that facilitates the formation of R. Therefore, we propose to call χ_ R the evolutionary structural plasticity of a protein structure R. The term evolutionary structural plasticity is used to distinguish it from phenotypic plasticity, which refers to an individual's capacity to adapt to environmental changes.
This quantity, χ_ R, represents the plasticity of a genotype associated with a specific structure R to change for subsequent generations. This concept can be understood by following the simple dynamics of Darwinian evolution. Consider a population of organisms where proteins with the structure R exist in a particular generation. In the subsequent generations, assume that amino acid sequences with slight differences in hydrophobic/hydrophilic composition, which incidentally fold into the same structure R, existed either by chance or due to mutations. Subsequently, perturbations in the environmental condition μ occur, leading to the evolution of amino acid sequences that adapt to this new environment and consequently inducing changes in the original structure R. If χ_ R is high, such a genotypic change is more likely; conversely, a low χ_ R suggests that changes are less likely. Therefore, χ_ R does not merely reflect the susceptibility of the structure R to change within a single generation but rather its plasticity for change in an evolutionary context.
While there exist proteins whose structures remain unchanged despite variations in hydrophobicity, such proteins are a minority among all proteins. Moreover, it can be assumed that even these exceptional proteins would alter their structures if there were significant differences in hydrophobicity.
χ represents the susceptibility of sequences to change under the joint distribution p( R, σ | μ); thus, χ is akin to an average of χ_ R over all structural patterns. Therefore, Eq. (<ref>) asserts that for the free energy F( R, μ) of a given structure R to be stable against perturbations in μ, the χ_ R of that structure must be lower than the average χ across all structures.
Finally, for structures that satisfy Eq. (<ref>), we define the following quantity as the steepness of the free energy F( R,μ) around its minimum, namely the second derivative with respect to μ, ∂^2/∂μ^2 F( R,μ) |_μ=μ_EB = χ - χ_ R:
κ_ R = χ - χ_ R.
This quantity κ_ R represents the stability against perturbations in μ around the minimum state of the free energy F( R, μ), independent of temperature.
The evolutionary significance of κ_ R is not immediately apparent. However, since χ_ R represents the evolutionary plasticity in response to environmental changes, κ_ R, as a decreasing function of χ_ R, indicates the robustness of R after several generations under environmental perturbations.
It is important to note that even if hydrophobicity remains constant, different amino acid sequences can lead to different structures. Therefore, κ_ R precisely measures the stability against structural changes due to differences in hydrophobicity, which implies a tolerance for macroscopic structural changes while permitting minor structural variations. The “minor" changes in protein structure that are considered here might include slight alterations due to microscopic differences in the contact network, for example.
For a given structure R, the computation of κ_ R is performed using Belief Propagation (BP) (theoretical details are in Appendix <ref>). BP is an algorithm that efficiently computes the marginal probability of an element of a graph with complex interactions from the entire probability distribution of the graph. The BP algorithm is derived using an extended method of mean-field approximation called the cavity method. It has been shown that the solutions produced by BP are equivalent to the Bethe approximation<cit.>, and if the graph is a tree, the solution is exact. BP also provides good approximate solutions when the graph is close to a tree. Since the contact network of a 2D lattice protein is often a tree graph, BP is suitable for 2D lattice protein models. Indeed, our previous research has shown that the design accuracy of 2D lattice proteins is nearly the same when using BP and Markov Chain Monte Carlo (MCMC) methods<cit.>.
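For very small systems the quantities entering κ_ R can also be obtained by brute-force enumeration of sequences, which provides a useful check of the BP results. The following toy sketch uses three made-up contact maps of an 8-residue chain in place of the full set of maximally compact 36-residue structures, so the "joint" averages here run only over this toy structure set; all names and parameter values are illustrative.

import numpy as np
from itertools import product

beta, mu, N = 1.0, 0.5, 8

# each structure is specified by its set of non-consecutive nearest-neighbour contacts
structures = [
    {(0, 3), (1, 4), (2, 5)},      # toy structure R_a
    {(0, 5), (1, 6), (2, 7)},      # toy structure R_b
    {(0, 7), (1, 4), (3, 6)},      # toy structure R_c
]

def energy(contacts, sigma):
    hh = sum(sigma[i] * sigma[j] for i, j in contacts)
    return -hh - mu * sum(1 - s for s in sigma)

def sequence_stats(contacts):
    # returns Y(R), <n_P>_|R and <n_P^2>_|R, with n_P the number of polar residues
    Y = m1 = m2 = 0.0
    for sigma in product((0, 1), repeat=N):
        w = np.exp(-beta * energy(contacts, sigma))
        nP = N - sum(sigma)
        Y += w; m1 += w * nP; m2 += w * nP ** 2
    return Y, m1 / Y, m2 / Y

stats = [sequence_stats(c) for c in structures]
Ys = np.array([st[0] for st in stats])
p_R = Ys / Ys.sum()                                    # marginal likelihood p(R | mu) over the toy set
chi_R = [beta * (st[2] - st[1] ** 2) for st in stats]  # posterior hydrophobic susceptibility
m1_joint = np.dot(p_R, [st[1] for st in stats])
m2_joint = np.dot(p_R, [st[2] for st in stats])
chi = beta * (m2_joint - m1_joint ** 2)                # joint susceptibility over the toy set
for k, cR in enumerate(chi_R):
    print(f"structure {k}: chi_R = {cR:.3f}, kappa_R = {chi - cR:.3f}")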
§.§ Definition of the secondary structure of the 2D lattice proteins
Here we define the secondary structures in 2D lattice proteins, namely α-helices and β-sheets. An α-helix is defined as a structure where amino acid residues form a U-shaped structure connected in alternating orientations and a β-sheet is defined as a structure where amino acid residues are aligned in parallel or anti-parallel configurations (Fig. <ref>). The detection of these secondary structures was performed from contact maps, using the identical method proposed in the previous study <cit.>.
Both α-helices and β-sheets must have at least six residues. This requirement is due to the overrepresentation of single U-shaped structures (four residues) and paired residues in 2D lattice proteins, which would otherwise not qualify as secondary structures. However, a specific case of a 4-residue β-sheet within a 6-residue β-turn (Fig. <ref>c) is included. We count α-helices and β-sheets according to these rules, treating all other regions as random coils.
In lattice proteins, α-helices and β-sheets may share one or two amino acid residues. This study prioritizes β-sheets when two residues are shared and α-helices when one residue is shared. If two residues are shared, the α-helix is resized by removing the shared residues, and if the resized helix has four residues, it is classified as a random coil. If one residue is shared, the β-sheet is resized by removing the shared residue and its paired residue, and if the resized sheet has four residues (unless it is part of a 6-residue β-turn), it is classified as a random coil. Prioritizing β-sheets for two shared residues and α-helices for one shared residue helps balance the two structures.
We do not use 3D lattice proteins because the α-helix in three dimensions requires six residues per turn, unlike the real protein case of 3.6 residues per turn<cit.>. In two dimensions, as shown in Fig. <ref>(a), four residues per turn are closer to realistic proteins. This distinction is crucial as we define the proportion of secondary structures in terms of the number of residues forming these structures within a given protein.
In this study, we use maximally compact structures of N = 6 × 6 residues. The total number of structural patterns is 28,732, excluding structures symmetric under 90^∘ rotation, mirror images along horizontal and vertical axes, diagonal reflections, and head-tail symmetries of the self-avoiding walk.
§ RESULTS
§.§ Changes in the Number of Lattice Protein Structures According to Evolutionary Stability
First, we examine how the number of lattice protein structures decreases as the stability κ_ R increases, essentially observing how the dimension of the phenotypic space reduces with changes in κ_ R. Specifically, we increment κ_ R from 0 in small steps and record the number of structures that have a κ_ R larger than the value at each step, denoting this number as N^ str(κ_ R). We show the results for the quantity κ_ R divided by β. This division cancels the factor β in the coefficients of χ_ R and χ, which is common to all structures. We then calculate its proportion out of the total number of structures, which is 28,732. The results, showing the reduction in the number of structures with increasing κ_ R/β within the range 0 ≤κ_ R/β≤ 10, are displayed in Fig.<ref>. In Fig.<ref>, N^tot represents the total number of maximally compact structures for an amino acid residue number N = 6 × 6, equaling 28,732. We set β = 10.
From Fig. <ref>, it is evident that N^ str(κ_ R/β) decreases in a stepwise manner as κ_ R/β increases. It is also observed that there are two points where the rate of decrease in N^ str(κ_ R/β) is particularly significant. Furthermore, approximately 80% of the structures are found within the range 0 ≤κ_ R/β≤ 0.5, indicating that most structures have a gentle slope around the minimum of the free energy F( R, μ). The proportion of structures with κ_ R/β < 0, which are outside the range of Fig. <ref>, is approximately 4% and low. Additionally, there are no structures with κ_ R/β≥ 9.5, as the maximum value of κ_ R/β is 9.3971.
§.§ Changes in the Proportion of Secondary Structures According to Environmental Robustness
Next, we illustrate the changes in the proportion of secondary structures within groups of lattice protein structures as the stability κ_ R increases incrementally. We again show the results for the quantity κ_ R divided by β. For each increment in κ_ R/β, we plot the average proportions of α-helices, β-sheets, and random coils within the set of structures that have a κ_ R/β greater than the current κ_ R/β value. Fig.<ref> shows the resulting distributions of secondary structures. We again set the inverse temperature at β = 10.
Looking at Fig.<ref>, as κ_ R/β increases, the proportion of α-helices increases while the proportion of β-sheets decreases. Random coils remain neutral to κ_ R/β changes. This result indicates that structures with a higher proportion of α-helices tend to have higher κ_ R/β values and those with a higher proportion of β-sheets tend to have lower κ_ R/β values. Therefore, it can be concluded that proteins with a higher proportion of α-helices are more robust over generations in response to perturbations in environmental conditions.
§ DISCUSSION AND CONCLUSION
First, we note some overall considerations regarding our statistical mechanics theory framed within Bayesian learning for evolution. Our proposed evolutionary theory does not optimize phenotypes to given environmental conditions. That is because the stability analysis around the minimum of the free energy F( R, μ) pertains to the stability around the minimum for environmental conditions μ given a structure (phenotype) R and not the reverse. According to the neutral theory of molecular evolution<cit.>, the evolution of organisms (molecular evolution) is not necessarily a product of optimization under given environmental conditions. Our theory does not contradict these important evolutionary concepts.
The evolutionary significance of κ_ R is, directly, the stability to change in the amino acid sequence σ (genotype) that forms the structure (phenotype) R over subsequent generations. Thus, it cannot necessarily be said to explain the robustness to change of a structure over many generations. Furthermore, κ_ R pertains explicitly to the stability against perturbations in environmental conditions μ, and does not directly inform about the stability against significant changes in μ. Therefore, while κ_ R may be better suited to explaining incremental microevolution, it may not be as applicable to macroevolution. However, it is not uncommon for macroevolution in organisms to be understood as an accumulation of microevolutions; our theory and analysis of κ_ R could still play a significant role in understanding large evolutionary changes.
κ_ R can also be interpreted as part of a broader concept of robustness that includes environmental robustness, especially considering the functionality of gene regulatory networks. In this sense, proposing κ_ R in an evolutionary context is meaningful. However, it should be noted, as emphasized in Chapter II, that all the measures proposed in this study, such as χ_ R and κ_ R, averages over the amino acid sequences σ. Thus, the effects of mutational robustness across successive generations on the structure R remain unclear. This study exclusively analyzed the effects of environmental fluctuations on evolution without integrating comprehensive evolutionary theories that include mutations, natural selection, or genetic drift, which are changes in genes unrelated to natural selection. To include theories that account for mutations and genetic drift, the former would require discussing the stability of quantities that retain amino acid sequence pattern dependencies without integrating over amino acid sequences for proteins. For the latter, it would be necessary to consider quantities or models where hydrophobicity fluctuates solely due to random effects while being constant to changes in the environmental condition μ. Constructing such theories remains a task for future research.
The result of Fig.<ref> indicates that increasing κ_ R leads to a very rapid reduction in the phenotypic space. It is unclear what proportion of the total structural space, including evolutionary non-viable protein structure patterns, is occupied by actual protein structures, so whether the results from Fig.<ref> accurately explain the dimensional reduction of phenotypes from the perspective of natural selection remains uncertain. However, the fact that evolved phenotypes constitute only a portion of the entire possible phenotypic space can indeed be explained by our proposed metric of robustness to change in phenotypes over subsequent generations in response to perturbations in environmental conditions μ.
Similarly, Fig.<ref> lacks direct biological evidence. However, a previous study showed that α-helices exhibit higher mutational robustness than β-sheets <cit.>. Of course, the relationship between mutational robustness and our results is unclear. However, the fact that α-helices and β-sheets behave differently regarding structural mutational robustness, which is evolutionarily significant, is likely essential. Given the observed variation in the proportions of α-helices and β-sheets among proteins, it is plausible to suggest that these differences may be associated with differences in protein function. Considering that protein function closely relates to evolvability, the differing dependencies on the proportions of α-helices and β-sheets suggest that our proposed stability measure, κ_ R, is likely to be evolutionarily meaningful.
Here, we give the conclusion of our study. We proposed an evolutionary generation model for protein structures by statistical mechanics based on the Bayesian learning framework. We considered the chemical potential of the water surrounding proteins as an environmental condition. We discussed the stability of the free energy as a function of protein structure and environmental conditions, as defined in Eq. (<ref>). This stability refers to the robustness to changes in hydrophobicity (the proportion of hydrophobic amino acids, analogous to magnetization in magnetic materials) determined by the amino acid sequences that design (fold into) a specific protein structure with high probability. Since changes in hydrophobicity can lead to structural changes, this stability can be considered as the stability of a given protein structure against environmental perturbations over subsequent generations. Consequently, in a 2D square lattice protein model composed of 36 residues, we found that structures with a certain level of stability in their free energy are very rare in the entire structural space, with a higher proportion of α-helices and a lower proportion of β-sheets.
§ ACKNOWLEDGMENTS
This work was supported by JSPS KAKENHI Grant Number 23K19996 (T. T.). The authors are grateful to Macoto Kikuchi, Ayaka Sakata, and Takashi Takahashi for illuminating discussions and helpful comments. This work was also supported by JSPS KAKENHI Grant Number 22H00406 (G. C.), and JSPS KAKENHI Grant Number 22H05117 and JST CREST JPMJCR1912 (Y. K.)
§ THE CALCULATIONS OF HYDROPHOBICITY AND SUSCEPTIBILITY USING BELIEF PROPAGATION
§.§ The calculations of ⟨σ_i⟩_R and ⟨σ_i⟩
In order to calculate the hydrophobicity of a single structure, one needs to obtain the posterior average of a single residue <σ_i>_ R given by
<σ_i>_ R = ∑_σσ_ip(σ| R, μ).
If the marginal of the posterior p(σ| R, μ) with respect to each residue, obtained by summing over the set of residues other than σ_i, denoted σ\ i, is expressed as
p_i(σ_i| R, μ) = ∑_σ\ i p(σ| R, μ),
then one can rewrite Eq. (<ref>) in a very simple form as follows:
<σ_i>_ R = ∑_σ_i = 0,1σ_ip_i(σ_i| R, μ)
= p_i(σ_i = 1| R, μ).
Belief propagation (BP) can obtain the marginal distribution p_i(σ_i| R, μ) by using following update rules,
ν̃_a → i^(t) (σ_i) = 1/Z_a → i∑_σ_j(i) e^βσ_iσ_j(i)ν_j(i) → a^(t) (σ_j(i)),
ν_i → a^(t + 1) (σ_i) = 1/Z_i → a e^βμ(1 - σ_i)∏_b ∈∂_i∖ aν̃_b → i^(t) (σ_i).
In Eqs. (<ref>) and (<ref>), the beliefs or messages ν̃_a → i^(t) (σ_i) and ν_i → a^(t + 1) (σ_i) are the probability from the a-th contact to the i-th residue and the probability from the i-th residue to the a-th contact, respectively. The subscripts a, b, ⋯ are indices on contacts, and the upper right subscript is the number of steps in the BP algorithm. The symbol ∂_i denotes the index set of contacts related to residue σ_i. The constants Z_a → i and Z_i → a are the normalizing constants of each distribution function. The residue index j(i) denotes the index of the residue that contacts the i-th residue. In the lattice HP model, all residue-residue interactions are two-body. Thus, this index j(i) is unique to i.
The derivation of the BP update rules (<ref>) and (<ref>) is somewhat technical, so it is not presented here. Please refer to the appendix of our previous work<cit.> for the derivation of the above BP update rules.
If one properly defines ν_i → a^(t = 0)(σ_i) as the initial condition (in this study, we use 0.5.) and computes Eqs. (<ref>) and (<ref>) at each step for all combinations (i, a), after sufficient iterations t_ max, the following belief:
ν_i(σ_i) = 1/Z_i∏_a ∈∂_iν̃_a → i^(t_ max) (σ_i),
converges to the marginal distribution p_i(σ_i | R,μ). In Eq. (<ref>), Z_i is the normalization constant.
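To make the update rules concrete, the following Python sketch runs the message passing on a generic contact graph and returns the per-residue marginals p_i(σ_i = 1 | R, μ). It is a minimal illustration, not the authors' implementation: the contact-list representation, the convergence tolerance, and the placement of the local chemical-potential factor in the final belief (here included as in standard BP) are our assumptions.

```python
import numpy as np

def bp_marginals(contacts, n_res, beta, mu, t_max=200, tol=1e-8):
    """Belief propagation on a lattice-HP contact graph (hypothetical helper).

    contacts : list of residue index pairs (i, j) that are in contact.
    Returns p[r] ~= p_r(sigma_r = 1 | R, mu), i.e. <sigma_r>_R."""
    neighbours = [[] for _ in range(n_res)]
    nu, nut = {}, {}
    for a, (i, j) in enumerate(contacts):
        neighbours[i].append(a)
        neighbours[j].append(a)
        for r in (i, j):
            nu[(r, a)] = np.full(2, 0.5)   # residue -> contact message, initialised to 0.5
            nut[(a, r)] = np.full(2, 0.5)  # contact -> residue message
    local = np.array([np.exp(beta * mu * (1 - s)) for s in (0, 1)])  # e^{beta*mu*(1-sigma)}

    for _ in range(t_max):
        delta = 0.0
        for a, (i, j) in enumerate(contacts):
            # contact -> residue update: sum over the partner residue's state
            for src, dst in ((j, i), (i, j)):
                m = np.array([sum(np.exp(beta * s_d * s_s) * nu[(src, a)][s_s]
                                  for s_s in (0, 1)) for s_d in (0, 1)])
                m /= m.sum()
                delta = max(delta, float(np.abs(m - nut[(a, dst)]).max()))
                nut[(a, dst)] = m
            # residue -> contact update: local field times messages from the other contacts
            for r in (i, j):
                prod = local.copy()
                for b in neighbours[r]:
                    if b != a:
                        prod *= nut[(b, r)]
                nu[(r, a)] = prod / prod.sum()
        if delta < tol:
            break

    p = np.zeros(n_res)
    for r in range(n_res):
        belief = local.copy()            # local factor included here, as in standard BP
        for a in neighbours[r]:
            belief *= nut[(a, r)]
        p[r] = belief[1] / belief.sum()
    return p
```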
In order to obtain the other hydrophobicity measure h, one needs to calculate the joint average of σ_i, which is expressed as follows.
<σ_i> = ∑_ R∑_σσ_ip( R, σ | μ)
= ∑_ R∑_σσ_ip( R | μ)p(σ | R, μ)
= ∑_ R∑_σσ_ip(σ | R, μ)p( R | μ)
= ∑_ R<σ_i>_ R p( R | μ)
= ∑_ R<σ_i>_ R Y( R;β,μ)/∑_ RY( R;β,μ).
In going from Eq. (<ref>) to Eq. (<ref>), we used Eq. (<ref>). Therefore, one has to calculate the sequence partition function Y( R;β,μ) = ∑_σe^-β H( R, σ; μ) and the posterior average of each residue <σ_i>_ R for all lattice structure patterns. We have all patterns of self-avoiding walks on the 2D N = 6×6 square lattice. Thus, in this study, we can obtain the exact value of < σ_i>.
For larger lattice proteins, one has to employ efficient multicanonical Monte Carlo methods suitable for exploring lattice protein structures<cit.>. For realistic protein structures, such a structural search is extremely difficult even with the use of databases<cit.>, because the structural search space has to include random structural patterns, including structures that have not evolved.
By using BP, the sequence partition function Y( R;β,μ) = ∑_σe^-β H( R, σ; μ) is obtained from the Bethe free entropy F_B(ν̃^*) = log Y( R;β,μ), where ν̃^* is the set of contact-to-residue messages (Eq. (<ref>)) after a sufficient number of time steps. The free entropy is -β times the free energy.
The Bethe free entropy F_B(ν̃^*) is the free entropy under the Bethe-approximation. In the lattice HP model using the Hamiltonian (1), F_B(ν̃^*) is given by
F_B(ν̃^*) = ∑_a=1^Mlog Z_a + ∑_i=1^Nlog Z_i - ∑_ialog Z_ia,
Z_a := ∑_σ_iσ_j(i) e^βσ_iσ_j(i)∏_i ∈∂ aν_i → a^*(σ_i),
Z_i := ∑_σ_i e^βμ (1-σ_i)∏_b ∈∂ iν̃_b → i^*(σ_i),
Z_ia := ∑_σ_iν̃_a → i^*(σ_i) ν_i → a^*(σ_i),
where ia indexes the set of all residue-contact pairs, and the symbol ∂ a denotes the index set of residues involved in contact a. The symbol M denotes the number of contacts. The messages ν_i → a^*(σ_i) and ν_a → i^*(σ_i) are the converged i-th residue to a-th contact message and the converged a-th contact to i-th residue message, respectively. The derivation of Eqs. (<ref>)–(<ref>) for general cases involves a technical and lengthy explanation, which is beyond the scope of this paper. For further details, please refer to the representative texts<cit.>. Then, we obtain
Y( R;β,μ) = ( ∏_a Z_a) ( ∏_i Z_i)/∏_ia Z_ia.
In order to obtain μ_EB for each structure, we derive the minimization condition of the free energy of a structure, F( R, μ), defined by Eq. (<ref>). The minimization condition with respect to μ, (∂ / ∂μ) F( R, μ) = 0, becomes:
< ∑_i=1^Nσ_i>_ R = < ∑_i=1^Nσ_i>.
The minimizing parameter μ_EB satisfies Eq. (<ref>). Thus, one can obtain μ_EB by computing ∑_i< σ_i>_ R and ∑_i< σ_i> for each structure through the methods explained above. We used the bisection method to find the value of μ at which the left-hand and right-hand sides of Eq. (<ref>) coincide.
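For illustration, the bisection step can be sketched as follows; `posterior_total(mu)` and `joint_total(mu)` are hypothetical callables standing for ∑_i<σ_i>_ R and ∑_i<σ_i> computed as described above, and the bracketing interval is an assumption.

```python
def find_mu_eb(posterior_total, joint_total, mu_lo=-5.0, mu_hi=5.0,
               tol=1e-6, max_iter=200):
    """Bisection on g(mu) = sum_i <sigma_i>_R - sum_i <sigma_i>.

    The two callables are placeholders for the BP-based estimators; the
    initial bracket [mu_lo, mu_hi] must contain the root."""
    def g(mu):
        return posterior_total(mu) - joint_total(mu)

    g_lo = g(mu_lo)
    assert g_lo * g(mu_hi) <= 0, "root must be bracketed by [mu_lo, mu_hi]"
    for _ in range(max_iter):
        mu_mid = 0.5 * (mu_lo + mu_hi)
        g_mid = g(mu_mid)
        if abs(g_mid) < tol or (mu_hi - mu_lo) < tol:
            break
        if g_lo * g_mid <= 0:      # root lies in the lower half
            mu_hi = mu_mid
        else:                      # root lies in the upper half
            mu_lo, g_lo = mu_mid, g_mid
    return 0.5 * (mu_lo + mu_hi)
```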
§.§ The calculations of χ_R and χ
In order to obtain the susceptibility χ_ R, one has to calculate the residue correlation function < σ_iσ_j>_ R given by
<σ_iσ_j>_ R = ∑_σσ_iσ_jp(σ| R, μ).
If σ_i and σ_j are not statistically independent, i.e., when there is at least one path connecting σ_i and σ_j on the contact graph, one has to consider the joint probability
p_i,j(σ_i, σ_j | R, μ) = p_i | j(σ_i | σ_j, R, μ) p_j(σ_j | R, μ),
where the conditional probability, p_i | j(σ_i | σ_j, R, μ) is given by
p_i | j(σ_i | σ_j, R, μ) = p_i,j(σ_i, σ_j | R, μ)/p_j(σ_j | R, μ).
If the following marginalization can be carried out
p_i | j(σ_i | σ_j, R, μ) = ∑_σ\ ip(σ | σ_j , R,μ),
one can then obtain < σ_iσ_j>_ R in the following quite simple form
< σ_iσ_j>_ R = ∑_σ_i∑_σ_jσ_iσ_j p_i | j(σ_i | σ_j, R, μ) p_j(σ_j | R, μ)
 = p_i | j(σ_i = 1 | σ_j=1, R, μ) p_j(σ_j=1 | R, μ).
BP is also able to compute the conditional marginal p_i | j(σ_i | σ_j, R, μ). The procedure is as follows: one uses the conditional messages ν̃_a → i|j^(t) (σ_i | σ_j = σ) and ν_i|j → a^(t + 1) (σ_i | σ_j = σ) instead of the normal BP messages in Eqs. (<ref>) and (<ref>), where σ is the realization of σ_j. Thus, in the current case σ = 1, the normal BP messages change to the following conditional forms
ν̃_a → i|j^(t) (σ_i | σ_j = 1) = 1/Z_a → i|j∑_σ_k(i) e^βσ_iσ_k(i)ν_k(i)|j → a^(t) (σ_k(i)|σ_j = 1),
ν_i|j → a^(t + 1) (σ_i | σ_j = 1) = 1/Z_i|j → a e^βμ(1 - σ_i)∏_b ∈∂_i∖ aν̃_b |j→ i^(t) (σ_i | σ_j = 1).
In Eq. (<ref>), we use k(i) as the index of the residue in contact with σ_i to avoid confusion with the j of the current context. The symbols Z_a → i|j and Z_i|j → a are the normalization constants of the corresponding messages. As with the normal BP described earlier, the following belief:
ν_i|j(σ_i | σ_j = 1) = 1/Z_i|j∏_a ∈∂_iν̃_a → i|j^(t_ max) (σ_i | σ_j = 1),
converges to the conditional marginal p_i | j(σ_i | σ_j=1, R, μ). The symbol Z_i|j in Eq. (<ref>) denotes the normalization constant of this message.
One can calculate χ by using following formula
< σ_iσ_j> = ∑_ R<σ_iσ_j>_ R Y( R;β,μ)/∑_ RY( R;β,μ).
Eq. (<ref>) is obtained in the same manner as Eq. (<ref>).
|
http://arxiv.org/abs/2409.03646v1 | 20240905160457 | Limited but consistent gains in adversarial robustness by co-training object recognition models with human EEG | [
"Manshan Guo",
"Bhavin Choksi",
"Sari Sadiya",
"Alessandro T. Gifford",
"Martina G. Vilas",
"Radoslaw M. Cichy",
"Gemma Roig"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.HC"
] |
Adversarial Robustness Gains via EEG Co-Training
Guo et al.
Department of Computer Science, Goethe University, Frankfurt am Main, Germany
{m.guo,choksi,Saba-Sadiya,roignoguera}@em.uni-frankfurt.de The Hessian Center for Artificial Intelligence (hessian.AI), Darmstadt, Germany Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
{nikiguo93,rmcichy}@zedat.fu-berlin.de,[email protected] Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
[email protected]
Limited but consistent gains in adversarial robustness by co-training object recognition models with human EEG
Manshan Guo1,2,40000-0002-5506-6854 Bhavin Choksi 1,20000-0002-6475-4149 Sari Sadiya1,2,30009-0005-7482-3274
Alessandro T. Gifford40000-0002-8923-9477 Martina G. Vilas1,2,50000-0002-1097-8534
Radoslaw M. Cichy4jointly directed work0000-0003-4190-6071 Gemma Roig1,2,⋆0000-0002-6439-8076
======================================================================================================================================================================================================================================================================================================
§ ABSTRACT
In contrast to human vision, artificial neural networks (ANNs) remain relatively susceptible to adversarial attacks. To address this vulnerability, efforts have been made to transfer inductive bias from human brains to ANNs, often by training the ANN representations to match their biological counterparts. Previous works relied on brain data acquired in rodents or primates using
invasive techniques, from specific regions of the brain, under non-natural conditions (anesthetized animals), and with stimulus datasets lacking diversity and naturalness. In this work, we explored whether aligning model representations to human EEG responses to a rich set of real-world images increases robustness to ANNs. Specifically, we trained ResNet50-backbone models on a dual task of classification and EEG prediction; and evaluated their EEG prediction accuracy and robustness to adversarial attacks. We observed significant correlation between the networks' EEG prediction accuracy, often highest around 100 ms post stimulus onset, and their gains in adversarial robustness. Although effect size was limited, effects were consistent across different random initializations and robust for architectural variants. We further teased apart the data from individual EEG channels and observed strongest contribution from electrodes in the parieto-occipital regions. The demonstrated utility of human EEG for such tasks opens up avenues for future efforts that scale to larger datasets under diverse stimuli conditions with the promise of stronger effects.
§ INTRODUCTION
Despite the remarkable performance of artificial neural networks (ANNs) in object recognition <cit.>, ANNs are sensitive to small so-called adversarial perturbations in the inputs <cit.>. Since the initial discovery of this vulnerability, the field has moved rapidly to devise various adversarial defenses against it<cit.>.
In contrast, human perception is robust to adversarial attacks that are detrimental for ANNs<cit.>. This inspires the idea that adding more bio-inspired elements into ANNs might help alleviate their sensitivity to adversarial attacks and make them more robust. These biological inductive biases are often architecture-based, optimization-based, or both <cit.>. Studies have also directly constrained the ANN representations with their biological counterparts, often using neural data from rodents or non-human primates under non-ecological conditions as regularizers<cit.>. These attempts have led to promising, but modest gains in the robustness of the resulting ANNs <cit.>. Yet, the cost and practical challenges associated with acquiring such data also limit the diversity of stimuli used, often restricting the images to small grayscale datasets like CIFAR, even for approaches relying on fMRI brain data<cit.>.
Thus, in this work, we used a large-scale EEG dataset collected on diverse real-world images with the aim to improve ANNs robustness to adversarial attacks. Specifically, we experimented with the EEG dataset collected by <cit.> on participants viewing images from the THINGS dataset<cit.>. We report improvements against adversarial attacks which, though modest, were observed consistently across different random initializations of various architectural variants. These robustness gains were positively correlated with the ability of the models to predict EEG at early time points. Interestingly, as already observed in previous work, we also report similar robustness gains when using shuffled versions of the EEG data. While our results do not give state-of-the-art robustness, they provide important pointers guiding future research. We further analyze the EEG dataset across individual channels to investigate any channel-specific effects. We observe that mid-level channels (PO7, PO3, POz, PO4, PO8), though not as well-predicted by the ANNs as the early channels (Oz, O1, O2), are better (positively) correlated with the gains in robustness.
To further facilitate investigations into these methods, we publicly provide the code to the broader scientific community[Code available at: <https://github.com/cvai-roig-lab/eeg_cotraining_robustness>].
§ RELATED WORK
Various efforts have aimed to imbue deep neural networks (ANNs) with human-like cognitive abilities by training them using brain data. Khosla <cit.> showcased that ANNs optimized to predict fMRI activity in the fusiform face area (FFA) and extrastriate body area (EBA) could detect `faces' and `bodies', despite lacking direct exposure to such images during training. Fu <cit.> illustrated that aligning CNN representations with human fMRI data can improve model's performance in video emotion recognition tasks.
Among these, some studies have focused specifically on improving the adversarial robustness of object recognition models using brain data. Li <cit.> recorded mice's neural responses in primary visual cortex (V1) with photon-scans and then, along with classification, used them to penalize the representations of ResNet18<cit.>, resulting in reduced vulnerability to white-box adversarial perturbations and Gaussian noise.
Federer <cit.> used publicly available recordings from V1 in monkeys using micro-electrode arrays<cit.> and similarly penalized representations of VGG with RSA, observing a brain-like response to white-box adversarial noise and label corruption. In a similar fashion, Safarani <cit.> first jointly trained VGG19 for classification and predicting neural data collected in monkeys' V1, enhancing the models' robustness against 14 image distortions. Besides, they showed that their co-trained models were sensitive to salient regions of objects, reminiscent of V1's role in detecting object borders and bottom-up map.
Pirlot <cit.> argued that the concurrent methods used for penalizing the representations could be limited, and instead proposed a Deep Canonical Correlation Analysis (DCCA)-based regularization, observing a reduction of vulnerability against adversarial noise. Dapello <cit.> employed Centered Kernel Alignments (CKA) to penalize representations, leveraging monkeys' data in Inferior temporal cortex (IT) to improve robustness against white-box attacks and align with human behavior error patterns. Based on the findings from Stringer et al<cit.> that, regardless of the input, the eigenspectrum of covariance matrix of the neural code (in mice's visual cortex) followed a power law, Nassar <cit.> inquired whether a similar constraint on the ANN representations might help in enhancing their robustness. They found that, when implemented on smaller convolutional neural networks, their proposed spectral regularization improved robustness against L_∞-FGSM and Projected Gradient Descent attacks<cit.>.
Unlike ours, most studies have restricted themselves to using brain data collected from non-human animals using costly invasive techniques. We note that a contemporary study took a similar rationale to ours and introduced a multi-layer alignment framework to align ANN representations with human EEG<cit.>. Using the same EEG dataset, they trained an additional multi-layer module to predict image categories and human EEG from CORnet-S features (pretrained on ImageNet). While they tested their networks on FGSM attacks, which are known to be quite weak and prone to gradient-masking<cit.>, their main focus was to demonstrate the utility of the learned representations for neuroscience—to better explain fMRI and behavioral data.
§ METHODS
Dataset We used a publicly available dataset containing images and corresponding EEG recordings from 10 subjects viewing images from the THINGS database <cit.>. The training set included 16,540 natural images across 1,654 object categories, with each category containing 10 images presented in 4 separate runs. We split the training set 9:1 into training and validation, allocating 9 images per category for training (14,886 images total) and 1 image per category for validation (1,654 images total). The raw EEG signals were epoched from 200ms before to 800ms after stimulus onset and down-sampled to 100Hz. Seventeen channels were selected, namely `Pz', `P3', `P7', `O1', `Oz', `O2', `P4', `P8', `P1', `P5', `PO7', `PO3', `POz', `PO4', `PO8', `P6', and `P2', which record signals from the occipital and parietal cortex where the visual signals are the strongest. Further details on EEG-image pair preprocessing are in <Ref>.
Architectures and training
The DTL networks consisted of an image classification branch (a ResNet50 backbone) and an EEG prediction branch. In addition to certain layers shared with the classification branch, the EEG prediction branch comprised an independent component of either dense linear layers (CNN), recurrent layers (RNN), Transformer encoders, or attention layers appended to the ResNet backbone. Overall, 24 models (see <Ref> for additional details) were trained for two objectives—classification and EEG prediction. For the classification branch, 1654 object categories were used. For EEG prediction, an MSE loss was applied to predict the EEG data consisting of 100 timepoints (output of size 1 image × 17 channels × 100 tps). To balance the losses between these tasks, we applied the following total loss function from <cit.>:
L(W, δ_1, δ_2) = 1/(2 δ_1^2) L_1(W) + 1/(2 δ_2^2) L_2(W) + log δ_1 + log δ_2.
Here, L_1 and L_2 represent the EEG prediction and image classification losses, respectively, with 1/(2 δ_1^2) and 1/(2 δ_2^2) as loss coefficients. These parameters, along with the model weights W, were updated using the Adam optimizer with a learning rate of 5e-6 and a weight decay of 0.0. The image classification branch was pre-trained on ImageNet, and the EEG prediction branch was initialized with three different training seeds (0, 17, and 337). Each model was trained for 200 epochs with a batch size of 64. As control experiments, we also trained the networks on three simulated EEG datasets obtained by shuffling the original data, or by randomly drawing from a geometric or normal distribution. The models trained with such data are labeled as DTL-shuffled, DTL-random, and DTL-random-normal and were contrasted with those trained on the original EEG data (DTL-real). Additional details regarding all the architectures and model training can be found in <Ref>.
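For concreteness, a minimal PyTorch sketch of this uncertainty-weighted dual-task objective is given below. Optimising log δ rather than δ is a numerical-stability choice on our part, and the variable names and usage snippet are illustrative rather than taken from the released code.

```python
import torch
import torch.nn as nn

class DualTaskLoss(nn.Module):
    """Balances the EEG-prediction (MSE) and classification (CE) losses with
    learned uncertainty weights, in the spirit of the total loss above.
    Parametrising log(delta) instead of delta is an assumed detail."""
    def __init__(self):
        super().__init__()
        self.log_delta = nn.Parameter(torch.zeros(2))  # [log d1, log d2]
        self.mse = nn.MSELoss()
        self.ce = nn.CrossEntropyLoss()

    def forward(self, eeg_pred, eeg_true, logits, labels):
        l1 = self.mse(eeg_pred, eeg_true)      # EEG prediction loss (17 x 100 targets)
        l2 = self.ce(logits, labels)           # 1654-way classification loss
        w = torch.exp(-2.0 * self.log_delta)   # 1 / delta^2
        return 0.5 * (w[0] * l1 + w[1] * l2) + self.log_delta.sum()

# illustrative usage (hyperparameters as stated in the text):
# criterion = DualTaskLoss()
# optimizer = torch.optim.Adam(list(model.parameters()) + list(criterion.parameters()),
#                              lr=5e-6, weight_decay=0.0)
# loss = criterion(eeg_pred, eeg_true, logits, labels); loss.backward()
```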
EEG prediction evaluation
Following <cit.>, we evaluated the EEG prediction results by measuring Pearson correlation between the predicted and the actual EEG. A PCC matrix (of shape 17 × 100) was constructed where each element represented the linear correlation between the predicted and actual EEG. We then averaged across the channel dimension to get a global value. Results were averaged across the 10 subjects and models initialized with three random seeds. Further details are available in <Ref>.
Adversarial robustness evaluation and Robustness gain Adversarial perturbations are image transformations capable of fooling ANNs while remaining imperceptible to humans. To assess the adversarial robustness of our models, we employed Foolbox <cit.> to create adversarial versions of the 1654 original validation images under different attack strengths ϵ. In particular, 1654 adversarial examples for each ϵ value were fed into each DTL model to obtain the top-1 classification accuracy, denoted as acc_DTL(ϵ). Similarly, we obtained acc_baseline(ϵ) for the baseline model–the ResNet50 model trained for classification. The adversarial robustness gain was defined as Gain_DTL(ϵ) = acc_DTL(ϵ)- acc_baseline(ϵ). We applied L_2-
and L_∞-norm bounded untargeted projected gradient descent (PGD) <cit.>, and L_2 Carlini & Wagner (C&W) attack <cit.>, as described in <Ref>.
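A sketch of the evaluation loop using Foolbox is shown below; the pixel bounds and the absence of input preprocessing are assumptions about the data pipeline, and the attack hyperparameters follow the appendix.

```python
import foolbox as fb
import torch

def robust_accuracy(model, images, labels, epsilons, attack_name="LinfPGD"):
    """Top-1 accuracy on adversarial versions of the validation images.
    The (0, 1) bounds are a placeholder for the actual normalisation."""
    model.eval()
    fmodel = fb.PyTorchModel(model, bounds=(0.0, 1.0))
    attacks = {
        "LinfPGD": fb.attacks.LinfPGD(steps=40, rel_stepsize=0.01 / 0.3),
        "L2PGD": fb.attacks.L2PGD(steps=50, rel_stepsize=0.025),
        "L2CW": fb.attacks.L2CarliniWagnerAttack(),
    }
    attack = attacks[attack_name]
    _, _, success = attack(fmodel, images, labels, epsilons=epsilons)
    # success[k, n] is True when the attack fooled the model on image n at epsilons[k]
    return 1.0 - success.float().mean(dim=-1)   # robust accuracy per epsilon

# gain over the classification-only ResNet50 baseline:
# gain = robust_accuracy(dtl_model, x, y, eps) - robust_accuracy(baseline, x, y, eps)
```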
Correlation between adversarial robustness gain and EEG prediction
We co-trained 24 architectures on EEG from 10 subjects using 3 training seeds, resulting in a total of 24 × 10 × 3 = 720 samples for correlation analysis. The mean adversarial robustness gain Avg_Gain_DTL was computed by averaging Gain_DTL^n(ϵ) across subjects and training seeds (0 ≤ n ≤ 720) as well as attack strength ϵ. To identify significant time points (tps) contributing to robustness gain, we averaged PCC (ci,tps) across all channels (ci, channel index) and time points (tps) within an optimal sliding window size, resulting in Avg_PCC_tps, which denotes mean prediction accuracy of all channels within the window. The sliding window size was optimized to encompass as many significant time points as possible, further details of which are provided in <Ref>.
After identifying these significant time points, we investigated key electrodes by similarly measuring correlation between robustness gain and mean prediction accuracy per channel across critical time points.
§ RESULTS
Adversarial robustness gains were positively correlated with the models' EEG prediction
We first investigated if there is a relationship between the model's EEG prediction ability and its gains in adversarial robustness. For this, we measured the correlation between the gains in adversarial robustness (Gain_DTL(ϵ)) of the 720 models considered here and their EEG prediction (AVG_PCC_tps); see Figure <ref>A. We observed significant positive correlation values (between 0.53 and 0.61, p-values < 1e-6), implying that the robustness of the models scaled with the networks' ability to capture the statistics of the neural data. We note that similar results were obtained by <cit.>, albeit for neural data from individual neurons in the macaque brain. Concerning the role of architecture, we observed that models that combined the features from both the 3rd and 4th blocks obtained good results for both adversarial robustness and EEG prediction.
We further illustrate the robustness gains of our most robust model (in Figure <ref>B and D). This network (shown with a red arrow in Figure <ref>B) concatenated the features from both the 3rd and 4th block. We observed highest correlations of 0.32 around 100 ms post-stimulus onset (Figure <ref>C), that is at the time of highest discriminability in the EEG data<cit.>. In Figure <ref>D, we show the robustness of this model against all the attacks used in our analysis. We also include the robustness gains obtained after training the model with the control (shuffled and random) versions of EEG . While these also showed some gains in robustness (as also reported in previous works), the model trained with the real EEG showed the highest robustness gains. While the gains are modest—a clear limitation of our results and similar to those reported in earlier studies—they are nonetheless surprising given the high strengths of the attacks (epsilon budget of 1. in L2 norm). Moreover, they demonstrate the potential for the utility of human EEG for rendering robustness.
Electrodes from mid-level EEG channels contribute most strongly to robustness
To find out which EEG channels mostly contributed to robustness, we first determined the EEG channels that were best predicted after our dual-task training. We measured the correlation between the original and predicted EEG, and observed that the models best predicted the data from early occipital channels (Oz, O1, O2) with the group of parieto-occipital channels (PO7, PO3, PO4, PO8, POz) being second best (<Ref>A). These EEG channels overlay the visual cortex (see <Ref>B), consistent with the origin of the observed signals<cit.>.
But do these channels actually contribute to the robustness of the networks? To ascertain that, we measured the correlations between the robustness gains for each attack and the PCC values for each individual channel (<Ref>C). While the channels in the early visual areas seemed to be best predicted, those in the parieto-occipital region (from electrodes PO7, PO3, PO4, PO8, POz) showed the highest correlation values, indicating that it was the statistics from these channels that particularly aided in enhancing the robustness of the networks. This suggests that brain signals from the later visual processing stages in the human brain contribute more to robustness than those from earlier processing stages.
§ DISCUSSION AND CONCLUSION
In this study, we explored the effectiveness of human EEG to render robustness to ANNs. Specifically, we co-trained the ANNs to predict human EEG signals in addition to image classification, and tested their robustness to adversarial perturbations. We observed consistent robustness gains across different variants of the networks. Our investigations revealed a positive correlation between a model's robustness and its ability to predict the EEG. We further teased apart the contribution from individual EEG channels, and observed that though the channels overlaying the early visual cortex were best predicted (with Oz even reaching the upper estimates of the noise ceilings), the ones in the parieto-occipital region correlated better with the gains in adversarial robustness.
Our work validates the use of human EEG data for enhancing the robustness of ANNs which, compared to intracranial recordings that were often used previously, is cheaper and easier to collect. Given that there is an ongoing trend in NeuroAI <cit.> to collect massive datasets, our methods could not only benefit from the new datasets, but also inform future data collection process. Future works could investigate if larger EEG datasets, collected with different stimulus conditions, say from the auditory domain, can similarly help in improving the robustness of artificial neural networks.
As reported in previous works, the robustness gain was consistent, yet modest. This could be due to the exact methods that we used to regularize our networks, as also suggested by <cit.>, or could reflect inherent limitations of the approach itself. Like earlier works, we found that the control (shuffled and random) versions of the EEG, and thus the mere statistics of the signal, also helped in rendering robustness to the ANNs. Indeed, this consistent finding, now observed across intra- and extracranial neural activity, deserves future scientific inquiry in its own right. Exactly what (statistical) elements of the neural activity are the networks utilizing to improve their robustness? Can we use these over conventional initialization methods to improve the robustness of ANNs? These questions raise the need for, and guide, future research efforts.
§ ACKNOWLEDGEMENTS
This project was funded by the German Research Foundation (DFG) - DFG Research Unit FOR 5368 (GR) awarded to Gemma Roig, Deutsche Forschungsgemeinschaft (DFG; CI241/1-1, CI241/3-1, and CI241/7-1) awarded to Radoslaw Cichy, and a European Research Council (ERC) starting grant (ERC-2018-STG 803370) awarded to Radoslaw Cichy. We are grateful for access to the computing facilities of the Center for Scientific Computing at Goethe University and Freie universität Berlin. M. Guo is supported by a PhD stipend from the China Scholarship Council (CSC).
§ APPENDIX
§.§ EEG-Images pairs
EEG-Image pre-processing
The raw EEG signals were first epoched into trials ranging from 200ms before the stimulus onset (denoted as -0.2s) to 800ms after the stimulus onset (denoted as 0.8s) and later down-sampled to 100Hz. 17 channels overlying the occipital and parietal cortex, where the visual signals are strongest, were finally selected. Consequently, our EEG data matrix for model training is of shape (14,886 images × 4 trials × 17 EEG channels × 100 EEG tps (time points)) and our EEG data matrix for model validation is of shape (1654 images × 4 trials × 17 EEG channels × 100 EEG tps). As EEG signals are noisy, we averaged the EEG data across the trial dimension and normalized it across the temporal dimension with a Z-score. All the image stimuli were normalized and resized to 3 × 224 × 224 pixels.
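The preprocessing amounts to trial averaging followed by a per-channel z-score along the temporal dimension; a NumPy sketch (with an added epsilon guard as an assumption) is given below.

```python
import numpy as np

def preprocess_eeg(eeg, eps=1e-8):
    """eeg: float array of shape (n_images, 4 trials, 17 channels, 100 time points).
    Returns (n_images, 17, 100): trial-averaged and z-scored along time."""
    avg = eeg.mean(axis=1)                       # average the 4 repeated trials
    mean = avg.mean(axis=-1, keepdims=True)      # per image, per channel
    std = avg.std(axis=-1, keepdims=True)
    return (avg - mean) / (std + eps)            # z-score over the temporal axis

# training matrix:   (14886, 4, 17, 100) -> (14886, 17, 100)
# validation matrix: (1654, 4, 17, 100)  -> (1654, 17, 100)
```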
§.§ Dual task learning (DTL)
Architecture
In the CNN cluster, we concatenated or averaged the outputs from different ResNet50 blocks in the shared net and then used fully connected layers to map directly from image features to EEG signals. In the RNN cluster, shallow RNNs, in which feedback was realized with ResNet-like skip connections, or long short-term memory (LSTM) units were integrated with the shared net, as recurrence is capable of capturing temporal dynamics and patterns in time-series data. In the Transformer cluster, 6 layers of transformer encoders with 32 attention heads, a dropout value of 0.5, and an embedding dimension of 256 were combined with the shared net, as transformer architectures have been shown to be beneficial for fMRI and MEG prediction <cit.>. In the attention-layer cluster, 2 self-attention layers were considered. One is the self-attention layer of <cit.>; we used 4 attention heads and an embedding dimension of 256 after optimization experiments. The other utilized the position attention modules (PAM) and channel attention modules (CAM) used in <cit.>, for which we followed the default settings. All the 24 architectures from the 4 clusters are included in <Ref>. Some architectures from the 4 clusters are depicted in <Ref>, <Ref> and <Ref>.
Model training
<cit.> used the total loss function in Equation<Ref> to balance the loss updating during dual task learning.
L(W, δ_1, δ_2) = 1/(2 δ_1^2) L_1(W) + 1/(2 δ_2^2) L_2(W) + log δ_1 + log δ_2
where L_1 and L_2 are the EEG prediction and image classification losses, respectively, and 1/(2 δ_1^2) and 1/(2 δ_2^2) are the loss coefficients. The parameters δ_1 and δ_2, together with the model weights W, were updated using the Adam optimizer with a learning rate of 5e-6 and a weight decay of 0.0. The learning rate and weight decay values were optimized with cross-validation. The image classification branch was pre-trained on ImageNet and the independent net of the EEG prediction branch was initialized with 3 different pre-chosen training seeds (0, 17 and 337).
§.§ Evaluation of EEG prediction and adversarial robustness
EEG prediction evaluation
Given the hierarchical visual processing in the brain, EEG signals at different time points may capture activities in different regions. We used Pearson correlation analysis to evaluate EEG predictions at various time points using the 1654 images from the validation set. The Pearson correlation coefficient (PCC) at each time point was computed as follows: (1) Looping over the channel and temporal dimensions, we measured the PCC between the predicted EEG and the biological EEG at each channel index (ci) and time point. Consequently, we obtained a PCC array of size 17 × 100, denoted as PCC(ci,tps). (2) We averaged PCC(ci,tps) across the channel dimension and obtained PCC(tps), which denotes the linear relationship between predicted and biological EEG at different time points. Here, tps ranges over the 100 time points from -0.2s to 0.8s. The formula for the PCC is PCC = cov(signal_1,signal_2)/(σ_1 σ_2), where cov denotes the covariance and σ_1 and σ_2 denote the standard deviations of signal_1 and signal_2.
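A NumPy sketch of this two-step computation is given below; array shapes follow the text, and the small denominator guard is an assumption.

```python
import numpy as np

def pcc_matrix(pred, true):
    """pred, true: (n_images, 17 channels, 100 time points).
    Returns PCC(ci, tps) of shape (17, 100): correlation across the
    validation images at each channel/time point."""
    pred = pred - pred.mean(axis=0, keepdims=True)
    true = true - true.mean(axis=0, keepdims=True)
    cov = (pred * true).mean(axis=0)
    denom = pred.std(axis=0) * true.std(axis=0) + 1e-12
    return cov / denom                               # shape (17, 100)

def pcc_over_time(pred, true):
    # step (2): average over the channel dimension to get PCC(tps)
    return pcc_matrix(pred, true).mean(axis=0)       # shape (100,)
```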
Adversarial examples generated with PGD
Projected Gradient Descent, or PGD, iteratively constructs adversarial examples as follows :
x^t+1 = Proj_x+S(x^t + α sgn(∇_x L(θ,x^t,y)))
where θ denotes the model parameters, x^t is the input and y is the associated label. L(θ,x^t,y) is the training loss and ∇_x L(θ,x^t,y) is the gradient of the loss L with respect to the input x. α is the gradient step size and t is the iteration index; x^t and x^t+1 are the adversarial examples before and after the next iteration, respectively. Proj is an operator that constrains x^t+1 to the space x+S, such as an l_∞ or l_2 norm ball around x. In our setting, t was 50 and 40 for l_2-bounded and l_∞-bounded PGD, respectively. In l_∞-constrained PGD, the attack strength was ϵ∈ [1e^-5, 2e^-5, 3e^-5, 4e^-5, 5e^-5, 6e^-5, 7e^-5, 8e^-5, 1e^-4, 3e^-4, 5e^-4, 7e^-4, 8e^-4, 1e^-3, 8e^-3, 1e^-2] and the step size relative to ϵ was 0.01/0.3. In l_2-constrained PGD, the attack strength was ϵ∈ [1e^-3, 5e^-3, 7e^-3, 1e^-2, 2e^-2, 3e^-2, 5e^-2, 7e^-2, 1e^-1, 2e^-1, 3e^-1, 5e^-1, 7e^-1, 1.0] and the step size relative to ϵ was set to 0.025.
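For reference, the l_∞-bounded update above corresponds to the following PyTorch sketch of an untargeted PGD attack; the [0, 1] image bounds and the absence of a random start are assumptions.

```python
import torch
import torch.nn.functional as F

def linf_pgd(model, x, y, eps, steps=40, rel_stepsize=0.01 / 0.3):
    """Untargeted L_inf PGD: x_{t+1} = Proj(x_t + alpha * sign(grad))."""
    alpha = rel_stepsize * eps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project onto the eps-ball around x
            x_adv = x_adv.clamp(0.0, 1.0)                   # keep a valid image (assumed bounds)
    return x_adv.detach()
```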
Adversarial examples generated with Carlini & Wagner (C&W) attack
The C&W attack formulates the generation of adversarial examples as an optimization problem: finding the smallest perturbation of the input that causes misclassification by the target model. The C&W attack defines the objective function J(x^') as follows:
J(x^') = α· dist(x,x^') + β· loss(f(x^'),y)
where x is the original image and x^' is the perturbed image. In the case of the L2 C&W attack, dist(x,x^') measures the perturbation using the l_2 norm, and loss(f(x^'),y) represents the misclassification loss of the target model f on the perturbed input with respect to the target class y. α and β are weights that balance dist(x,x^') and loss(f(x^'),y). The C&W attack iteratively adjusts the perturbation to improve the chance of misclassification while keeping the perturbation imperceptible: the term dist(x,x^') is kept small while the misclassification objective is optimized. Gradient descent is used for the optimization, and the perturbed example is updated as x^' = x^' - η·∇_x^'J(x^'), where η is the step size, until convergence. The attack strength was ϵ∈ [1e^-5, 7e^-5, 1e^-4, 7e^-4, 1e^-3, 1e^-2, 1e^-1, 3e^-1, 5e^-1, 7e^-1, 9e^-1, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0] and the step size relative to ϵ was 0.01.
§.§ Mean adversarial robustness gain and mean EEG prediction accuracy
Mean adversarial robustness gain and mean EEG prediction accuracy were used to measure the relationship between adversarial robustness gain and EEG prediction accuracy. We trained 24 architectures from all 4 clusters on EEG from 10 subjects with 3 training seeds. As a result, we had a total of 24 × 10 × 3 = 720 sample models for analysis.
Mean adversarial robustness gain Avg_Gain_DTL
For each sample model n (1 ≤ n ≤ 720), we first computed Avg_Gain_DTL^n, the average of Gain_DTL^n(ϵ) across the selected attack strengths. For C&W, the Gain_DTL(ϵ) values used for the Avg_Gain_DTL^n calculation were those obtained under attack strengths ϵ∈ [5e^-1, 7e^-1, 9e^-1, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0]. For l_∞-constrained PGD, the Gain_DTL(ϵ) values used were those obtained under attack strengths ϵ∈ [8e^-5, 1e^-4, 3e^-4, 5e^-4, 7e^-4, 8e^-4, 1e^-3, 8e^-3, 1e^-2, 1e^-1]. For l_2-constrained PGD, we considered attack strengths ϵ∈ [7e^-2, 1e^-1, 2e^-1, 3e^-1, 5e^-1, 7e^-1, 1.0]. We selected high ϵ values because under high attack strength, robust models perform much better than fragile ones in classifying adversarial examples, which enabled us to better quantify the relationship between EEG prediction accuracy and robustness gain. We finally had 720 Avg_Gain_DTL^n values, and Avg_Gain_DTL was computed by averaging Avg_Gain_DTL^n across the 3 training seeds and 10 subjects.
Mean EEG prediction accuracy Avg_PCC_tps and optimal sliding window selection
Based on our initial experimentation, we posited that higher prediction accuracy of EEG signals around 100ms might result in increased gains in adversarial robustness. To rigorously investigate this association, we initially deployed three sliding windows covering the vicinity of 100ms: 0.10s to 0.12s, 0.09s to 0.14s, and 0.05s to 0.3s, respectively. Within these windows, for each model iteration (n), we computed Pearson correlation coefficient (PCC) values (PCC^n(pts)) as described in <Ref>, subsequently averaging these values (Avg_PCC^n) across all time points within the sliding window. By averaging Avg_PCC^n values across 10 subjects and 3 training seeds, we derived the Avg_PCC_tps. Assessing the correlation between Avg_Gain_DTL and different Avg_PCC_tps obtained with the three window sizes, we identified an optimal window size of 0.06s, resulting in the highest correlation value between Avg_Gain_DTL and Avg_PCC_tps around 100ms. Subsequently, we proceeded to identify the most informative EEG signals crucial for robustness gain by moving this optimal sliding window across all time points, with a step size of 0.01s, and measuring the correlation values between Avg_Gain_DTL and Avg_PCC_tps across all time points. In total, we obtained 100 correlation values, which represent the correlation between Avg_Gain_DTL and Avg_PCC_tps when the optimal sliding window arrives at different time points. Similarly, to discern the most significant channels for enhancing the robustness gain, after identifying these significant time points, we averaged PCC(ci,tps) across critical time points, yielding the averaged prediction accuracy across critical time points for each channel.
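The sliding-window analysis then reduces to correlating the per-model robustness gain with windowed averages of the per-time-point PCC; a sketch assuming these quantities are precomputed is given below (the window of 6 samples corresponds to 0.06 s at 100 Hz, an assumed indexing).

```python
import numpy as np
from scipy.stats import pearsonr

def windowed_correlation(avg_gain, pcc_tps, window=6, step=1):
    """avg_gain: (n_models,) mean robustness gain per model.
    pcc_tps:  (n_models, 100) channel-averaged PCC per time point.
    Returns the Pearson correlation between gain and windowed PCC as the
    window slides across the 100 time points."""
    n_tps = pcc_tps.shape[1]
    corrs = []
    for start in range(0, n_tps - window + 1, step):
        avg_pcc = pcc_tps[:, start:start + window].mean(axis=1)   # Avg_PCC_tps
        r, _ = pearsonr(avg_pcc, avg_gain)
        corrs.append(r)
    return np.array(corrs)
```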
§ ADDITIONAL RESULTS
§.§ Relationship between Avg_Gain_DTL and Avg_PCC_tps within different sliding windows
In <Ref> (A), the correlation values between Avg_Gain_DTL and Avg_PCC_tps within two specific sliding windows, one ranging from 0.05s to 0.3s and the other from 0.10s to 0.12s, were inferior to those in <Ref> (B), which suggested that using a sliding window size of 0.06s allowed us to capture more significant EEG signals around 100ms for adversarial robustness improvements. Using this optimal window size, <Ref> rigorously examines the correlation between EEG prediction accuracy and adversarial robustness gain.
|
http://arxiv.org/abs/2409.03603v1 | 20240905150147 | Two-field models in the presence of impurities | [
"D. Bazeia",
"M. A. Liao",
"M. A. Marques"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/2409.03061v1 | 20240904202113 | Incorporating dense metric depth into neural 3D representations for view synthesis and relighting | [
"Arkadeep Narayan Chaudhury",
"Igor Vasiljevic",
"Sergey Zakharov",
"Vitor Guizilini",
"Rares Ambrus",
"Srinivasa Narasimhan",
"Christopher G. Atkeson"
] | cs.CV | [
"cs.CV",
"cs.GR",
"cs.RO"
] |
Figure: We present an approach for the photo-realistic capture of small scenes by incorporating dense metric depth, multi-view, and multi-illumination images into neural 3D scene understanding pipelines. We use a robot mounted multi-flash stereo camera system, developed in-house, to capture the necessary supervision signals needed to optimize our representation with a few input views. The reconstruction of the LEGO plant and the face were generated with 11 and 2 stereo pairs respectively. We relight the textured meshes using <cit.>. Background design by <cit.>.
§ ABSTRACT
Synthesizing accurate geometry and photo-realistic appearance of small scenes is an active area of research with compelling use cases in gaming, virtual reality, robotic-manipulation, autonomous driving, convenient product capture, and consumer-level photography. When applying scene geometry and appearance estimation techniques to robotics, we found that the narrow cone of possible viewpoints due to the limited range of robot motion and scene clutter caused current estimation techniques to produce poor quality estimates or even fail. On the other hand, in robotic applications, dense metric depth can often be measured directly using stereo and illumination can be controlled. Depth can provide a good initial estimate of the object geometry to improve reconstruction, while multi-illumination images can facilitate relighting. In this work we demonstrate a method to incorporate dense metric depth into the training of neural 3D representations and address an artifact observed while jointly refining geometry and appearance by disambiguating between texture and geometry edges. We also discuss a multi-flash stereo camera system developed to capture the necessary data for our pipeline and show results on relighting and view synthesis with a few training views.
§ INTRODUCTION
Capturing photo realistic appearance and geometry of scenes is a fundamental problem in computer vision and graphics with a set of mature tools and solutions for content creation <cit.>, large scale scene mapping <cit.>, augmented reality and cinematography <cit.>.
Enthusiast level 3D photogrammetry, especially for small or tabletop scenes, has been supercharged by more capable smartphone cameras and new toolboxes like RealityCapture and NeRFStudio. A subset of these solutions are geared towards view synthesis where the focus is on photo-realistic view interpolation rather than recovery of accurate scene geometry. These solutions take the “shape-radiance ambiguity”<cit.> into stride by decoupling the scene transmissivity (related to geometry) from the scene appearance prediction. But without of diverse training views, several neural scene representations (e.g. <cit.>) are prone to poor shape reconstructions while estimating accurate appearance.
By only reasoning about appearance as cumulative radiance weighted with the scene's transmissivity, one can achieve convincing view interpolation results, with the quality of estimated scene geometry improving with the diversity and number of training views. However, capturing a diverse set of views, especially for small scenes, often becomes challenging due to the scenes' arrangement.
In the absence of dense metric depth measurements, researchers have used sparse depth from structure from motion <cit.>, and dense monocular depth priors <cit.> to improve reconstruction, with a focus on appearance. Assimilating dense non-metric depth (e.g. <cit.>) is often challenging due to the presence of an unknown affine degree of freedom which needs to be estimated across many views.
However, without diversity of viewpoints, measuring the geometry directly is often useful. Several hardware solutions for digitizing objects exist, ranging from consumer level 3D scanners (e.g. <cit.>), and room scale metrology devices (<cit.>) to high precision hand held 3D scanners (e.g. <cit.>). Although these systems measure geometry very accurately, they interpret appearance as diffuse reflectance and often fall short in modelling view-dependent effects.
Despite the known effectiveness of incorporating depth and the widespread availability of dense metric depth sensors in smartphone cameras (<cit.>) and as standalone devices (<cit.>), the incorporation of dense metric depth into neural 3D scene understanding is underexplored.
In this work:
* we present a method to incorporate dense metric depth into the training of neural 3D fields, enabling state-of-the-art methods to use dense metric depth with minor changes.
* We investigate an artifact (<ref>) commonly observed while jointly refining shape and appearance. We identify its cause as existing methods' inability to differentiate between depth and texture discontinuities. We address it by using depth edges as an additional supervision signal.
We demonstrate our ideas using a robot mounted multi-flash stereo camera rig developed in-house from off-the-shelf components. This device allows us to capture a diverse range of scenes with varying complexity in both appearance and geometry. Using the captured data, we demonstrate results in reconstruction, view interpolation, geometry capture, and relighting with a few views. We hope that our full-stack solution comprising of the camera system and algorithms will serve as a test bench for automatically capturing small scenes in the future. Additional results may be viewed at https://stereomfc.github.iohttps://stereomfc.github.io and in the supplementary document.
§ RELATED WORK
View synthesis and reconstruction of shapes from multiple 3D measurements is an important problem in computer vision with highly efficient and general solutions like volumetric fusion (<cit.>), screened Poisson surface reconstruction (<cit.>), patch based dense stereopsis (<cit.>) and joint refinement of surface and appearance <cit.>. While these continue to serve as robust foundations, they fall short in capturing view-dependent appearance. Additionally, even with arbitrary levels of discretization, they often oversmooth texture and surfaces due to data association relying on weighted averages along the object surface.
Recent neural 3D scene understanding approaches (e.g. <cit.>) have avoided this by adopting a continuous implicit volumetric representation to serve as the geometric and appearance back-end of the view synthesizer. Together with continuous models, reasoning about appearance as radiance, and high frequency preserving embeddings<cit.>, these approaches serve as highly capable view interpolators by reliably preserving view dependent appearance and minute geometric details. More recent work has included additional geometric priors in the form of monocular depth supervision (<cit.>), sparse depth supervision from structure-from-motion toolboxes (<cit.>), dense depth maps (<cit.>), patch based multi-view consistency <cit.>, and multi-view photometric consistency under assumed surface reflectance functions (<cit.>). Our work builds on the insights from using dense depth supervision to improve scene understanding with only a few training views available.
Novel hardware is often used for collecting supervision signals in addition to color images to aid 3D scene understanding. <cit.> demonstrate a method to incorporate a time-of-flight sensor. <cit.> demonstrate a method to extract geometric and radiometric cues from scenes captured with a commercial RGBD sensor and improve view synthesis with a few views. Event based sensors have also been used to understand poorly lit scenes with fast moving cameras (<cit.>). Researchers have also combined illumination sources with cameras to capture photometric and geometric cues for dense 3D reconstruction of scenes with known reflectances (<cit.>). Similarly, <cit.> capture geometry and reflectance of objects by refining multi-view color, depth and multi-illumination images. Given the recent advances in stereo matching (<cit.>) we use a stereo camera to collect data for view synthesis to disambiguate between shape and appearance at capture.
Pairing illumination sources with imaging can improve reasoning about the appearance in terms of surface reflectance parameters. <cit.> approaches the problem of material capture using a variety of neural and classical techniques. <cit.> leverage recent neural scene understanding techniques to jointly learn shape and appearance as reflectance of the scene. Our work also pairs illumination sources with stereo cameras to capture multi-illumination images from the scene and we build on modern neural techniques for view synthesis and relighting the scene.
§ METHODS
We follow related works (<cit.>) and represent the scene with two neural networks – an intrinsic network 𝒩(θ) and an appearance network 𝒜(ϕ) which are jointly optimized to capture the shape and appearance of the object. 𝒩(θ) is a multi-layer perceptron (MLP) with parameters (θ) and uses multi-level hash grids to encode the inputs (<cit.>). It is trained to approximate the intrinsic properties of the scene – the scene geometry as a neural signed distance field 𝒮(θ) and an embedding ℰ(θ). The appearance network 𝒜(ϕ) is another MLP which takes ℰ(θ) and a frequency encoded representation of the viewing direction and returns the scene radiance along a ray.
Prior work has jointly learned 𝒮, 𝒩, 𝒜 with only multi-view images by optimizing a loss in the form of <ref> using stochastic gradient descent <cit.> along a batch of rays projected from known camera centers to the scene.
ℓ = ℓ_C + λ_gℓ_D + λ_c𝔼(|∇^2_𝐱𝒮(𝐱_s)|)
λs are hyperparameters and the third term in <ref> is the mean surface curvature minimized against the captured surface normals (see <cit.>). As the gradients of the loss functions ℓ_C (appearance loss) and ℓ_D (geometry loss) propagate through 𝒜 and 𝒩 (and 𝒮 as it is part of 𝒩) the appearance and geometry are learned together.
We describe our method of incorporating dense metric depth in <ref> which enables a variety of neural 3D representations (<ref>) to use it. In <ref> we jointly optimize shape and appearance of a scene using information about scene depth edges.
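A minimal PyTorch skeleton of the two networks described above is sketched below. To keep the example dependency-free, the multi-level hash encoding is replaced by a plain frequency encoding, and the layer widths and activations are illustrative; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class IntrinsicNet(nn.Module):
    """Maps a 3D point to (signed distance, geometric embedding).
    The paper's multi-level hash encoding is replaced here by a simple
    frequency encoding for the sake of a self-contained sketch."""
    def __init__(self, emb_dim=64, n_freq=6, hidden=128):
        super().__init__()
        self.n_freq = n_freq
        in_dim = 3 + 3 * 2 * n_freq
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.Softplus(beta=100),
                                 nn.Linear(hidden, hidden), nn.Softplus(beta=100),
                                 nn.Linear(hidden, 1 + emb_dim))

    def encode(self, x):
        feats = [x]
        for k in range(self.n_freq):
            feats += [torch.sin(2 ** k * torch.pi * x), torch.cos(2 ** k * torch.pi * x)]
        return torch.cat(feats, dim=-1)

    def forward(self, x):
        out = self.mlp(self.encode(x))
        return out[..., :1], out[..., 1:]          # SDF S(x), embedding E(x)

class AppearanceNet(nn.Module):
    """Maps (embedding, frequency-encoded view direction) to RGB radiance."""
    def __init__(self, emb_dim=64, n_freq=4, hidden=128):
        super().__init__()
        self.n_freq = n_freq
        in_dim = emb_dim + 3 + 3 * 2 * n_freq
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, emb, view_dir):
        d = [view_dir] + [f(2 ** k * torch.pi * view_dir)
                          for k in range(self.n_freq) for f in (torch.sin, torch.cos)]
        return self.mlp(torch.cat([emb] + d, dim=-1))
```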
§.§ Incorporating dense metric depth
Given a large number of orthogonal view pairs (viewpoint diversity) and the absence of very strong view dependent effects, <ref> is expected to guide 𝒮 towards an unbiased estimate of the true scene depth (see e.g. <cit.>). We can accelerate the convergence by providing a high quality biased estimate of the scene depth. Given the quality of modern deep stereo (<cit.>) and a well calibrated camera system, a handful of aligned RGBD sequences can serve as a good initial estimate of the true surface depth in the absence of diverse viewpoints.
In this section we describe our method to directly optimize 𝒮 with estimates of true surface depth to any surface point 𝐱_s.
Although <cit.> use the depth estimates directly, they fall short of modelling view-dependent effects. To avoid that, we elect to learn a continuous and locally smooth function that approximates the signed distance function of the surface 𝐱_s which can then be transformed to scene density (<cit.>). To do this, we roughly follow <cit.> and consider a loss function of the form
ℓ_D(θ) = ℓ_𝐱_s + λ𝔼(||∇_𝐱𝒮(𝐱^Δ, θ)|| - 1)^2
where, ℓ_𝐱_s = 1/NΣ_∀𝐱[ 𝒮(𝐱, θ) + 1-⟨∇_𝐱𝒮(𝐱, θ), 𝐧_x ⟩]
Through the two components of ℓ_𝐱_s, the loss encourages the function 𝒮(𝐱, θ) to vanish at the observed surface points and the gradients of the surface to align at the measured surface normals (𝐧_x). The second component in <ref> is the Eikonal term (<cit.>) which encourages the gradients of 𝒮 to have a unit L_2 norm everywhere. The individual terms of <ref> are averaged across all samples in a batch corresponding to N rays projected from a known camera.
The Eikonal constraint applies to the neighborhood points 𝐱^Δ_s of each point in 𝐱_s. <cit.> identifies candidate 𝐱^Δ_s through a nearest neighbor search, where as <cit.> identifies 𝐱^Δ through random perturbations of the estimated surface point along the projected ray. As we have access to depth maps, we identify the variance of the neighborhood of 𝐱_s through a sliding window maximum filter on the depth images. This lets us avoid expensive nearest neighbor lookups for a batch of 𝐱_s to generate better estimates of 𝐱^Δ_s than <cit.> at train time. As a result, convergence is accelerated – (∼ 100× over <cit.>) with no loss of accuracy. As we used metric depth, noisy depth estimates for parts of the scene are implicitly averaged by 𝒮 optimized by minimizing <ref>, making us more robust to errors than <cit.>. We provide more details in the supplementary material.
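The geometry objective above and the depth-derived neighbourhood sampling can be sketched in PyTorch as follows; the window size, the Eikonal weight, and the use of |𝒮| at surface points are our assumptions.

```python
import torch
import torch.nn.functional as F

def neighbourhood_scale(depth_img, win=5):
    """Local depth variation via a sliding-window max filter on a
    (1, 1, H, W) depth map; the window size is an assumption."""
    pad = win // 2
    d_max = F.max_pool2d(depth_img, win, stride=1, padding=pad)
    d_min = -F.max_pool2d(-depth_img, win, stride=1, padding=pad)
    return (d_max - d_min).clamp(min=1e-3)

def sdf_depth_loss(sdf_net, x_s, n_s, delta, lam=0.1):
    """x_s: (B, 3) surface points back-projected from the depth map,
    n_s: (B, 3) measured surface normals, delta: (B,) per-point neighbourhood
    scale. sdf_net(x) is assumed to return (sdf, embedding)."""
    x_s = x_s.detach().requires_grad_(True)
    sdf, _ = sdf_net(x_s)
    grad = torch.autograd.grad(sdf.sum(), x_s, create_graph=True)[0]
    # SDF vanishes at measured surface points; gradient aligns with measured normals
    l_surf = sdf.abs().mean() + (1.0 - (grad * n_s).sum(dim=-1)).mean()

    # Eikonal term at perturbed neighbourhood points x^Delta
    x_d = (x_s + delta[:, None] * torch.randn_like(x_s)).detach().requires_grad_(True)
    sdf_d, _ = sdf_net(x_d)
    grad_d = torch.autograd.grad(sdf_d.sum(), x_d, create_graph=True)[0]
    l_eik = (grad_d.norm(dim=-1) - 1.0).pow(2).mean()
    return l_surf + lam * l_eik
```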
§.§ Incorporating depth edges in joint optimization of appearance and geometry
Prior works (<cit.>) show the benefits of jointly refining geometry and appearance as it affords some degree of geometric super-resolution and more stable training. However, some pathological cases may arise when the scene has a large variation in appearance corresponding to a minimal variation in geometry across two neighboring surface points – 𝐱_s and 𝐱_s^Δ. We investigate this effect by considering an extreme case – a checkerboard printed on matte paper with an inkjet printer, where there is no geometric variation (planar geometry) or view dependent artifacts (ink on matte paper is close to Lambertian) corresponding to a maximum variation in appearance (white on black). The qualitative results are presented in <ref>.
Consider two rays r⃗_𝐱_s and r⃗_𝐱_s^Δ connecting the camera center and two neighboring points 𝐱_s and 𝐱_s^Δ on two sides of a checkerboard edge included in the same batch of the gradient descent. The total losses for those rays depend on the sum of the geometry and appearance losses (<ref>). By default, the current state of the art (<cit.> etc.) does not have a mechanism to disambiguate between texture and geometric edges (depth discontinuities).
As seen in <ref>, given unsuitable hyperparameters, the approaches will continue to jointly update both geometry and appearance to minimize a combined loss (<ref>).
This can often result in pathological reconstructions (left insets in <ref>) due to ℓ_C gradients dominating over ℓ_D. By gradually increasing the modelling capacity of 𝒩 we can somewhat avoid this artifact and force the gradient updates to focus on 𝒜 to minimize the cumulative loss. <cit.> recognize this and provide an excellent set of hyperparameters and training curricula to gradually increase the modelling capacity of 𝒩(ϕ). This results in remarkable geometric reconstructions for well known datasets (<cit.>). Alternatively, if we have per-pixel labels of geometric edges (𝐄, <ref>), we can preferentially sample image patches with low variation of geometric features when the model capacity is lower (𝒮(θ) tends to represent smoother surfaces), and focus on image patches with geometric edges when the model capacity has increased. The modelling capacity of 𝒜(ϕ) never changes.
<Ref> describes our sampling procedure while learning a scene with a variety of geometric and texture edges. <Ref> is used to draw pixel samples – the probability of drawing pixel p_i is calculated as a linear blend of the likelihood that it belongs to the set of edge pixels 𝐄 and α is a scalar (α∈ [0, 1]) proportional to the progress of the training.
P(p_i|α) = (1-α)P(p_i ∈𝐄) + α P(p_i ∉𝐄)
To preserve the geometric nature of the edges while ruling out high frequency pixel labels, we use Euclidean distance transform (<cit.>) to dilate 𝐄 before applying <ref>. We provide implementation details in the supplementary material for reproducibility. We discuss quantitative results in <ref>.
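A sketch of the sampling procedure is given below. It follows the curriculum described in the text (smooth regions early in training, depth edges later); the dilation width, the 0.5 edge threshold, and sampling with replacement are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_sampling_probs(edge_likelihood, alpha, dilation=5):
    """edge_likelihood: (H, W) in [0, 1], per-pixel depth-edge likelihood from
    the multi-flash cue; alpha in [0, 1] is training progress."""
    # treat pixels with likelihood >= 0.5 as edge pixels, then dilate via the
    # Euclidean distance transform so that nearby pixels also count as edges
    dist = distance_transform_edt(edge_likelihood < 0.5)
    p_edge = np.where(dist <= dilation, np.maximum(edge_likelihood, 0.5), edge_likelihood)
    # early in training (alpha ~ 0) favour smooth regions, later (alpha ~ 1) favour edges
    p = (1.0 - alpha) * (1.0 - p_edge) + alpha * p_edge
    return p / p.sum()

def sample_pixels(probs, n_rays, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(probs.size, size=n_rays, p=probs.ravel(), replace=True)
    return np.unravel_index(idx, probs.shape)   # (rows, cols) of sampled pixels
```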
§.§ Baselines augmented with depth
As baselines, we augment four state-of-the-art methods to incorporate metric depth:
is our implementation of AdaptiveShells (<cit.>) using metric depth. We retain the formulations for the scene geometry and appearance models, and adapt the formulation of the “shells” to use dense metric depth. Through we also demonstrate how dense metric depth can combine the advantages of volumetric and surface based representations in <ref>.
is our augmented version of <cit.>, where we use the metric depths along the rays to optimize the geometry <ref>. All the other parts of the original approaches, including the methods for generating samples to minimize <ref> and scene density transforms are left unchanged.
is our augmented version of <cit.>, where we also use <ref> to optimize the geometry. The rest of the algorithm including the background radiance field is left intact.
 is our deliberately hamstrung version of <cit.> where we force the samples generated for the volumetric rendering step to have a very low variance around the current biased estimate of the surface. This makes the algorithm necessarily indifferent to the relative magnitudes of ℓ_D and ℓ_C in <ref>, and helps us exaggerate the pathological effects of not segregating texture and geometric edges. We chose to name the method this way after we (and <cit.>) observed that the original method was vulnerable to this artifact under certain hyperparameter choices. Across all the methods, we implement and train 𝒩(θ) following <cit.>, and all of them use the same appearance network 𝒜(ϕ). and require a warm start – 𝒮 pre-optimized for 5K gradient steps. All methods except use the sampling strategy from <ref>. More details are in the supplementary material.
§ SETUP AND DATASET
§.§ A multi-flash stereo camera
In addition to multi-view images, scene depth and depth edges are valuable signals to train neural 3D representations. To capture all the supervision signals, we designed and fabricated a multi-flash stereo camera based on insights from <cit.>, with off-the-shelf parts. We capture data by moving our camera rig in front of objects. For each camera pose we capture a stereo pair of high dynamic range (HDR) images, two depth maps from left and right stereo, and two corresponding image-aligned surface normal maps (as gradients of the depth maps). We first tonemap the HDR images <cit.> and in-paint them with the depth edges before using <cit.> to preserve intricate surface details in the depth maps. Additionally, we capture 12 pairs of multi-illumination images for the 12 flash lights around the cameras, one light at a time. From the multi-flash images we recover a per-pixel likelihood of depth edges in the scene and a label of pixels with a large appearance variation under changing illumination – relating to specularity. We detail the design of our rig and the capture processes in the supplementary material. <Ref> shows a snapshot of the data captured, <ref> illustrates our camera rig prototype.
We elected to calculate depth from stereo because it performs better than the following three alternatives we tested. 1) Recovering geometry from intrinsic-image-decomposition (<cit.>) and photometric stereo with a few lights (<cit.>) did not yield satisfactory results. PIE-Net (<cit.>) requires 256×256 images, which were too low-resolution for reconstruction, and our captures were out-of-distribution for the pre-trained model. <cit.> assumes fixed lights – we’d need new light calibrations per-view. 2) Self-calibrating-photometric-stereo (<cit.>), demonstrated on <cit.>, needs 50-80 light views and accurate masks which we do not capture. Also, our lights are much closer to the camera than <cit.>. And, 3) modern camera-projector systems (<cit.>) yield better estimates of geometry than stereo, but are not fast enough to capture human subjects (<ref>).
§.§ Dataset
Although a data set is not the primary contribution of our research, we capture some salient aspects of the scene that are not present in several established datasets. We identify these aspects in <ref>. In the rows labeled “specularity” and “depth edges” we note if the dataset has explicit labels for the specular nature of the pixel or a presence of a depth edge at that pixel respectively. Under “illum. model” we note if an explicit illumination model is present per scene – we do not capture an environment illumination model, and instead provide light poses. PaNDoRa does not have explicit specularity labels but polarization measurements at pixels may be used to derive high quality specularity labels, which are better than what our system natively captures. We differentiate between “OLAT” (one light at a time) and “flash” by the location of the source of illumination. Similar to <cit.>, our flashes are parallel to the imaging plane, located close (∼0.1f) to the camera, as opposed to ReNE and OpenIllumination.
§ EXPERIMENTS AND RESULTS
§.§ Accuracy of incorporating metric depth
We reconstruct synthetic scenes with ground truth depth from <cit.> to measure the accuracy of our technique. We use 12-15 RGBD images to reconstruct the scenes and train for an average of 30k gradient steps (∼1500 epochs) in about 75 minutes. In contrast, <cit.> use 300+ RGBD tuples and 9+ hours of training on comparable hardware. Notably, <cit.> also optimizes for noise in camera poses and reports metrics with ground truth and optimized poses. We report the best metric among these two. <cit.> registers the images themselves. We register the RGBD images with a combination of rigid and photometric registration (<cit.>). We present the quantitative results in <ref>. We replicate or out-perform the baselines by using a fraction of the training data and gradient steps. Among all methods discussed in <ref>, and demonstrate similar performance, recovers a smoother surface at the expense of ∼ 1.25× more gradient steps. Our errors on these synthetic datasets closely reflect the performance of <cit.> on approximating surfaces from low noise point clouds. These datasets do not have large view dependent appearance variations to affect the gradient updates.
§.§ The effect of depth edges in training
We tested fused RGBD maps from stereo and four baselines from <ref> to investigate the effect of depth and texture edges. We use edge guided sampling (<ref>) for all except stereo and to prioritize learning geometric discontinuities over appearance. We present the results in <ref>. All of the baselines except improve the reconstruction accuracy due to segregation of texture and depth edges. The smoothness enforced by the curvature loss in <ref> also improves the surface reconstruction over stereo.
§.§ View synthesis with dense depth
Incorporating dense metric depth and our sampling strategy from <ref> enables , , and to perform competitively across challenging scenes. Scene A (<ref>(a)) looks at a couple of reflective objects with large variation in view dependent appearance. Additionally, there are large local errors in the captured depth maps due to specularities in the scene. We capture six stereo pairs, train on 11 images and test on one image. Scene B (<ref>(b)) features a rough metallic object of relatively simple geometry captured by a 16mm lens (450 mm focal length, shallow depth of field). We capture four stereo pairs, train on seven images and test on one image. Scene C (<ref>(c)) features a fairly complicated geometry and is captured with 12 stereo pairs. We train on 22 images and test on two. Quantitative results of our experiments are in <ref>. We observe that , which is roughly 15% faster per gradient step than , generally converges the fastest (wall clock time) to a target PSNR. When the geometry is very complicated (scene C), an equally complicated sampling volume negates the efficiency gains of our sampler. We could not find good parameters for for any of these sequences.
View synthesis was unsuccessful without the inclusion of dense depth. We trained with no depth supervision (equivalent to <cit.>) until saturation (less than 0.1 PSNR increase for 1000 consecutive epochs). The reconstructions, none of which had a PSNR of 18 or higher, are shown in the last column of <ref>.
§.§ Using noisy depth
To investigate the effects of noise in the depth maps, we obtain the depths of scenes using conventional stereo. We used semi-global matching stereo (<cit.>) with a dense census cost (<cit.>) and sub-pixel refinement on tone mapped HDR images to calculate the surface depth. Surface normals were calculated using the spatial gradients of the depth maps.
To focus on the performance of our approaches, we did not filter or smooth the depth obtained from conventional stereo. From the top of <ref>, we observe that strictly improves the quality of the surface reconstructed from just noisy stereo (row 1 and 2 versus row 3), especially when edge sampling is enabled.
If the end goal is just view synthesis, , which blends the advantages of volumetric and surface based rendering, performs equally well with large noise in depth, whereas takes many more iterations to converge. This indicates that photorealistic view synthesis with a volumetric renderer is possible with noisy depth data. However conventional stereo often introduces large local errors which our approaches were unable to improve significantly.
In the presence of noisy depth, the quality of the reconstructed surface was enhanced through edge-based sampling (<ref>). Our sampling strategy allocated samples away from depth edges, where the noise was more prevalent, leading to fewer gradient steps spent modelling areas with higher noise. <Ref> presents the quantitative details of the experiment.
§.§ Relighting
We capture multi-illumination images with known light poses and recover geometry independently of appearance. This allows us to infer the illumination dependent appearance using a combination of physically based appearance parameters – e.g. the Disney Principled BRDF<cit.>. As a benchmark, we upgraded the closest related work, <cit.>, which uses the full gamut of the Disney BRDF parameters, with , to incorporate dense depth. For the data we collected, the optimization process as implemented by <cit.>, was quite brittle and some parameters (e.g. `clearcoat-gloss') would often take precedence over other appearance parameters (e.g. `specular-tint') and drive the optimization to a poor local minima. We demonstrate this problem in detail in the supplementary material. We found the optimization of a subset of appearance parameters (`base-color', `specular-tint', and `roughness') to be the most stable. <cit.> conclude the same.
For relighting, we explore two avenues – the inference step of our approach as a volumetric renderer and a mesh created with the appearance parameters as texture (<ref>). Quantitative and qualitative results of the volumetric renderer are shown in <ref>. We used <cit.> to unwrap the geometry and generate texture coordinates whose quality exceeded <cit.> and our implementation of <cit.>. None of our approaches worked on the ReNe dataset (<ref>, <cit.>) due to low view diversity, and the absence of metric depth. We used the labels in <ref> to allocate more gradient steps for learning the regions with higher appearance variation. We provide more details in the supplementary material.
§ LIMITATIONS
Although we achieve state of the art results in view-synthesis and relighting with a few views, our approach struggles to represent transparent objects and accurately capture the geometry of reflective surfaces. <cit.> address the problem of reflective objects by modelling background reflections and is based on the architecture proposed by <cit.>. As enables <cit.> to use possibly noisy metric depth, it can potentially be extended to model reflective objects.
Our approaches require metric depth and depth edges for the best performance. Our approach relies on capture devices with reasonable quality depth measurements. Future work will address incorporation of monocular and sparse depth priors with depth edges.
Incorporation of metric depth introduces a strong bias, often limiting super resolution of geometry sometimes achieved in neural 3D scene representation (see e.g. <cit.>). Decreasing the effect of <ref> during training may potentially encourage geometric superresolution and is future work.
Finally, modern grid based representations (see e.g. <cit.>) produce very compelling view interpolation results at a fraction of the computational cost of a state of the art volumetric renderer (e.g. <cit.>). However, they need to be “distilled” from a pre-trained volumetric view interpolator. Future work can investigate the use of depth priors to train a grid based representation directly from color and depth images.
§ CONCLUSIONS
We present a solution to incorporate dense metric depth into neural 3D reconstruction which enables state of the art geometry reconstruction. We examine a corner case of jointly learning appearance and geometry and address it by incorporating additional supervision signals. Additionally, we describe a variant of the multi-flash camera to capture the salient supervision signals needed to improve photorealistic 3D reconstruction and demonstrate a pipeline for view synthesis and relighting of small scenes with a handful of training views.
ieeenat_fullname
This supplementary document inherits the figure, equation, table, and reference numbers from the main document. Additional results may be viewed at https://stereomfc.github.iohttps://stereomfc.github.io.
§ REPRESENTATIONS AND IMPLEMENTATION DETAILS
Our scene representation consists of two networks – an intrinsic network 𝒩(θ) and an appearance network 𝒜(ϕ). We follow <cit.> to build and train 𝒩(θ). We use 18 levels of hashgrid encodings <cit.> to encode the input and a two layer (128 neurons/layer) MLP to generate the intrinsic embedding. The first channel of the embedding, 𝒮(θ) is trained with <ref> to recover a signed distance field of the scene as described in <ref>. The rest of the 127 channels of the embedding ℰ(θ) are passed on to the appearance network 𝒜(ϕ) as an input.
The appearance network takes ℰ(θ), the viewing direction (encoded with 6 levels of sinusoidal encodings following <cit.>), and optionally the illumination direction (if recovering BRDF) to generate colors. The neural network is built with 2 layers of fully connected MLPs (128 neurons/layer) with skip connections.
The neural signed distance field 𝒮(θ) is optimized to return the signed distance of a point from its nearest surface 𝒮(θ): ℝ^3 →ℝ.
The surface of the object can be obtained from the zero-level set of 𝒮(θ) – i.e. for all surface points 𝐱_s ∈ℝ^3 | 𝒮(𝐱_s|θ) = 0. We train 𝒮(θ) by minimizing a geometric loss ℓ_D (<ref>). We follow <cit.> to transform the distance of a point p⃗_i = r⃗|_t_i in a ray to its closest surface s_i = 𝒮(θ,p⃗_i) to the scene density (or transmissivity).
Ψ_β(s) = 0.5 exp(s/β) if s ≤ 0, and Ψ_β(s) = 1 - 0.5 exp(-s/β) otherwise.
To render the color 𝐂 of a single pixel of the scene at a target view with a camera centered at o⃗ and an outgoing ray direction d⃗, we calculate the ray corresponding to the pixel r⃗ = o⃗ + td⃗, and sample a set of points t_i along the ray. The networks 𝒩(θ) and 𝒜(ϕ) are then evaluated at all the 𝐱_i corresponding to t_i to obtain the per-point color 𝐜_i and transmissivity τ_i, which are composited together using the quadrature approximation from <cit.> as:
𝐂 = ∑_i exp( - ∑_{j<i} τ_j δ_j ) (1 - exp(-τ_i δ_i)) 𝐜_i, where δ_i = t_i - t_{i-1}
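A minimal PyTorch sketch of this compositing step, assuming the densities and colors for the samples along one ray have already been predicted (network queries and batching are omitted):

import torch

def composite_color(tau, c, t):
    # tau: (N,) densities, c: (N, 3) per-sample colors, t: (N,) sorted sample depths along the ray
    delta = t[1:] - t[:-1]                                   # delta_i = t_i - t_{i-1}
    tau_i, c_i = tau[1:], c[1:]
    alpha = 1.0 - torch.exp(-tau_i * delta)                  # per-segment opacity
    accum = torch.cumsum(tau_i * delta, dim=0)
    trans = torch.exp(-torch.cat([torch.zeros(1, device=t.device), accum[:-1]]))  # exp(-sum_{j<i} tau_j delta_j)
    weights = trans * alpha
    return (weights.unsqueeze(-1) * c_i).sum(dim=0)          # composited pixel color C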
The appearance can then be learned using a loss on the estimated and ground truth color 𝐂_gt
ℓ_C = 𝔼[||𝐂 - 𝐂_gt||^2]
The appearance and geometry are jointly estimated by minimizing the losses in <ref> using stochastic gradient descent <cit.>.
ℓ = ℓ_C + λ_gℓ_D + λ_c𝔼(|∇^2_𝐱𝒮(𝐱_s)|)
λs are hyperparameters and the third term in <ref> is the mean surface curvature minimized against the captured surface normals. As the gradients of the loss functions ℓ_C and ℓ_D propagate through 𝒜 and 𝒩 (and 𝒮 as it is part of 𝒩) the appearance and geometry are learned together.
§.§ Details of our baselines
We implemented four baselines to investigate the effects of incorporating dense metric depth and depth edges into neural view synthesis pipelines.
is our method similar to VolSDF <cit.> and MonoSDF<cit.>. We represent the scene with 𝒩 and 𝒜 and train it with metric depth and color by minimizing <ref>. The samples for <ref> are drawn using the “error-bounded sampler” introduced by <cit.>.
represents a modified version of NeUS <cit.>, where we use the training schedule and structure of 𝒩 from <cit.>, the appearance network 𝒜 is adopted from NeUS and we optimize <ref> along with <ref>. In addition to 𝒜, also has a small 4 layer MLP (32 neurons per layer) to learn the radiance of the background as recommended in the original work by <cit.>.
is our method inspired by UniSurf<cit.>. We represent the scene's geometry using a pre-optimized implicit network 𝒩 as outlined in <ref>. We follow the recommendations of <cit.> to optimize 𝒜. UniSurf exposes a hyperparameter to bias sampling of <ref> towards the current estimate of the surface. As we pre-optimize the surface, we can find the surface point 𝐱_s = 𝐨+t_s𝐝 through sphere tracing 𝒮 along a ray. The intersection point t_s can then be used to generate N samples along the ray to optimize <ref>.
t_i ∼ 𝒰[ t_s + ( (2i-2)/N - 1 )Δ, t_s + ( 2i/N - 1 )Δ ]
<Ref> is the distribution used to draw samples and Δ is the hyperparameter that biases the samples to be close to the current surface estimate. We optimize 𝒮 independent of <ref> by just minimizing <ref> with registered depth maps (see <ref>). We use this method to study the effects of volumetric rendering versus surface rendering. We found this strategy to be very sensitive to the hyperparameter Δ and its decay schedule as the training progressed. While best parameters for some sequences resulted in very quick convergence, they were very hard to come across and generally, poorer choices led to undesirable artifacts (see e.g. <ref>).
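For reference, a small sketch of the surface-biased stratified sampler in the equation above; the sphere tracing of 𝒮 that produces t_s is abstracted away, and Δ is the bias hyperparameter discussed in the text.

import numpy as np

def surface_biased_samples(t_s, N, delta):
    rng = np.random.default_rng()
    i = np.arange(1, N + 1)
    lo = t_s + ((2 * i - 2) / N - 1.0) * delta   # left edge of the i-th stratum
    hi = t_s + ((2 * i) / N - 1.0) * delta       # right edge of the i-th stratum
    return rng.uniform(lo, hi)                   # one uniform draw per stratum in [t_s - delta, t_s + delta]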
We describe in the following section.
§.§ : Accelerating training with dense depth
The slowest step in training and inference for neural volumetric representations is generally the evaluation of <ref>. In this section we describe our method to accelerate training by incorporating metric depth. A method to make training more efficient involves drawing the smallest number of the most important samples of t_i for any ray. The sampling of t_i is based on the current estimate of the scene density and although these samples can have a large variance, given a large number of orthogonal view pairs (viewpoint diversity), and the absence of very strong view dependent effects, the training procedure is expected to recover an unbiased estimate of the true scene depth (see e.g. <cit.>). We can accelerate the convergence by a) providing high quality biased estimate of the scene depth and b) decreasing the number of samples for t_i along the rays.
Given the high quality of modern deep stereo (we use<cit.>) and a well calibrated camera system, stereo depth can serve as a good initial estimate of the true surface depth. We use stereo depth, aligned across multiple views of the scene to pre-optimize the geometry network 𝒮(θ). The other channels ℰ(θ) of 𝒩 remain un-optimized. A pre-optimized 𝒮 can then be used for high quality estimates of ray termination depths.
<cit.> recommend using root finding techniques (e.g. bisection method) on scene transmissivity (<ref>) to estimate the ray termination depth. The samples for <ref> are then generated around the estimated surface point. Drawing high variance samples as 𝒩 and 𝒜 are jointly optimized reduces the effect of low quality local minima, especially in the initial stages of the optimization. As we have a pre-trained scene transmissivity field (𝒮 transformed with <ref>), we can draw a few high-quality samples to minimize the training effort.
We found uniformly sampling around the estimated ray-termination depth (baseline in <ref>) to be unsuitable. Instead, we pre-calculated a discrete sampling volume by immersing 𝒮 in an isotropic voxel grid and culling the voxels which report a lower than threshold scene density. We then used an unbiased sampler from <cit.> to generate the samples in this volume. This let us greatly reduce the number of root-finding iterations and samples, while limiting the variance by the dimensions of the volume along a ray. As the training progresses, we decrease the culling threshold to converge to a thinner sampling volume around the surface while reducing the number of samples required.
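As a rough illustration of this discrete sampling-volume construction, the sketch below voxelizes a pre-optimized SDF and culls low-density voxels. The grid resolution, β, the threshold, and the VolSDF-style convention of evaluating the density transform at the negated signed distance are assumptions of the sketch, not our exact implementation.

import numpy as np

def build_shell_mask(sdf_fn, bbox_min, bbox_max, res=128, beta=0.05, density_thresh=0.05):
    # Voxelize the bounding box and query the pre-optimized SDF at voxel centers.
    axes = [np.linspace(lo, hi, res) for lo, hi in zip(bbox_min, bbox_max)]
    pts = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    s = sdf_fn(pts)                                   # signed distances, negative inside (assumed)
    u = -s                                            # assumed sign convention for the density transform
    density = np.where(u <= 0, 0.5 * np.exp(u / beta), 1.0 - 0.5 * np.exp(-u / beta))
    keep = density >= density_thresh                  # cull voxels that report a lower-than-threshold density
    return keep.reshape(res, res, res)                # True = voxel belongs to the sampling volume ("shell")

# Tightening the cull over training shrinks the kept set toward a thin volume around the zero-level set,
# and an unbiased density-weighted sampler is then restricted to the kept voxels along each ray.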
We show the sampling volume (at convergence) and our reconstruction results in <ref> respectively. We retain the advantages of volumetric scene representation as demonstrated by the reconstruction of the thin structures in the scene, while reducing training effort. We dub our method to acknowledge <cit.>, which demonstrates a related approach to accelerate inference.
The “shells” shown in <ref> and the shells recovered by <cit.> for small scenes are physically similar quantities. <cit.> dilate and erode the original level-set of the scene (approximated by 𝒮) using a hyperparameter. Our “shells” are also jointly estimated with the geometry as the training progresses. <cit.> estimate the fall-off of the volume density values along a ray to determine the hyperparameters, which in turn determines the thickness of the “shell”. They subsequently use uniform sampling (similar to <ref>, where the Δ now denotes the local thickness of the shell) to generate samples for rendering. Our work takes a discrete approach by immersing the zero-level set (in form of pre-optimized 𝒮) in a dense isotropic voxel grid and culling the voxels which have a lower volume density, according to a preset hyperparameter that determines the thickness of the shell. Once the shell has been estimated, we use a unbiased density weighted sampler (instead of a uniform sampler) to generate samples along the ray inside the shell. We roughly follow <cit.> to generate samples along a segment of the ray guided by the voxels it intersects. The spatial density of samples is inversely proportional to their distance from the estimated surface. We implement this using the tools from NerfStudio<cit.>. Our sampling strategy is more robust to errors in estimated geometry (as shown in <ref>) than other approaches –notably and the original work (Adaptive Shells <cit.>) which is based on NeUS(<cit.>).
§.§ Training Details
We ran our experiments on a Linux workstation with an Intel Core i9 processor, 64GB RAM, and an Nvidia RTX3090Ti graphics card with 24GB of vRAM. Across all the experiments for learning scene radiance, we implemented a hard cut-off of 100K gradient steps, amounting to less than 4.5 hours of training time.
Across all our baselines (, , , and ) we used the intrinsic network proposed in <cit.>, with 2 layers of MLPs (128 neurons per fully connected layer) and 18 levels of input hash encodings activated gradually. Our input activation curriculum was based on the recommendations of <cit.>, and was used jointly with our edge-aware sampling strategy <ref> across all the scenes.
The implementations of our baselines, design and bill-of-materials for the multi-flash camera system, dataset and the hyperparameters will be released soon. <Ref> denotes the training steps graphically.
§.§ Difference between our and prior work on neural scene understanding with depth
IGR<cit.> was among the first to fit a neural surface to point samples of the surface. Our pipeline is largely inspired by that work. However, we have two main differences – we use a smaller network, and periodically activate multi-resolution hash encodings as recommended by <cit.> instead of using a fully connected set of layers with skip connections. Additionally, as we have access to depth maps, we identify the variance of the neighborhood of a point on the surface through a sliding window filter. We use this local estimate of variance in a normal distribution to draw samples for 𝐱_s^Δ along each ray. Our strategy assumes that image-space pixel neighbors are also world space neighbors, which is incorrect along the depth edges. However, as the Eikonal equation should be generally valid in ℝ^3 for 𝒮, the incorrect samples do not cause substantial errors and only contribute as minor inefficiencies in the pipeline. A more physically based alternative, following <cit.>, would be executing nearest neighbor queries at each surface point along the rays to estimate the variance for sampling. With about 80k rays per batch, ∼ 200K points in (𝐱_s), and about 40k gradient steps executed till convergence, and a smaller network, our approach was more than two orders of magnitude faster than <cit.>, with no measurable decrease in accuracy of approximating the zero-level set of the surface.
NeuralRGBD<cit.> is the closest prior work based on data needed for the pipeline and its output. The scene is reconstructed using color and aligned dense metric depth maps. The authors aggregate the depth maps as signed distance fields and use the signed distance field to calculate weights for cumulative radiance along samples on a ray (<ref> in text). The weights are calculated with
w_i = σ(D_i/tr) × σ(-D_i/tr)
where D_i is the distance to the surface point along a ray, and the truncation tr denotes how fast the weights fall off away from the surface. <Ref> yields surface-biased weights with a variance controlled by the parameter tr. Notably, the depth map aggregation does not yield a learned signed distance field (no Eikonal regularizer in the loss). The authors also include a `free-space' preserving loss to remove "floaters". As implemented, the pipeline needs the truncation factor to be selected per-scene. As the depth maps are implicitly averaged by a neural network, they are implicitly smoothed, and therefore the pipeline is robust to local noise in the depth map.
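For illustration, a tiny numpy sketch of these surface-biased weights; the final normalization along the ray is a common convention and not necessarily the original implementation.

import numpy as np

def neural_rgbd_weights(D, tr):
    # D: (N,) signed distances to the surface along the ray, tr: truncation distance
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    w = sigmoid(D / tr) * sigmoid(-D / tr)     # peaks at the surface (D = 0), falls off with |D| / tr
    return w / np.maximum(w.sum(), 1e-8)       # normalize along the ray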
MonoSDF<cit.> is mathematically the closest prior method to our work and it uses dense scene depths and normals obtained by a monocular depth and normal prediction network (OmniData<cit.>). MonoSDF defines the ray length weighted with the scene density as the scene depth 𝐝_pred and minimizes
ℓ_D = ∑_r||𝐰𝐝_mono+𝐪 - 𝐝_pred||_2^2
where {𝐰,𝐪} are scale and shift parameters. Estimating an affine transformation on the monocular depth 𝐝_mono is important because in addition to gauge freedom (𝐰), monocular depths also have an affine degree of freedom (𝐪). The scale and shift can be solved using least squares to align 𝐝_mono and 𝐝_pred. The scene normals are calculated as gradients of 𝒮 weighted with scene density along a ray. Through a scale and shift invariant loss, MonoSDF calculates one set of (𝐰,𝐪) for all the rays in the batch corresponding to a single training RGBD tuple. In the earlier stages of the training, this loss helps the scene geometry converge. The underlying assumption is that there is an unique tuple {𝐰,𝐪} per training image that aligns 𝐝_mono to the actual scene depth captured by the intrinsic network 𝒩.
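The per-image scale and shift can be recovered with an ordinary least-squares fit over the rays of a training image; a minimal sketch (the variable names are ours):

import numpy as np

def align_scale_shift(d_mono, d_pred):
    # Solve min_{w, q} || w * d_mono + q - d_pred ||_2^2 for one training image.
    A = np.stack([d_mono, np.ones_like(d_mono)], axis=1)
    (w, q), *_ = np.linalg.lstsq(A, d_pred, rcond=None)
    return w, q

# usage: w, q = align_scale_shift(d_mono, d_pred); residual = w * d_mono + q - d_pred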
Our experiments with MonoSDF indicate that the network probably memorizes the set of (𝐰,𝐪) tuples per training image. Explicitly passing an unique scalar tied to the training image (e.g. image index as proposed in <cit.>) speeds up convergence significantly. Success of MonoSDF in recovering both shape and appearance strongly depends on the quality of the monocular depth and normal predictions. Our experiments on using MonoSDF on the WildLight dataset(<cit.>) or the ReNe dataset (<cit.>) failed because the pre-trained Omnidata models performed poorly on these datasets. Unfortunately, as implemented, MonoSDF also failed to reconstruct scene geometry when the angles between the training views were small – ReNe dataset views are maximally 45^∘ apart. However, it demonstrates superior performance on the DTU and the BlendedMVS sequences while training with as low as three pre-selected views. Finally, our scenes were captured with a small depth of field and most of the background was out of focus, so the scene background depth was significantly more noisy than the foreground depth. We sidestepped this problem by assigning a fixed 1m depth to all the pixels that were in the background. Although this depth mask simplifies our camera pose estimation problem (by segregating the foreground from the background), it assigns multiple infeasible depths to a single background point. As we aggregate the depth maps into the intrinsic network (𝒩) by minimizing <ref>, the network learns the mean (with some local smoothing) of the multiple depths assigned to the single background point. However, the scale and shift invariant loss is not robust to this and with masked depth maps, we could not reliably optimize MonoSDF on our sequences. We suspect that this is because the scale and shift estimates for each instance of <ref> on the background points yielded very different results, de-stabilizing the optimization.
<cit.> and <cit.>
use sparse scene depth in the form of SfM triangulated points. <cit.> use learnt spatial propagation <cit.> to generate dense depth maps from the sparse depth obtained by projecting the world points triangulated by SfM. <cit.> assign the closest surface depth at a pixel obtained by projecting the triangulated points to the image plane. Neither of these pipelines recovers a 3D representation of the scene; both focus on view synthesis using few views.
<cit.> introduce a novel 3D representation – “Neural point clouds” which includes geometric and appearance feature descriptors (small MLPs) grounded to a point in 3D. The geometry is recovered as the anchors of the “neural points”. The appearance is calculated using a volumetric renderer which composits the outputs of the appearance descriptors of the neural points with the transmissivity of the neural points along the ray. The transmissivity of a neural point is calculated as a function of distances of a pre-set number of neighboring neural points.
§.§ Capturing approximate BRDF and generating textured meshes
Multi-illumination images captured by our camera system can be used to estimate surface reflectance properties. We recover a truncated Disney BRDF model(<cit.>. Our model consists of a per pixel specular albedo, a diffuse RGB albedo, and a roughness value to interpret the observed appearance under varying illumination. To estimate the spatially varying reflectance, we first train a model (, or ) to convergence to learn the appearance as radiance. At convergence, the first channel 𝒮 of the intrinsic network 𝒩 encodes the geometry and the appearance network 𝒜 encodes the radiance. We use two of the embedding channels of ℰ to predict the roughness and specular albedo at every point on the scene. The diffuse albedo is obtained as the output of the converged appearance network 𝒜. To calculate the appearance, we apply the shading model (<cit.>) to calculate the color at every sample along a ray and volumetrically composite them using <ref> to infer appearance as reflectance. <Ref> describes our steps graphically.
Optimizing for the full set of the Disney BRDF parameters, following <cit.>, did not work with our approach as the optimization often got stuck at poor local minima. <Ref> shows one instance of optimizing the pipeline of <cit.>, where the strengths of the recovered `clearcoat' and `clearcoat-gloss' parameters dominated over the optimization of the other parameters, resulting in a waxy appearance. Choosing a more conservative set of parameters (only `base-color', `specular' and `roughness') in <ref> led to a more realistic appearance. WildLight<cit.> is based on <cit.> – we substituted <cit.> with and the appearance model of <cit.> was not changed, minimizing the chance of introducing a bug that causes the artifact.
Our process of generating texture and material properties roughly follows the methods described by <cit.> and <cit.>. We proceed through the following steps:
* At convergence (see <ref>), we extracted the scene geometry using the method described in <cit.>.
* We calculate a depth mask by thresholding the depth images at every training view with an estimate of the scene depth to segregate the foreground from the background.
* Next, we cull the resulting triangular mesh (step 1) by projecting rays from every unmasked (foreground) pixel corresponding to all the camera views. This lets us extract the main subject of our scene as a mesh. We use Embree<cit.> to implement this.
* We generate texture coordinates on the culled mesh using “Smart UV Unwrap” function from <cit.>. These results were qualitatively better than <cit.> and our implementation of <cit.>. We then rasterize the culled mesh from step 3 to get points on the surface corresponding to the texture coordinates.
* We project each of these surface points back on to each of the training views to get the image coordinates. Rays originating from a rasterized surface point and intersecting the surface before reaching the camera are removed to preserve self occlusion.
* For all the valid projected points, we cast a ray onto the scene and use either , , or to generate the color at the pixel along the ray using <ref>. This is repeated for all the training views.
* At the end of the previous step we have several measurements of colors at every texture coordinate of the scene. We apply a median filter (per color channel) to choose the color – taking averages or maxima of the samples introduces artifacts. If using the radiance as texture is sufficient (often the case for diffuse scenes) this textured mesh can be exported. <Ref> demonstrates using each of , and, to calculate the diffuse color of the scene in <ref>.
* To generate material textures, we follow the same procedures with the corresponding material channels after , or has been trained on multi illumination images using the schedule outlined in <ref>.
* The material properties are also volumetrically composited using <ref> and median filtered like the base colors. This is different from just querying the value of the network at the estimated surface point in <cit.>.
We use <cit.>, a web browser based tool that supports physically based rendering with the Disney BRDF parameters, and <cit.> to generate the images in <ref> respectively.
§ A MULTI-FLASH STEREO CAMERA
We capture the scene using a binocular stereo camera pair with a ring of lights that can be flashed at high intensity. For our prototype, we use a pair of machine vision cameras (<cit.>) with a 1”, 4MP CMOS imaging sensor with a resolution of 2048 × 2048 pixels. As we focus mainly on small scenes, we use two sets of lenses that yield a narrow field of view – 12mm and 16mm fixed focal length lenses (<cit.>). We use 80W 5600K white LEDs (<cit.>) flashed by a high-current DC power supply switched through MOSFETs controlled with an Arduino microcontroller. At each pose of our rig, we captured 12 images with each of the flash lights on (one light at a time) and one HDR image per camera. The cameras are configured to return a 12 bit Bayer image which is then de-Bayered to yield a 16 bit RGB image.
For the HDR images, we performed a sweep of exposures from the sensor's maximum (22580 microseconds) in 8 stops and used <cit.> to fuse the exposures captured with ambient illumination (fluorescent light panels in a room). Following the recommendations of <cit.> we used an f-stop of 2.8 to ensure the whole scene is in the depth of field of the sensors. We found the recommendations from <cit.> to be incompatible with our pipeline, so we used Reinhard tone-mapping (<cit.>) to re-interpret the HDR images. Our image localization pipeline, and stereo matching also worked better with tonemapped images.
We set the left and right cameras to be triggered simultaneously by an external synchronization signal. We configured the camera frame acquisition and the illumination control programs to run in the same thread and synchronized the frame acquisition with the flashes through blocking function calls. <Ref> presents a schematic of our prototype device.
Through experiments we observed that the vignetting at the edges of the frames was detrimental to the quality of reconstruction, so we only binned the central 1536 × 1536 pixels. A 16bit 1536×1536 frame saved as a PNG image was often larger than 10MB. To achieve a faster capture and training time without sacrificing the field of view, we down sampled the images to a resolution of 768×768 pixels for our experiments. Centered crops of our initial larger frames led to failures of our pose-estimation pipelines due to the field of view being too narrow (<ref>), so we chose to down sample the images instead. For the images lit by a single LED, we used the camera's auto exposure function to calculate an admissible exposure for the scene and used 80% of the calculated exposure time for imaging – the built-in auto-exposure algorithm tended to over-expose the images a bit. Estimating the exposure takes about 2 seconds. Once the exposure value is calculated, it is used for all of the 12 flashes for each camera.
Several instances of these RGBD tuples are collected and the colored depth maps are registered in the 3D space in two stages – first coarsely using FGR <cit.> and then refined by optimizing a pose graph<cit.>. At the end of this global registration and odometry step, we retain a reprojection error of about 5 - 10 pixels. If the reprojection errors are not addressed, they will cause the final assets to have smudged color textures. To address it, we independently align the color images using image-feature based alignment techniques common in multi-view stereo (<cit.>), so that a sub 1 pixel mean squared reprojection error is attained. The cameras aligned in the image-space are then robustly transformed to the world space poses using RANSAC<cit.> with Umeyama-Kabsch's algorithm<cit.>. Finally, we mask out the specular parts of the aligned images and use ColorICP <cit.> to refine the poses. The final refinement step helps remove any small offset in the camera poses introduced by the robust alignment step. A subset of the data collected can be viewed on the project website.
§.§ Identifying pixels along depth edges
To identify pixels along depth edges, we follow <cit.> and derive per-pixel likelihoods of depth edges. Assuming that the flashes are point light sources and the scene is Lambertian, we can model the observed image intensity for the k^th light illuminating a point 𝐱 with reflectance ρ(𝐱) on the object as
𝐈_k(𝐱) = μ_k ρ(𝐱) ⟨𝐥_k(𝐱), 𝐧(𝐱) ⟩
where μ_k is the intensity of the k^th source and 𝐥_k(𝐱) is the normalized light vector at the surface point. 𝐈_k(𝐱) is the image with the ambient component removed. With this, we can calculate a ratio image across all the illumination sources
𝐑(𝐱) = 𝐈_k(𝐱)/𝐈_max(𝐱) = μ_k ⟨𝐥_k(𝐱), 𝐧(𝐱) ⟩/max_i (μ_i ⟨𝐥_i(𝐱), 𝐧(𝐱) ⟩)
It is clear that the ratio image 𝐑(𝐱) of a surface point is exclusively a function of the local geometry. As the light source to camera baselines are much smaller than the camera to scene distance, except for a few detached shadows and inter-reflections, the ratio images (<ref>) are more sensitive to the variations in geometry than any other parameters. We exploit this effect to look for pixels with largest change in intensity along the direction of the epipolar line between the camera and the light source on the image. This yields a per-light confidence value of whether 𝐱 is located on a depth edge or not. Across all 12 illumination sources, we extract the maximum values of the confidences as the depth edge maps. Unlike <cit.>, we use 12 illumination sources 30^∘ apart, and we do not threshold the confidence values to extract a binary edge map. This lets us extract more edges especially for our narrow depth of field imaging system and gets rid of hyper parameters used for thresholding and connecting the edges.
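A hypothetical sketch of this per-pixel depth-edge confidence from the multi-flash ratio images is below; the Sobel-based directional gradient and the sign convention for the step toward the shadow side are simplifications we assume, not the exact original procedure.

import numpy as np
from scipy.ndimage import sobel

def depth_edge_confidence(flash_images, light_dirs_2d):
    # flash_images: (K, H, W) ambient-subtracted single-flash images
    # light_dirs_2d: (K, 2) unit image-plane directions from the camera center toward each light
    I_max = np.maximum(flash_images.max(axis=0), 1e-6)
    conf = np.zeros(flash_images.shape[1:])
    for I_k, (dx, dy) in zip(flash_images, light_dirs_2d):
        R = I_k / I_max                                    # ratio image for this light
        g = dx * sobel(R, axis=1) + dy * sobel(R, axis=0)  # intensity change along the epipolar direction
        conf = np.maximum(conf, np.clip(-g, 0.0, None))    # keep the strongest negative step per pixel
    return conf / np.maximum(conf.max(), 1e-6)             # per-pixel depth-edge confidence in [0, 1]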
Often parts of our scene violate the assumption of Lambertian reflectances resulting in spurious depth edges. When we use depth edges for sampling, these errors do not affect the accuracy of our pipeline. When using depth edges for enhancing stereo matching (<ref>) we ensure that the stereo pairs do not contain too many of these spurious edge labels to introduce noise in our depth maps.
§.§ Identifying patches with non-Lambertian reflectances
We modified the definition of differential images in the context of near-field photometric stereo introduced by <cit.> to identify non-Lambertian patches. Assuming uniform Lambertian reflectances, <ref> can be expanded as
𝐈_k(𝐱) = μ_k^* ρ(𝐱) 𝐧(𝐱)^T (𝐬_k-𝐱)/|𝐬_k-𝐱|^3
where 𝐬_k is the location and μ_k^* is the power of the k^th light source. We define the differential images as 𝐈_t = (∂𝐈/∂𝐬) 𝐬_t, where 𝐬_t = ∂𝐬/∂t, which when applied to <ref> can be expanded as
𝐈_t(𝐱) = 𝐈(𝐱) 𝐧^T𝐬_t/𝐧^T(𝐬-𝐱) - 3𝐈(𝐱) (𝐬-𝐱)^T𝐬_t/|𝐬-𝐱|^2
Observing that the light sources move in a circle around the center of projection on the imaging plane, 𝐬^T𝐬_t = 0. Also, the second term of <ref> is exceedingly small given that the plane spanned by 𝐬_t is parallel to the imaging plane and our choice of lenses limit the field of view of the cameras. The second term is further attenuated by the denominator |𝐬-𝐱|^2 because the camera-to-light baselines (𝐬) are at least an order of magnitude smaller than the camera to object distance (𝐱). As a result, under isotropic reflectances (Lambertian assumed for this analysis) the differential images 𝐈_t(𝐱) are invariant to circular light motions. Any observed variance therefore can be attributed to the violations of our isotropic BRDF assumptions. We identify specular patches by measuring the variance of this quantity across the 12 instances of the flashlit images.
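The sketch below illustrates this specularity cue: under the circular light motion the differential images are (nearly) invariant for isotropic reflectance, so per-pixel variance across the flashes flags non-Lambertian patches. Approximating ∂𝐈/∂t by finite differences over the ordered light ring is our assumption.

import numpy as np

def specular_likelihood(flash_images):
    # flash_images: (K, H, W), one image per flash, ordered around the light ring
    I_t = np.roll(flash_images, -1, axis=0) - flash_images   # differential images along the ring
    var = I_t.var(axis=0)                                    # variance across the K flashes
    return var / np.maximum(var.max(), 1e-6)                 # normalized likelihood of a specular pixel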
Although our pipelines for identifying depth edges and patches of varying appearances demonstrate satisfactory qualitative performance, sometimes they yield wrong labels because <ref> do not include additional terms for spatially varying BRDFs and interreflections respectively. These errors do not have any significant effect in our reconstruction pipeline as we use this information to generate samples during different phases of training to minimize photometric losses and we do not directly infer shape or reflectances from these steps.
§.§ Difference between <cit.> and our hardware
<cit.> was the first to propose pairing flashes with cameras and laid the groundwork for identifying depth edges from multi-flash images from a single viewpoint. However, <cit.> considered a monocular camera and only four flashes along the horizontal and vertical directions of the camera in the demonstrated device. Researchers (see e.g. <cit.>) have since extended it by placing multiple light sources far apart from a monocular camera and have demonstrated locating depth edges on objects with strictly Lambertian reflectances. In this work, we retain the original light and camera configuration from <cit.> and increase the number of lights from four to 12.
<cit.> also investigated a stereo camera in a multi-flash configuration aimed at edge preserving stereo depth maps. They do not extend the application to synthesizing geometry or appearance by capturing and assimilating multiple views of the scene. For obtaining stereo depth maps, we use <cit.>, which performs much better than conventional stereo matching (<cit.>) largely deployed in off-the shelf systems (<cit.>).
Both <cit.> discuss methods to detect specularities (termed “material edges”) through different transforms of the multi-light images. However, we achieve a more continuous circular motion of the lights around the cameras, so we choose to use the photometric invariants described by <cit.> instead.
http://arxiv.org/abs/2409.02863v1 (published 2024-09-04; cs.RO, cs.CR, cs.MA)
CONClave -- Secure and Robust Cooperative Perception for CAVs Using Authenticated Consensus and Trust Scoring
Edward Andert, Francis Mendoza, Hans Walter Behrens, Aviral Shrivastava
Arizona State University, Tempe, Arizona, USA ([email protected])
§ ABSTRACT
Connected Autonomous Vehicles have great potential to improve automobile safety and traffic flow, especially in cooperative applications where perception data is shared between vehicles. However, this cooperation must be secured from malicious intent and unintentional errors that could cause accidents. Previous works typically address singular security or reliability issues for cooperative driving in specific scenarios rather than the set of errors together. In this paper, we propose CONClave – a tightly coupled authentication, consensus, and trust scoring mechanism that provides comprehensive security and reliability for cooperative perception in autonomous vehicles. CONClave benefits from the pipelined nature of these steps, such that faults can be detected significantly faster and with less compute. Overall, CONClave shows huge promise in preventing security flaws, detecting even relatively minor sensing faults, and increasing the robustness and accuracy of cooperative perception in CAVs while adding minimal overhead.
CONClave - Secure and Robust Cooperative Perception for CAVs Using Authenticated Consensus and Trust Scoring
Aviral Shrivastava
September 9, 2024
====================================================================================================
§ INTRODUCTION
Cooperative autonomous vehicle operation has the potential to make roadways dramatically more safe and efficient <cit.>.
Even if all the technicalities of executing a cooperative maneuver can be solved, there are still problems securing such maneuvers from unauthorized participants. Beyond that, there is the issue of preventing both malicious vehicles that intentionally disrupt a cooperative application and faulty vehicles that unintentionally disrupt a cooperative application due to an error. For example, a malicious vehicle could try to get ahead in the queue for a cooperative intersection by falsifying data. A fault, on the other hand, could be an autonomous vehicle that has a sensor malfunction and is sharing bad data with another vehicle that is blindly trusting the data to see around a corner that is out of its own sensor range. Even if the exact reaction to these types of situations is vehicle or vendor specific, the overarching identification and prevention of all disrupting vehicles, whether intentional or not, is paramount to running successful cooperation amongst autonomous vehicles.
Detecting malicious and unintentional faults requires multiple steps, including authentication and verification of incoming data <cit.>. However, without a trusted third party with its own sensors involved in every vehicle area network, the scope increases to include consensus <cit.>. Existing state of the art methods typically treat authentication, consensus, and trust scoring either separately or together in a limited subset of cooperative scenarios. Guo et al. propose a method to log events using blockchain, but their approach completely ignores the problem of keeping out unauthorized participants and also does not have any mechanism to keep authenticated users from making up events <cit.>. More recently, trust scoring methods have been used in place of proof of work. For instance, Mankodiya et al. <cit.> use a specialized ML-based trust scoring method that could take the place of proof of work, but it is not coupled with a consensus method and therefore cannot reap those extra benefits. Bhattacharya et al. do tackle the authentication and consensus problems at once, but too many assumptions are made for the specific application, and therefore their approach will not work for general cooperative scenarios <cit.>.
This paper presents CONClave – an application-level network protocol designed for sensor networks that require reliable and trustworthy data in the context of Cooperative Autonomous Vehicles (CAVs) and Cooperative Infrastructure Sensors (CISs). The three primary contributions of CONClave are:
* A three party homomorphic hashing based authentication process which includes the manufacturer, a third party authority/government, and the vehicle itself. This inclusion ensures that all entities (CAVs and CISs) that wish to participate in the system must have the approval of both the manufacturer and governmental stakeholders.
* A BOSCO-based single-shot consensus protocol that works in dynamically changing geo-spatial vehicular networks by limiting the latency and resource requirement of the consensus protocol on non-discrete sensed values. Instead of generating consensus on a common world-view, CONClave generates consensus on the individual world-view provided by each agent. This eliminates Byzantine attacks on the network, leaving the common world-view generation work for the next sensor fusion step.
* A perception trust scoring technique that reports an accuracy score by utilizing sensor and recognition pipeline characterization data as the accuracy predictor, allowing for errors to be detected down to the individual sensor level that are not picked up by other state of the art methods. This trust scoring technique is tightly coupled with the authentication and consensus step so that it can operate in place of a proof of work to improve real-time performance.
CONClave was tested against the state of the art trust scoring method TruPercept <cit.> using fault and malicious injection on 1/10 scale model autonomous vehicles, with a motion capture system as ground truth. 1100 faults and malicious attacks were injected over the course of 14 different scenarios while varying the severity and number of the faults/injections. CONClave detected 96.7% of the 300 sensor extrinsic faults injected, 83.5% of the 300 software faults injected, 67.3% of the 300 malicious injections and removals, and 100% of the 200 communication faults and malicious injections that we subjected it to. On the other hand, the state of the art method TruPercept only detected 29.6% of sensor extrinsic faults, 34% of software faults, 32.6% of malicious injections and removals, and 19.6% of the communication faults and malicious injections. Overall, CONClave had a mean time to detection that was 1.83x faster on average and 6.23x faster in the best case when compared to TruPercept on the faults TruPercept could detect.
§ RELATED WORK
Authentication: When distributed agents communicate in the field, authentication is critical or the network is open to Sybil attacks <cit.>. Further complicating the issue, conditions often prevent real-time communication with a central server, and local resource constraints limit processing and storage <cit.>. Handy et al. assume a more difficult task with no centralized authority or setup phase, but participants establish keys with each new participant through a process that does not consider the Sybil threat <cit.>. Wang et al. rely on specialized hardware such as Physically Uncloneable Functions (PUFs), an impractical choice for real-world deployments <cit.>. Similarly, approaches that rely on trusted execution environments (TEEs) are susceptible to eventual compromise and key extraction (e.g. via cold boot attacks <cit.> or side channels <cit.>). To address these challenges, we use a three-way knowledge partitioning between a government entity, manufacturer, and each individual participant. To allow for reconstruction, we rely on an approach that allows for intermediate hash composition using the homomorphic hash tree described by Behrens et al. <cit.>. Though this produces larger hashes, it allows for asymmetric reassembly of hashes, compartmentalizing information, and preventing the compromise of any one party from undermining the security of the authentication protocol <cit.>.
Consensus: In a distributed environment, cooperative perception algorithms can quickly succumb to byzantine faults <cit.>. Whether due to communication dropout or malicious intent, faults will manifest themselves as data corruption in the subsequent sensor fusion step. A popular way to solve these issues is byzantine fault tolerant consensus, however, consensus on non binary values is slow. Han et al. solve this by eschewing the need for consensus on all sensor values by bounding the problem to just nearby vehicle positions in a platoon in addition to many other specializations <cit.>. However, this approach will not work for general cooperative perception. To address this challenge, our approach relies on a semi synchronous distributed Byzantine tolerant consensus on the data each party sent, rather than coming to consensus on the correctness of that data. The correctness proof is left for the subsequent trust scoring step in the pipeline. This technique keeps the consensus itself lightweight and eschews the need for any proof of work by using the trust score of the sensed values computed next as substitute.
Trust Scoring: In a cooperative perception environment, a minor disagreement in sensor input caused by a sensor fault or malicious actor could result in a catastrophic incident and must be prevented <cit.>. Cavorsi et al. propose a method to apply a trust score against robots sensing local traffic in their region that can detect adversaries and lower the percentage error in the locally fused traffic estimate <cit.>. However, it is not clear how this method can be generally applied to cooperative perception, nor does it take into account the expected accuracy of the sensors involved. Hurl et al. propose a cooperative perception specific trust scoring method and test it using simulated data <cit.>. The trust score is applied to a sensor fusion algorithm as a weight, and the result is a better sensor fusion. Their method is limited to the case where the CAV sensor configurations are uniform, containing both a camera and a LIDAR, as they use the camera confidence as the expected accuracy of each sensed object and the LIDAR point count as a proxy for visibility. To address this, we create a trust scoring system that uses a generalized error estimation technique for heterogeneous sensing platforms borrowed from Andert et al. while consuming the results of the previous consensus step to prevent Byzantine faults <cit.>.
§ OUR APPROACH
§.§ Overview
To achieve secure consensus and trust scoring of CAVs and other sensing infrastructure for reliable cooperative driving, CONClave proposes a three-step process, depicted at a high level in figure <ref>. First, all participants are authenticated to address the risk of Sybil attacks. We create a novel authentication scheme that leverages homomorphic hashing, incorporates both the manufacturer and a government entity, and allows participants to authenticate each other in such a way that participants in a consensus round don't always need to have communication with a trusted RSU. Next, we come to consensus on the sensor values that all participants submit to the consensus round using a Byzantine fault tolerant consensus protocol, such that communication faults such as packet delays or dropped messages don't manifest themselves later as errors in the output <cit.>. We reduce the computation time by bounding the problem to consensus on the sensor values each participant sent, using the BOSCO consensus protocol, which was modified to be semi-synchronous <cit.>. Finally, a trust scoring technique is applied to the sensing input set that results from the consensus round, to verify the correctness of the data each participant sent. Instead of using camera confidence values as an accuracy indicator like Hurl et al. use, we use parameterized sensor pipeline accuracy values from Andert et al. <cit.>. This, along with being closely coupled with a sensor fusion technique, allows our trust scoring to be both fast and more accurate than the previous state of the art. Our trust scoring not only improves the accuracy of cooperative perception, it also serves as a replacement for the proof of work in our consensus step. All of this combines to prevent most known attack vectors and errors that can occur in a cooperative perception environment. Next, we explain the three steps of CONClave and their working in more detail.
§.§ Three Party Authentication
A high level depiction of our authentication setup process can be seen in algorithm <ref>. To initialize, both the law enforcement/governmental agency and manufacturer generate a secret key known only to themselves denoted as S_g and S_m respectively. The manufacturer generates an asymmetric keypair P_c, S_c and stores it locally on the vehicle; crucially, we do not require any central database of these keys (line 2). In an interactive process, the two players exchange the CAV/CIS's identity and generate a challenge Chal_c and response hash Resp_c which is also stored locally (line 3-5). This inclusion ensures that all CAVs/CISs which wish to participate in the system have the approval of both stakeholders. Note that neither party exchanges their secret keys S_m or S_g, instead using hashed versions Chal_c and Resp_c to prevent inappropriate use.
Next, both stakeholders must periodically go through an interactive process to refresh what we call as a round signature Sig_r – a validity mechanic that allows for either party to exit from participation (lines 8-13). Stakeholders may tune the frequency of this process to increase or decrease the duration in which CAVs/CISs may operate asynchronously. Once generated, these signatures Sig_r are securely distributed to each RSU. As vehicles travel within range of an RSU, they may choose to issue a renewal request to that RSU (line 7). These messages are encrypted with a CAV/CIS's private key, and the corresponding public key is transmitted along with the request to provide authenticity but not secrecy. Nonces prevent replay attacks. The RSU may optionally check the CAV/CIS's identifier against a central database to ensure compliance, such as valid licensing or inspection requirements. Once the RSU validates the request, ephemeral challenge and response tokens Chal_t and Resp_t are generated and encrypted with the CAV/CIS's public key before sending them back (lines 14, 15). This ensures that eavesdroppers may not re-use a CAV/CIS's token, as they lack the corresponding private key. Depending on how often rounds change, it may be desirable to store the subsequent tokens to ensure that validation can take place between CAVs/CISs whose round tokens differ in sequence by one.
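The toy sketch below only illustrates the shape of this renewal exchange. It substitutes plain HMAC/SHA-256 for the homomorphic hash tree used by the real protocol, and the message fields and function names are our own guesses, so it should not be read as CONClave's actual construction.

import os, hashlib, hmac

def make_round_signature(S_m: bytes, S_g: bytes, round_id: int) -> bytes:
    # Both stakeholders contribute their secrets to the per-round signature Sig_r (construction assumed).
    return hmac.new(S_m, S_g + round_id.to_bytes(8, "big"), hashlib.sha256).digest()

def issue_ephemeral_tokens(sig_r: bytes, vehicle_id_hash: bytes):
    # RSU-side: derive ephemeral challenge/response tokens bound to this vehicle and round.
    nonce = os.urandom(16)                                                    # nonce prevents replay
    chal_t = hmac.new(sig_r, b"chal" + vehicle_id_hash + nonce, hashlib.sha256).digest()
    resp_t = hmac.new(sig_r, b"resp" + chal_t, hashlib.sha256).digest()
    return nonce, chal_t, resp_t   # encrypted with the vehicle's public key before being sent back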
§.§ Single-shot Consensus
For consensus rounds, we utilize a set area around an intersection with a constant trigger. All CAVs/CISs within range attempt to participate in the round and local IPs are known ahead of time. When establishing a relationship between CAVs/CISs during a given consensus step, each CAV/CIS provides additional metadata with their broadcast to allow for authentication. Each CAV/CIS generates this metadata Chal_c1, and shares it along with a hashed version of its ID ID_c1 and its public key P_c1. Recipients check the received values using their own local tokens ID_c2 and Resp_t2 to ensure compliance, and if valid, they temporarily store the public keys to allow for secure communication during the consensus step.
Next, participants accumulate the sensing messages from all other participants. This stops when a message is received from every known participant or the sensing transmission timeout is reached. Each participant sends out the accumulated set of sensing messages, received with valid authentication, and accumulates the same message from other participants. This stops when a message is received from every known participant or the aggregate transmission timeout is reached. Finally, each participant decides their vote according to the BOSCO algorithm and sends the result to all other participants, as well as nearby trusted RSUs for secure storage <cit.>.
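A simplified, single-threaded sketch of the two accumulation phases described above is shown below; networking, the authentication checks, and the BOSCO vote rule itself are abstracted away, and recv_fn/send_fn are hypothetical callbacks.

import time

def accumulate(recv_fn, participants, timeout_s):
    msgs, deadline = {}, time.monotonic() + timeout_s
    while len(msgs) < len(participants) and time.monotonic() < deadline:
        pkt = recv_fn(timeout=deadline - time.monotonic())   # returns (sender_id, payload) or None
        if pkt is not None and pkt[0] in participants:
            msgs[pkt[0]] = pkt[1]
    return msgs

def consensus_round(recv_fn, send_fn, participants, my_obs, t_sense, t_aggr):
    send_fn(("sense", my_obs))
    sensed = accumulate(recv_fn, participants, t_sense)      # phase 1: authenticated sensing messages
    send_fn(("aggregate", sensed))
    aggregates = accumulate(recv_fn, participants, t_aggr)   # phase 2: echoed aggregate sets
    return aggregates                                        # fed to the BOSCO-style vote and sent to the RSU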
§.§ Accurate Trust Scoring
Our trust scoring method is closely coupled with a UKF-based sensor fusion method, depicted in algorithm <ref>. It can be run by all participants separately or, alternatively, run on a nearby trusted RSU and distributed out. The first step is matching observations to tracks using a JPDA filter while taking into account expected error and bounding box size <cit.>. We utilize the Unscented Kalman Filter (UKF) approach that Andert et al. use for fusion (lines 1-3) <cit.>. Whether the observations coming from each vehicle are used is conditional upon its existing trust score, the sensing standard deviation score (SDS). If the SDS exceeds a threshold SDS_max, the sensor platform is considered untrustworthy and neither its observations nor its own position will be included in the global fusion.
Our method looks at two factors when calculating the trust score: i) was an object supposed to be detected or not?, and ii) was an object detected with the accuracy with which it was expected to be detected? In order to evaluate the first, we determine whether an object should have been seen or not with respect to other sensor platforms using a Byzantine-tolerant voting scheme (lines 4, 5). We iterate through all the tracks and mark a track as existing if the object is within the FOV and range of a sensor and meets the visibility percentage threshold. Our method maintains Byzantine tolerance by requiring that a majority of all participants that should see an object, according to their FOV and modeled obstructions, vote on whether a track exists or not. For the second item, we utilize the estimated accuracy of the local fusion output of each vehicle, as defined by Andert et al. <cit.>, to determine the expected accuracy of each detection.
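A sketch of the Byzantine-tolerant existence vote is shown below; the in_fov_and_range, visibility, and detected helpers are hypothetical interfaces to each platform's sensor model and are not from the paper.

def track_exists(track, participants, vis_threshold):
    eligible, yes = 0, 0
    for p in participants:
        # Only platforms that should be able to see the object get a vote.
        if p.in_fov_and_range(track) and p.visibility(track) >= vis_threshold:  # hypothetical helpers
            eligible += 1
            yes += int(p.detected(track))
    # Byzantine tolerance: require a strict majority of eligible voters.
    return eligible > 0 and 2 * yes > eligible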
All participants within the sensor network need to have a minimum accuracy boundary; otherwise, a possible attack vector would be for a platform to report its sensors as extremely inaccurate. For sensing pipelines, we enforce a minimum sensing range requirement, a minimum FOV requirement, and a minimum sensor error magnitude requirement within this range and FOV. For localization pipelines, we enforce a minimum error magnitude requirement. If any participant's reported values exceed these thresholds, its values will not be considered in the trust scoring and the platform should be sent for repairs, even though it passes the authentication and consensus rounds.
Using the matching results from the JPDA filter and bounding-box step, as well as the fused positions output by the UKF for observability consensus, we have the necessary ingredients to score the reported accuracy of the sensor platform against the fused position from the consensus round (line 6). If a track is reported as existing, the reported accuracy of the detected value of the CAV/CIS is compared with that contained in the fused output of the UKF. For the sensed values from each CAV/CIS that were matched to the track, the reported position <x,y> (or z_k) is subtracted from the estimated position produced by the UKF (or x̂_k|k), shown in the numerator of equation <ref>. P_k|k returned by the UKF encompasses the expected error of the measurements from all sensors involved in the local fusion as well as the estimation from the UKF itself <cit.>. We then normalize by the expected error 𝔼(μ^α_θ), which is contained in Σ^α_θ and can be extracted using the eigenvalues, shown in the denominator of equation <ref>. This is why the result is called the standard deviation score (SDS): it is simply the standard deviation of the measured error returned by the global fusion with respect to the expected error from the measurement covariance in the local fusion of the sensor platform.
In order to keep a sensor from reporting itself as accurate in a vacuum, we enforce a rule of three, meaning at least three sensors must be matched and detecting the same track for a standard deviation frame to be added to the SDS revolving buffer for that object (line 7). This is in addition to the Byzantine-tolerant consensus on the existence or non-existence of the track from all sensors. Furthermore, to keep low-confidence tracks from being levied against a sensor, we enforce the second piece of the rule of three, which dictates that the hypotenuse of the accuracy reported by the roundFusion track (or P_k|k) must be three times as accurate as the hypotenuse of the accuracy reported by the sensor itself (or Σ^α_θ). If all of these conditions hold, the sensor α is attributed the SDS by placing it in the last position of the revolving buffer, which is averaged to create the overall trust score. The rule of three also applies to missed detections: three sensors must agree the object is there, along with the consensus that the track exists. If these constraints are met, any sensor platform that should detect that object but does not will have a missed detection frame added, with the value ρ placed in the buffer in place of the SDS frame. ρ was set to 3 * min_Sensor_Accuracy.
SDS^α_θ = √((λ^↓_0(μ^α_θ) - λ^↓_0(x̂_k|k_θ))^2 + (λ^↓_1(μ^α_θ) - λ^↓_1(x̂_k|k_θ))^2)/√((λ^↓_0(Σ^α_θ) - λ^↓_0(P_k|k_θ))^2 + (λ^↓_1(Σ^α_θ) - λ^↓_1(P_k|k_θ))^2)
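In code, the score for one sensor/track pair could be computed roughly as follows. We read the numerator as the distance between the reported and fused positions (as in the text) and the denominator as the distance between the sorted eigenvalues of the reported and fused error covariances; the variable names are ours and this is only a sketch.

import numpy as np

def sds(mu_reported, x_fused, sigma_reported, p_fused):
    # Numerator: measured error of the reported position w.r.t. the fused UKF estimate.
    num = np.linalg.norm(np.asarray(mu_reported) - np.asarray(x_fused))
    # Denominator: expected error, extracted from the covariance eigenvalues (sorted descending).
    lam_r = np.sort(np.linalg.eigvalsh(sigma_reported))[::-1]
    lam_f = np.sort(np.linalg.eigvalsh(p_fused))[::-1]
    # A small epsilon could be added here to guard against a near-zero denominator.
    return num / np.linalg.norm(lam_r - lam_f)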
For each participant in the round, a new SDS score is calculated as in equation <ref>. This score is then added to that participant's SDS buffer (line 6). The set of SDS scores for each participant constitutes their trust score. This value is shared globally among RSUs and is updated after each consensus round ends. Finally, the tracks are updated using the curated sensor set and new trust scores (line 8).
§ EXPERIMENTAL SETUP
For testing, we utilize 1/10-scale autonomous vehicle replicas. Our setup consists of four scale-model CAVs, each with a front-facing 160-degree-FOV camera and a 360-degree-FOV single-channel LIDAR, and two mounted CISs with a 160-degree-FOV camera. Figure <ref> shows our setup with four CAVs.
Using data collected from a set of 10 ten-minute-long tests for each physical configuration, we perform error injection with the 14 scenarios shown in Table 1. The simplest scenarios are sensor errors that can be easily caused in any autonomous vehicle by jarring a sensor. For E1-E3, the same data from the sensors are used, but the extrinsics of the sensors are skewed by N degrees, resulting in a shift in the data from that sensor. The next category of error is malicious error, in which we purposely inject or remove detections with probability N. Next, we have software errors that manifest themselves as a bad weighting in the sensor fusion, where we change weightings by some N percent. Finally, we have communication faults and attacks, where we cause N vehicles in the simulation to experience a communication error. The tests run with normal operation for a random amount of time between 120 and 540 seconds before we begin injecting the specific fault. If the trust score for a vehicle becomes 1.2x the baseline within 60 seconds after the fault is injected, we consider the fault to be caught and record the MTTD. Each fault injection was run 10 times at each step for a total of 1100 tests.
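The detection criterion can be expressed compactly as below; trust_trace is a list of (timestamp, trust score) pairs for the vehicle under test, and the names are ours rather than the paper's.

def fault_caught(trust_trace, baseline, t_inject, window=60.0, factor=1.2):
    """Return (caught, time-to-detect) under the 1.2x-baseline-within-60s rule."""
    for t, score in trust_trace:
        if t_inject <= t <= t_inject + window and score >= factor * baseline:
            return True, t - t_inject
    return False, None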
§ RESULTS
§.§ Our method quickly detects nearly all extrinsics errors
E1-E3, where the sensors experience a physical sensor shift of N degrees, are shown in figure <ref>. These tests showcase our method's ability to pick out relatively minor errors within the cooperative perception environment. Errors as minor as a two-degree camera shift, a one-degree LIDAR shift, or a one-degree shift combination are caught. Our method detects an impressive 96.7% of the 300 tests. TruPercept is only able to catch large-magnitude errors, such as a ten-degree LIDAR shift or a four-degree or larger camera and LIDAR shift, resulting in a detection rate of 29.6% of the 300 tests. TruPercept does not have a suitable predictor that works when the IOU match is still high but the error variance is higher than it should be, instead relying on the confidence of the camera as a predictor <cit.>. Therefore, TruPercept only detects these sensor extrinsic errors when they result in the IOU of a track dropping below the 50% threshold, which causes there to no longer be a match to the rest of the CAV/CIS reports, and finally the offending CAV/CIS is punished for not detecting the object entirely. Conversely, our method has a concept of how much variance it expects in sensing error, and as that variance starts to leave the expected limits, the vehicle trust score is punished. This variance threshold results in quick detection of minor extrinsics errors which are missed by TruPercept. The RMSE of our method in all tests for E1-E3 is 6.3% higher than TruPercept and 10.3% better than without trust scoring.
§.§ Our method detects malicious errors faster than TruPercept
E4-E6 consist of accidental track removal as well as malicious injection and removal, seen in figure <ref>. These tests are similar to what TruPercept was designed for, with the caveat that we only have a single vehicle experiencing the error, whereas TruPercept had all vehicles experiencing the same probability of error <cit.>. TruPercept performs well in this case, detecting 34% of the 300 tests. Our method beats TruPercept, detecting 67.3% of the 300 tests. This is because a malicious actor that is injecting fake vehicles into its data will not purposely report a low camera confidence. Therefore, in the second test, TruPercept relies completely on the IOU mismatch of detections to identify a problem, whereas our method can detect higher variances and report errors sooner. For the RMSE case, we can see that both our method and TruPercept respond to E4, so our method is only 1.9% better than TruPercept and 5.1% better than without trust scoring. Although these error injections have a high probability, they are filtered out by local sensor fusion, so the RMSE effect is smaller than for E1-E3.
§.§ Our method detects more software errors, and faster
E7-E9 look at common cases of mis-weighting. Overall, TruPercept detected 32.6% of the 300 tests while our method was able to detect 83.5%, as can be seen in figure <ref>. For the E7 localization error and the E8 local sensor fusion mis-weight, we can see that our method detects them but TruPercept misses them, since our method is tuned to detect variance while TruPercept is not. E9, on the other hand, is detected by both methods, with our method just slightly edging ahead in detection speed. Again, this is due to our method's ability to detect larger and smaller variance than expected and report the errors quickly, whereas TruPercept has to wait until detections start to mismatch IOU-wise before it will start to detect the errors, and therefore takes longer to respond and has a worse result. The RMSE of our method was 3.9% better than TruPercept and 5.8% better than no trust scoring.
§.§ Our method detects many communication faults and attacks
E10-E14 showcase our method's tolerance to a variety of errors and attack vectors that are not typically captured by a trust scoring system alone. This is apparent when looking at figure <ref> on the bottom right, where only one of the errors, E14, is detected by TruPercept. TruPercept detected 19.6% of communication faults and attacks while our method detected a perfect 100% of the 200 tests. Furthermore, our method detected E10-E13 in less than two seconds, or two consensus rounds. We did not compare RMSE because TruPercept, as well as the baseline technique without trust scoring, was rendered inoperable beyond recovery in the case of E10, E11, and E13.
§ CONCLUSION
In this paper we presented a method to secure cooperative perception-based applications for connected autonomous vehicles. Our method consists of three parts, an authentication method, a consensus round, and a trust scoring method, which are pipelined so that the system can run in real time. It was able to detect more categories of faults and errors, including both malicious and unintentional errors, while being faster than the state-of-the-art method TruPercept. In future work, we would like to expand our method to work for all cooperative driving scenarios, including those that need path-plan trust scoring.
|
http://arxiv.org/abs/2409.02911v1 | 20240904175028 | Bulk Spectra of Truncated Sample Covariance Matrices | [
"Subhroshekhar Ghosh",
"Soumendu Sundar Mukherjee",
"Himasish Talukdar"
] | math.ST | [
"math.ST",
"math.PR",
"stat.TH"
] |
§ ABSTRACT
Determinantal Point Processes (DPPs), which originate from quantum and statistical physics, are known for modelling diversity. Recent research <cit.> has demonstrated that certain matrix-valued U-statistics (that are truncated versions of the usual sample covariance matrix) can effectively estimate parameters in the context of Gaussian DPPs and enhance dimension reduction techniques, outperforming standard methods like PCA in clustering applications. This paper explores the spectral properties of these matrix-valued U-statistics in the null setting of an isotropic design. These matrices may be represented as X L X^⊤, where X is a data matrix and L is the Laplacian matrix of a random geometric graph associated to X. The main mathematically interesting twist here is that the matrix L is dependent on X. We give complete descriptions of the bulk spectra of these matrix-valued U-statistics in terms of the Stieltjes transforms of their empirical spectral measures. The results and the techniques are in fact able to address a broader class of kernelised random matrices, connecting their limiting spectra to generalised Marčenko-Pastur laws and free probability.
§ INTRODUCTION
The explosion of large-scale data, often referred to as "big data", has transformed industries, research fields, and everyday life in recent years. The phenomenon of massive-scale data has called for new approaches to modelling and analysis. In particular, the quest for diverse samples that enable a more parsimonious representation of data has led to connections with statistical physics, wherein models of strongly repulsive particle systems have been leveraged to augment the diversity of features in machine learning procedures.
A key model in that respect is that of determinantal point processes or DPPs. A DPP is a probability distribution over subsets of a given ground set, such that the probability of a subset is proportional to the determinant of a kernel matrix corresponding to the subset. DPPs are known for their ability to model diversity, making them useful for selecting a set of items that are spread out over the feature space. Originating in quantum and statistical physics, DPPs have quickly grown to have an increasing impact as a significant component of a machine learning toolbox based on negative dependence.
A major parametric model of DPPs that has attracted attention in recent years is that of the Gaussian Determinantal Processes, abbrv. GDP <cit.>. In particular, it was shown in <cit.> that a certain matrix-valued statistic
Σ̂ = 1/2n^2∑_1 ≤ i, j ≤ n𝕀(‖X_i - X_j‖≤ r) (X_i - X_j)(X_i - X_j)^⊤,
where (X_i)_i=1^n ⊂ℝ^p are data points and r is a suitably chosen threshold, effectively performs parameter estimation in the GDP model.
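For concreteness, a direct (memory-hungry, O(n²p)) NumPy sketch of this estimator is given below. It is our own illustration, not code from the paper, and it stores the data points as rows of X rather than columns.

import numpy as np

def truncated_cov(X, r):
    """Sigma-hat above: X has shape (n, p), one data point per row; r is the threshold."""
    n, p = X.shape
    diff = X[:, None, :] - X[None, :, :]                  # all pairwise differences, shape (n, n, p)
    mask = (np.linalg.norm(diff, axis=2) <= r).astype(float)
    S = np.einsum('ij,ijk,ijl->kl', mask, diff, diff)     # sum of indicator-weighted outer products
    return S / (2.0 * n**2)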
Σ̂ may be viewed as a certain truncation of the sample covariance matrix (based on pairwise distances between data points). Further, it was demonstrated empirically in <cit.> that this matrix-valued statistic can be leveraged as an ansatz to build dimension reduction tools that arguably outperform standard PCA-based methods, especially in the context of clustering applications.
An understanding of this latter phenomenon would require an understanding of the spectrum of the matrix in (<ref>). In this paper, we take a first step towards this by studying its bulk spectrum, modelling the data as i.i.d. centered random variables. This is the most fundamental and basic setting in which one first needs to understand the behaviour and properties of the matrix in (<ref>).
In fact, we are able to analyse a broader class of matrix-valued statistics which widely generalises Σ̂, by incorporating a general class of kernel functions K(X_i,X_j) in lieu of the distance based cutoff function (X_i - X_j≤ r). We perform a detailed analysis of the bulk spectrum of this broad class of kernelised random matrices and obtain a concrete description of their limiting spectral distributions as the size of the dataset n and the dimension p go to ∞ in a way such that p/n → c ∈ (0, ∞) (the so-called proportional asymptotics regime). In particular, in the smooth case, where the kernelised interaction is a suitably regular function of their mutual interaction, we can explicitly characterise the limiting spectral distribution as a parameterised family of Marčenko-Pastur laws (see Theorem <ref>). In the non-smooth case, we obtain a certain generalised Marčenko-Pastur law as the limiting spectral distribution (see Theorem <ref>). See Figure <ref> for histograms of the bulk spectra of Σ̂ for different choices of the threshold r. See Figure <ref> for an example of a kernelised version of Σ̂ with a smooth kernel. We also obtain the limiting spectral distribution in the semi-high-dimensional regime where p / n → 0 and p ≫√(n) (see Theorem <ref>).
The main insight that goes into analysing the spectrum of the matrix in (<ref>) is to represent it as a matrix-valued Rayleigh quotient:
1/n^2 X L X^T,
where L is the Laplacian matrix of the random geometric graph on n vertices whose edges are given by 𝕀(‖X_i - X_j‖≤ r). It is to be noted that matrices of the form X A X^⊤, where A is a positive semi-definite matrix independent of X, have been studied in the literature in great detail. For example, in <cit.>, the authors considered the matrix 1/nXAX^⊤, where X is an n × p (note that the roles of n and p are reversed in their notation but this is only a cosmetic difference) matrix of i.i.d. entries with zero mean and unit variance and A is a diagonal matrix having some deterministic limiting spectral distribution μ_A. If X and A are independent and p/n→ y, then their result says that the above-mentioned matrix has a deterministic limiting spectral distribution, whose Stieltjes transform s is described as the unique solution, in the upper half plane ℂ^+ = {z ∈ℂ : Im z > 0}, of the equation
s(z) = 1/(-z + y∫ t dμ_A(t)/(1+ts(z))),
for z ∈ℂ^+. The resulting limiting spectral distribution is called a generalised Marčenko-Pastur Law which also admits the following free probabilistic interpretation: it is the free multiplicative convolution of μ_A and the Marčenko-Pastur law. In a more recent work, under certain additional assumptions, <cit.> obtained local laws for the matrices 1/nX A X^⊤ and 1/n A^1/2XX^⊤ A^1/2, where A is a deterministic matrix. The crucial difference of our model from these existing works is that the Laplacian matrix L is dependent on X. As such we need to use careful decoupling arguments to analyse its spectrum. Obtaining local laws under our setting is an interesting direction for future research.
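To make the equation above concrete, here is a small numerical sketch (ours, not from the cited works) that solves the fixed-point equation for the Stieltjes transform by damped iteration, with μ_A approximated by a discrete measure; the damping factor is a practical convenience and may need tuning.

import numpy as np

def gmp_stieltjes(z, y, atoms, weights, n_iter=2000, damping=0.5):
    # Iterate s <- 1 / (-z + y * sum_k w_k t_k / (1 + t_k s)) for z in the upper half plane,
    # where mu_A is approximated by the discrete measure sum_k w_k delta_{t_k}.
    s = -1.0 / z
    for _ in range(n_iter):
        integral = np.sum(weights * atoms / (1.0 + atoms * s))
        s_new = 1.0 / (-z + y * integral)
        s = damping * s + (1.0 - damping) * s_new
    return s

# The density of the LSD near x can then be read off as Im(gmp_stieltjes(x + 1e-3j, ...)) / pi.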
It may also be observed that Σ̂, and its kernelised generalizations, belong to the wider family of matrix-valued U-statistics. As such, our results also contribute to the burgeoning theory of matrix-valued U-statistics and their applications. For instance, the spectrum of a matrix-valued Kendall's τ statistic was studied recently by <cit.>. They showed that if X_1, X_2, …, X_n are i.i.d. p-dimensional random vectors with independent entries from a continuous distribution, then the empirical spectral distribution of the matrix-valued Kendall's τ statistic, defined as
τ = 1/\binom{n}{2}∑_1≤ i <j ≤ n sign(X_i - X_j) sign(X_i - X_j)^⊤,
converges weakly to 1/3 + 2/3Y, in probability, where Y follows the standard Marčenko-Pastur distribution (here the sign function is applied componentwise). The proof heavily relies on a matrix version of the Hoeffding decomposition for U-statistics. Although, the matrix (<ref>) is also a matrix-valued U-statistic, the presence of the cutoff factor makes a direct use of Hoeffding decomposition difficult. Instead, we directly analyse the Stieltjes transform of the empirical spectral distribution.
The rest of the paper is organised as follows. In Section <ref>, we describe the model under consideration and recall preliminaries of random matrices. In Section <ref> we state our main results and work out some examples. We also provide brief proof sketches of our main results in this section. Section <ref> gives detailed proofs of all the results. Finally, in Appendix <ref>, we collect some useful results from matrix analysis and concentration of measure which are used throughout the paper.
§ THE MODEL
Suppose w_ij, i ∈ [p], j ∈ [n] are i.i.d. random variables on some probability space (Ω, ℱ, ℙ). Assume 𝔼 w_11 = 0, Var(w_11) = σ^2 and 𝔼 w^4_11 < ∞. Define the p-dimensional vectors X_j= (w_1j, w_2j,… , w_pj)^⊤, j = 1, 2, …, n. X is the p × n matrix with X_j's as columns. Also define X̅ = 1/n∑_i = 1^n X_i.
We consider two asymptotic regimes:
* The proportional asymptotics regime: p/n→ c ∈ (0, ∞).
* The semi-high-dimensional regime: p/n→ 0.
Suppose that K_p:ℝ^p ×ℝ^p → [0,1] is a function symmetric in its coordinates, that is K_p (U, V) = K_p (V, U). Let A denote the n × n symmetric random matrix with entries A_ij = K_p (X_i, X_j).
The Empirical Spectral Distribution (ESD) of a real symmetric matrix Y_n × n is defined as
μ_Y = 1/n∑_i = 1^nδ_λ_i,
where λ_1, λ_2, …, λ_n are the eigenvalues of Y. The weak limit of the ESD (defined almost surely or in probability depending on the context) is called the Limiting Spectral Distribution (LSD).
In this paper we are interested in the truncated covariance matrix
M = 1/2n^2∑_1 ≤ i, j ≤ n A_ij (X_i - X_j)(X_i - X_j)^⊤,
which is a generalisation of the estimator in (<ref>).
Notice that if K ≡ 1, then
M = 1/2n^2∑_1 ≤ i, j ≤ n (X_i - X_j)(X_i - X_j)^⊤ = 1/n∑_i = 1^n (X_i - X̅)(X_i - X̅)^⊤,
which is the sample-covariance matrix of the observations X_1, …, X_n. As it is a rank-1 perturbation of the matrix 1/n X X^⊤ (which will also be called the sample-covariance matrix), they share the same LSD.
In the proportional asymptotic regime, it is well known that the sample-covariance matrix 1/n X X^⊤ has as its LSD the Marčenko-Pastur distribution _c, σ^2 with parameters (c, σ^2). Recall that when c ∈ (0, 1], _c, σ^2 has density
d_c, σ^2(x) = 1/2 πσ^2√((b - x)(x - a))/cx𝕀_(a, b)dx,
where a = σ^2 (1 - √(c))^2 and b = σ^2(1 + √(c))^2. When c > 1, _c, σ^2 has a mass of (1 - 1/c) at 0, the remaining part has the same density as above, i.e.
_c, σ^2 = (1 - 1/c) δ_0 + ν,
where
dν(x) = 1/2 πσ^2√((b - x)(x - a))/cx𝕀_(a, b)dx.
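For reference, the absolutely continuous part of this law can be evaluated directly; the helper below is our own illustration (not from the paper) and is vectorized over x.

import numpy as np

def mp_density(x, c, sigma2=1.0):
    """Density of the a.c. part of the Marcenko-Pastur law with ratio c and scale sigma2."""
    x = np.asarray(x, dtype=float)
    a = sigma2 * (1.0 - np.sqrt(c))**2
    b = sigma2 * (1.0 + np.sqrt(c))**2
    out = np.zeros_like(x)
    inside = (x > a) & (x < b)
    out[inside] = np.sqrt((b - x[inside]) * (x[inside] - a)) / (2.0 * np.pi * sigma2 * c * x[inside])
    return out   # for c > 1 there is an additional point mass of 1 - 1/c at the origin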
We will see that when the kernel K_p is Lipschitz and the entries w_ij satisfy some regularity conditions, then the LSD of M is a scaled Marčenko-Pastur law (see Theorem <ref>). However, if K_p is non-smooth, then a different LSD emerges, which is a generalised Marčenko-Pastur law (see Theorem <ref>).
On the other hand, in the semi-high-dimensional regime, one requires different scaling and centering. It is well known that the ESD of √(n/p)(1/nXX^⊤ - I) converges to the standard semi-circle law (see <cit.>, p. 864). The semi-circle law _ϖ^2 with variance ϖ^2 > 0 is defined as
d_ϖ^2 (x) = 1/2 πϖ^2√(4ϖ^2 - x^2) (|x| ≤ 2ϖ) dx.
For ϖ = 1, we have the standard semi-circle law.
In our setup, we prove that under certain conditions on the moments of A_12, in the regime
p ≫√(n),
the ESD of
E = √(n/p)(M - α_pσ^2 I)
converges weakly to a semi-circle law with parameters depending on 𝔼 A_12^2 and σ^2, almost surely. We also describe the Stieltjes transform of the limiting distribution (see Theorem <ref>).
§ MAIN RESULTS
§.§ The non-smooth case
Let d : ℝ^2 →ℝ be a symmetric function such that
𝔼|d(w_11,w_12)|^3< ∞.
Typical examples of d(x,y) are (x-y)^2 or |x-y|. We define d_p:ℝ^p×ℝ^p→ℝ, d_p(x,y)=∑_i=1^p d(x_i,y_i). Let ϕ_p:ℝ→ [0,1] be monotonic and potentially dependent on p. For the purpose of the first theorem we shall assume that K_p has the following form
K_p(x,y)=ϕ_p(d_p(x,y)).
Notice that this class of kernels includes the indicator kernel (x-y≤ r_p) and the Gaussian kernel 1- exp(-x-y^2/2τ^2_p), where r_p, τ_p are suitable constants.
We shall also require some limiting properties of the sequence of functions (ϕ_p). We state them now. First fix the following notations:
m_1 = 𝔼[d(X_11,X_12)];
m_2 =Var(d(X_11,X_12));
m_2^(1) = Var(𝔼[d(X_11,X_12)|X_11]);
m_2^(2) = 𝔼Var(d(X_11,X_12)|X_11).
Define the functions ψ_p,ϕ̃_p: ℝ→ℝ as ψ_p(x)=pm_1+√(pm_2)x and ϕ̃_p=ϕ_p∘ψ_p.
Suppose there exists ϕ̃ such that for any ϵ>0,
Leb(|ϕ̃_p-ϕ̃|>ϵ)→ 0
as p →∞, where Leb denotes the Lebesgue measure. In other words, ϕ̃_p converges in Lebesgue measure to ϕ̃.
Our first theorem describes the LSD of M in terms of its Stieltjes transform. Throughout the paper z will denote a complex number with u = Re z and v = Im z, i.e. z = u + ι v. Recall that the Stieltjes transform S_μ of a probability measure μ on ℝ is a complex function defined for z∈ℂ^+ as follows:
S_μ(z) := ∫dμ(x)/x-z.
Suppose {μ_n}_n ≥ 1, μ are probability measures on ℝ with Stieltjes transforms {S_μ_n}_n ≥ 1 and S_μ, respectively. It is well known that S_μ_n→ S_μ pointwise on ℂ^+ if and only if μ_n →μ weakly (see, e.g., <cit.>). Moreover, if {μ_n}_n ≥ 1 are random probability measures and μ is a deterministic probability measure, then S_μ_n (z) → S_μ (z) almost surely for each fixed z ∈ℂ^+ if and only if μ_n →μ almost surely.
Suppose that Assumption <ref> holds. Then the ESD of M converges weakly to a deterministic distribution, almost surely. Moreover, if s(z) is the Stieltjes transform of the limiting distribution, then s(z) is the unique solution in ℂ^+ of the following equation:
1 + zs(z) = 𝔼_ζ[σ^2 s(z) ζ/1+cσ^2 s(z) ζ],
where
ζ = _Z_2[ϕ̃(√(m_2^(1)/m_2)Z_1 + √(m_2^(2)/m_2)Z_2) ],
with Z_1, Z_2 being i.i.d. N(0,1) random variables.
Suppose ξ_1, ξ_2, …, ξ_n are i.i.d. bounded random variables such that ξ_i and X_j are independent for all i ≠ j. Further assume that ξ_1 converges in distribution to some variable ζ. Then, the proof of Theorem <ref> will show that the LSD of 1/n∑_i=1^n ξ_i X_i X_i^⊤ is given by (<ref>). This result is known if we further assume that ξ_i is independent of X_i, but here we allow them to depend.
If M is represented as a matrix-valued Rayleigh quotient 1/n^2 X L X^⊤, our proof will show that the ESD of 1/nL will converge weakly to ζ as defined in (<ref>). Now, a moment's thought will reveal that the equations (<ref>) and (<ref>) are equivalent once we make the necessary adjustments for the scaling. In other words, even though X and L are dependent and L is not diagonal, a generalised Marčenko-Pastur law emerges as the LSD.
[Indicator kernel]
We first consider kernel
K_p(x, y) = 𝕀(‖x - y‖≤ r_p),
where r_p is an appropriate threshold. This gives us the estimator (<ref>) that motivated the present study. In this case, we will assume that w_11 is Gaussian. At the least, we shall need that α_p K_p(X_1, X_2) converges to a nonzero quantity as p →∞. The suitable choice for r_p turns out to be
r_p^2= ((2p+2√(2p)z_α) σ^2+o(√(p))).
For α∈(0,1), z_α=Φ^-1(α), where Φ is the distribution function of the standard normal variable. Observe that X_1-X_2^2/2σ^2∼χ^2_p. Using the central limit theorem, U=X_1-X_2^2/2σ^2-p/√(2p) converges, in distribution, to a standard Gaussian. Notice that
α_p = ℙ(X_1-X_2≤ r_p)
=ℙ(X_1-X_2^2/2σ^2-p/√(2p)≤r_p^2/2σ^2-p/√(2p))
= ℙ(U≤ z_α+ o(1))
= α + o(1),
as p→∞. Now, in this case, ϕ_p (t) = I(t ≤ r_p^2) and d(x, y) = (x-y)^2. An easy computation shows that
m_1 = 2 σ^2, m_2 = 8 σ^4,
m_2^(1)= 2 σ^4, m_2^(2) = 6 σ^4.
Then ϕ̃_p and ϕ̃ turn out to be as follows:
ϕ̃_p(t) = I( t ≤ z_α +o(1)),
ϕ̃ (t) = I(t ≤ z_α).
Hence the distribution of ζ may be described as
ζ = 1/√(2 π)∫_ℝϕ̃(1/2Z+ √(3)/2t)e^-t^2/2dt
= Φ(-1/√(3)Z +2/√(3) z_α)
d=Φ(1/√(3)Z +2/√(3) z_α),
where Z ∼ N(0,1).
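Since 𝔼ζ should equal α here (because 𝔼ζ = lim_p ℙ(‖X_1-X_2‖≤ r_p) = α), the closed form above admits a quick Monte Carlo sanity check; this is our own check and not part of the paper.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha = 0.3
z_alpha = norm.ppf(alpha)
Z = rng.standard_normal(10**6)
zeta = norm.cdf(Z / np.sqrt(3) + 2 * z_alpha / np.sqrt(3))
# Analytically, E[Phi(aZ + b)] = Phi(b / sqrt(1 + a^2)), which here equals Phi(z_alpha) = alpha.
print(zeta.mean())   # approximately 0.3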
In the next example, however, Theorem <ref> can not be applied.
[Gaussian kernel]
Let us now consider the Gaussian kernel
K_p(x,y) = 1- e^-x-y^2/2pτ^2,
where we have taken τ_p^2 =2 p τ^2. Here, ϕ_p (t) = 1 - e^-t/2pτ^2 and d(x,y) = (x-y)^2. Let us also assume that w_11 is Gaussian. One can calculate that
ϕ̃_p(t) = 1-e^-σ^2/τ^2 e^-√(2)σ^2 t/√(p)τ^2,
which does not satisfy Assumption <ref>.
In order to examine the Gaussian kernel, one needs to use the smoothness properties of the Gaussian kernel which was missing in our analysis. This leads us to the next theorem where the smoothness of the kernel is crucially used.
§.§ The smooth case
We say that K_p is Lipschitz with Lipschitz constant κ_p (or κ_p-Lipschitz in short) if
|K_p(x_1, y_1) - K_p(x_2, y_2)| ≤κ_p(x_1-x_2+y_1-y_2).
In particular, if for p ∈ℕ, ϕ_p : ℝ→ [0,1] is a κ_p-Lipschitz function and one takes K_p(x,y) = ϕ_p(x-y), then one can check that K_p is also an κ_p-Lipschitz kernel. Following <cit.>, we will make the following assumption on w_11.
[LC class property]
Let ω > 0. We require w_11 to satisfy either of the following three conditions:
(a) w_11d=φ(Z) for some Lipschitz function φ with φ_Lip≤ω, where Z is a standard normal variable.
(b) w_11 has density uniformly bounded below by 1/ω.
(c) X_1 is strongly log-concave with curvature ≥ 1/ω^2.
Define α_p = 𝔼 K_p(X_1, X_2).
Suppose that w_11 satisfies Assumption <ref> with parameter ω_p. Let K_p be a κ_p-Lipschitz kernel, such that κ_pω_p =o(1/√(log(n))) and α_p →α. Then, the ESD of M converges weakly to _c, α^2σ^2, almost surely.
Let us revisit Example <ref> in light of Theorem <ref>. Take w_11∼ N(0, σ^2). In this case, one may take κ_p = √(2)/e√(p)τ, α_ p = 1- (1 + 2σ^2/pτ^2)^-p/2 and α = 1 - e^-σ^2 / τ^2. Thus, by Theorem <ref>, μ_M _c, σ^2 (1 - e^-σ^2/τ^2)^2, almost surely.
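As a quick empirical illustration of this remark (our own, not from the paper), one can simulate M for the Gaussian kernel and compare the eigenvalue histogram with the Marčenko-Pastur law of parameters (c, α^2σ^2); the constants below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n, p, sigma, tau = 2000, 1000, 1.0, 1.0
X = sigma * rng.standard_normal((p, n))                 # columns are the X_i
G = X.T @ X
sq = np.diag(G)
D2 = sq[:, None] + sq[None, :] - 2.0 * G                # pairwise squared distances
A = 1.0 - np.exp(-D2 / (2.0 * p * tau**2))
np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A                          # Laplacian of the kernel matrix
M = X @ L @ X.T / n**2
eigs = np.linalg.eigvalsh(M)
alpha, c = 1.0 - np.exp(-sigma**2 / tau**2), p / n
print(eigs.min(), eigs.max())   # compare with alpha^2 * sigma^2 * (1 -/+ sqrt(c))^2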
§.§ The semi-high-dimensional regime
In Theorem <ref> if we take c to be 0, then the Stieltjes transform of the LSD turns out to be 1/(ασ^2 - z), which corresponds to the measure δ_ασ^2. This shows that E is the right matrix to look at.
Define β_p^2 := 𝔼 A_12^2. Indeed, we show that
If α_p →α, β_p^2 →β^2 and 𝔼 w_11^8 < ∞, in the regime p ≫√(n), the ESD of E converges weakly to _βσ^2, almost surely.
Note that without additional assumptions on |α_p - α|, we can not determine the convergence of
E' = √(n/p)(M - ασ^2 I),
hence the appearance of α_p in (<ref>) instead of α. For instance, if |α_p - α| = O(1/√(p)), then we can replace E by E' in Theorem <ref>.
In the set up of Example <ref>, β_p^2=α_p →α. Thus, the LSD of E in this case is _√(α)σ^2. As for Example <ref>, β_p^2 turns out to be 1 - 2(1+2 σ^2/p τ^2)^-p/2 + (1+4σ^2/pτ^2)^-p/2 and the parameter of the limiting semi-circle law changes accordingly.
In the remainder of this section, we briefly sketch the proofs of Theorems <ref>, <ref> and <ref>.
§.§ Proof sketches
In this section, we give brief sketches of the proofs.
§.§.§ The non-smooth case
We first observe that the matrix M can be written in the form 1/n^2XLX^⊤ where L is Laplacian matrix of a random geometric graph. Since the entries of L are bounded, it turns out to be sufficient to consider only 1/n^2 X D X^⊤, where D = (L). Note that D_ii = ∑_j≠ i K_p(X_i, X_j). This has rather high dependence on X_i and low dependence on the X_j's, for j ≠ i. Observe that conditional on X_i, { K_p(X_i, X_j) - [K_p(X_i, X_j) | X_i]}_j≠ i are bounded, centered i.i.d. random variables. Using Hoeffding's inequality, we show that the sum ∑_j≠ i (K_p(X_i, X_j)- [K_p(X_i, X_j) | X_i]) is negligible. It thus boils down to finding the LSD of the matrix
M̅ = ∑_i=1^n ξ_i X_i X_i^⊤,
where the ξ_i's are i.i.d and independent of X_j, j≠ i. One can still not use the existing results in the literature since ξ_i is not independent of X_i. However, ξ_i involves a sum of certain i.i.d. variables. This enables us to apply a Berry-Esseen bound, eventually providing us with the distributional convergence of ξ_i. We now look at the Stieltjes transform S_M̅ of M̅. Using standard martingale techniques, we deduce that S_M̅(z) - [S_M̅(z)] → 0 almost surely for each fixed z ∈^+. With this our goal becomes to get a recursive formula for [S_M̅]. This is done by using the Sherman-Morrison formula and various perturbation inequalities for matrices coupled with the fact that ξ_i converges in distribution as p →∞.
§.§.§ The smooth case
As in the non-smooth case, we start by writing M in the Rayleigh quotient form 1/n^2 X L X^⊤. Our goal is to show that M has the same LSD as 1/n^2 X ℒ X^⊤ where ℒ = L = α_n (nI-J), where J is the n× n matrix with all entries equal to 1. This is done by using matrix perturbation inequalities and Theorem 1 from <cit.>. Since J is of rank 1, we have a further simplification: we may just consider the matrix α_n/nX X^⊤, which clearly has _c, α^2 σ^2 as its LSD.
§.§.§ The p/n → 0 case.
As in the proof of Theorem <ref>, we can show that the LSD of E is the same as that of
E̅_1 = ∑_i=1^n ξ_i (X_i X_i^⊤ - σ^2 I).
We analyse this matrix via its Stieltjes transform and various matrix perturbation inequalities.
§ PROOFS
§.§ Proof of Theorem <ref>
Since zeroing out the diagonal entries of A does not change the Laplacian L = D - A, we may redefine A (with a slight abuse of notation) as follows:
A = (((1 - δ_ij)A_ij)),
D = (A ),
L = D - A,
where is the vector with each entry equal to 1. Notice that L is the Laplacian matrix corresponding to the weighted adjacency matrix A. The basic observation that will help us find the LSD of M is the following matrix Rayleigh quotient representation:
M = 1/n^2 X L X^⊤.
The idea is to divide M into three parts such that one part determines the LSD and the rest are negligible. In order to do so, we further decompose D into two diagonal matrices. Define ξ_i = [K_p(X_i,V) | X_i], where V∼ X_1 is independent of X_1, X_2,… X_n. Notice that
D_ii = ∑_j≠ i K_p(X_i,X_j)
= (n-1)ξ_i+∑_j≠ i(K_p(X_i,X_j)- [K_p(X_i,X_j)|X_i]).
Define the following quantities:
ξ = (ξ_1,ξ_2, …, ξ_n)^⊤,
ξ'_ij = K_p(X_i,X_j)- [K_p(X_i,X_j)|X_i],
ξ'_i = ∑_j≠ iξ'_ij,
ξ' = (ξ'_1, ξ'_2,…,ξ'_n)^⊤,
D_1 = (n-1)diag(ξ),
D_2 = diag(ξ').
With these notations set, one has D = D_1 + D_2. Now decompose M as
M=1/n^2 X D_1 X^⊤ + 1/n^2 X D_2 X^⊤ - 1/n^2 X A X^⊤.
Define M̃ = 1/n^2 X D_1 X^⊤. We will show that M has the same LSD as M̃. Towards that end, let d_W_2 denotes the 2-Wasserstein distance between probability measures μ_1 and μ_2 possessing finite second moments:
d_W_2(μ_1,μ_2) := inf√(𝔼 (Z_1 - Z_2)^2),
where the infimum is taken over all possible couplings of (Z_1, Z_2) with marginals Z_1 ∼μ_1 and Z_2 ∼μ_2.
We have d_W_2(μ_M, μ_M̃) 0.
Using the Hoffman-Wielandt inequality (see Lemma <ref>) and the facts that
G H_≤min{G_H_, G_H_},
and
X^⊤_ = X_ = √(X^⊤ X_),
we have the following estimate
d_W_2(μ_M, μ_M̃) ≤1/√(p)M - M̃_
= 1/n^2 √(p)X(D_2-A) X^⊤_
≤1/n^2 √(p)X D_2 X^⊤_ +1/n^2 √(p)X A X^⊤_
≤1/n^2X D_2 X^⊤_+ 1/n^2 √(p)X^⊤ X_A_
≤1/n X^⊤ X_ ( 1/nD_2_+ 1/n √(p)A_ ).
Since the fourth moment of the entries is finite, using Theorem 3.1 of <cit.>,
1/nX^⊤ X_ (1+√(c))^2σ^2.
Let us now consider A_ and D_2_. Since each entry of A is bounded by 1, A_≤ n. Notice that D_2_ = max_i |ξ'_i|. Fix i∈[n]. Conditional on X_i, {ξ'_ij}_j≠ i are i.i.d. random variables. Moreover, |ξ'_i|≤ 1. Therefore by Hoeffding's inequality, for any t > 0,
ℙ(|ξ'_i|>t|X_i)≤ e^-t^2/2(n-1).
Since the right hand side does not depend on X_i, the above bound also holds unconditionally. Now, the exchangeability of the ξ'_i's yields
ℙ(1/nD_2_>√(6log n/n)) ≤ nℙ(|ξ_1'|>√(6nlog n))
≤ ne^-6nlog n /2(n-1)
≤1/n^2.
It follows by the Borel-Cantelli Lemma that 1/nD_2_ 0. We conclude that
d_W_2(μ_M, μ_M̃) 0.
This completes the proof.
In fact, a further simplification is possible.
Suppose that M̅ := 1/n∑_i=1^n ξ_i X_i X_i^⊤. Then d_W_2(μ_M̃, μ_M̅) 0.
Since M̃= n-1/n^2∑_i=1^n ξ_i X_i X_i^⊤, we have
d_W_2(μ_M̃ , μ_M̅) ≤M̃ - M̅_≤1/n^2X X^⊤_ 0,
where for the second inequality we have used the fact that |ξ_i| ≤ 1 so that the matrix
XX^⊤ - ∑_i = 1^n ξ_i X_i X_i^⊤ = ∑_i = 1^n (1 - ξ_i) X_i X_i^⊤
is positive semi-definite.
By virtue of Lemmas <ref> and <ref>, it is enough to find the LSD of M̅. Let
S_n(z)1/p (M̅-zI)^-1
be the Stieltjes transform of M̅. We shall now show that S_n(z) converges almost surely as n→∞, for each fixed z ∈ℂ^+. The following lemma shows that it suffices to find the limit of S_n(z).
For each fixed z ∈ℂ^+,
S_n(z) - S_n(z) 0.
The proof uses the well-known martingale technique in random matrix theory (see, e.g., the proof of Theorem 3.10 of <cit.>). Define ℱ_0 = {ϕ, Ω} and ℱ_k = σ(X_1, X_2, …, X_k) for k ∈ [n]. By _k, k= 0, 1, …, n, we denote the conditional expectation operator given ℱ_k. Then
S_n(z) - S_n(z) = ∑_k=0^n (_k [1/p(M̅ - zI)^-1] - _k-1[1/p(M̅ - zI)^-1]).
Call the k-th summand above γ_k. Then {(γ_k, ℱ_k)}_k=1^n is a martingale difference sequence. Suppose
M̅_k = M̅ - 1/nξ_k X_k X_k^⊤.
Notice that
_k (M̅_k - zI)^-1 = _k-1(M̅_k - zI)^-1.
By Lemma <ref>(b),
|(M̅ - zI)^-1 - (M̅_k - zI)^-1| ≤1/v.
Define S_nk=1/p(M̅_k - zI)^-1. Therefore
|γ_k| = |_k [S_n(z) - S_nk(z)] -_k-1 [S_n(z) - S_nk(z)]| ≤ 2/v.
Thus γ_k is a bounded martingale difference sequence. By Lemma <ref>,
|S_n(z) - S_n(z)|^4 ≤K_4/p^4( ∑_k=1^n |γ_k|^2 )^2 ≤4 K_4 n^2/v^4 p^4 = O(n^-2).
Now an application of the Borel-Cantelli lemma gives us (<ref>).
Suppose V, W are i.i.d. copies of X_1. Define
ζ_p= 𝔼[ϕ_p(d(V,W))|V].
Then ζ_p ζ, where
ζ = _Z_2[ϕ̃(√(m_2^(1)/m_2)Z_1 + √(m_2^(2)/m_2)Z_2) ],
where Z_1, Z_2 are i.i.d. N(0,1) random variables.
Write V=(V_1,V_2,…,V_p)^⊤ and W=(W_1,W_2,…,W_p)^⊤. Let us fix the following notations:
m_1'(v) = 𝔼[d(V_1,W_1)|V_1=v],
m_2'(v) = Var(d(V_1,W_1)|V_1=v),
m_3'(v) = 𝔼[|d(V_1,W_1)-m_1'(V_1)|^3|V_1=v].
Define
T=∑_i=1^p d(V_i,W_i)-∑_i=1^p m_1'(V_i)/√(∑_i=1^p m_2'(V_i)).
Applying the Berry-Esseen theorem (see, e.g., <cit.>) on {d(V_i,W_i)}_i=1^p, conditional on V,
sup_x|ℙ(T≤ x|V)-Φ(x)|≤ C_1 Q/√(p),
where C_1 is an absolute constant and
Q = (1/p∑ _i=1^p m_2'(V_i))^-3/2(1/p∑_i=1^p m_3'(V_i))^1/2.
By the SLLN,
Q m_2^(2)^-3/2 (m_3)^1/2.
For notational convenience, let us define
a_p(V) =∑_i=1^p m_1'(V_i),
b_p(V) = √(∑_i=1^p m_2'(V_i)).
Suppose Z∼ N(0,1) is independent of V and W. Without loss of generality assume that ϕ_p is increasing. Then,
|𝔼[ϕ_p (d(V,W))|V] - 𝔼[ϕ_p(a_p(V)+b_p(V)Z)|V]|
= |∫_0^1ℙ(ϕ_p(a_p(V)+b_p(V)T)>t|V) dt -∫_0^1ℙ(ϕ_p(a_p(V)+b_p(V)Z)>t|V) dt|
≤∫_0^1 |ℙ(T>ϕ_p^-1(t)-a_p(V)/b_p(V)|V)-ℙ(Z>ϕ_p^-1(t)-a_p(V)/b_p(V)|V)| dt
≤ C_1 Q/√(p)→ 0
almost surely. Notice that
𝔼[ϕ_p(a_p(V)+b_p(V)Z)|V]=1/√(2 π)∫_ℝϕ_p(a_p(V)+b_p(V)t)e^-t^2/2dt.
From the relation between ϕ and ϕ̃, we have
ϕ_p(a(V)+b(V)t) =ϕ̃_p∘ψ_p^-1(a(V)+b(V)t)
= ϕ̃_p (a(V)+b(V)t-pm_1/√(pm_2))
= ϕ̃_p( a_p'(V)+b_p'(V)t),
where
a_p'(V) = a(V)-pm_1/√(pm_2),
b_p'(V) = b(V)/√(pm_2).
Using the central limit theorem on {m_1'(V_i)}_i=1^p, as p→∞, we have
a_p(V)-pm_1/√(pm_2^(1)) N(0,1),
and consequently,
a_p'(V) N(0,m_2^(1)/m_2).
On the other hand, an application of the SLLN shows that
b_p'(V) √(m_2^(2)/m_2).
We are interested in the weak limit of
1/√(2 π)∫_ℝϕ̃_p(a_p'(V)+b_p'(V)t)e^-t^2/2dt
as p →∞. Suppose (by an abuse of notation) that a_p', b_p' are sequences of real numbers. Fix ϵ>0 and define the event
G = {|ϕ̃_p(a_p'+b_p't)dt - ϕ̃(a_p'+b_p't)|≤ϵ}.
Then
∫_ℝ|ϕ̃_p (a_p'+b_p't)dt - ϕ̃(a_p'+b_p't)|e^-t^2/2dt
≤∫_G|ϕ̃_p(a_p'+b_p't)dt - ϕ̃(a_p'+b_p't)|e^-t^2/2dt+∫_G^c|ϕ̃_p(a_p'+b_p't)dt - ϕ̃(a_p'+b_p't)|e^-t^2/2dt.
The first integral above is bounded by √(2π)ϵ and the second one by (|ϕ̃_p-ϕ̃|>ϵ). Taking a supremum over a_p', b_p', we get
sup_a_p', b_p'∫_ℝ|ϕ̃_p(a_p'+b_p't)dt - ϕ̃(a_p'+b_p't)|e^-t^2/2dt ≤√(2π)ϵ + (|ϕ̃_p-ϕ̃|>ϵ).
Now, sending p→∞ and noticing that ϵ is arbitrary, we obtain
sup_a_p', b_p'∫_ℝ|ϕ̃_p(a_p'+b_p't)dt - ϕ̃(a_p'+b_p't)|e^-t^2/2dt → 0.
as p→∞. It is enough to consider the weak limit of
1/√(2 π)∫_ℝϕ̃_p(a_p'(V)+b_p'(V)t)e^-t^2/2dt
as p→∞.
Next we prove that if a_p'→ a' and b_p' → b' (both deterministic sequences) as p →∞, then
∫_ℝϕ̃(a_p'+b_p't)e^-t^2/2dt →∫_ℝϕ̃(a'+b't)e^-t^2/2dt,
in other words, the map (a',b')↦∫_ℝϕ̃(a'+b't)e^-t^2/2 is continuous. Since ϕ̃ is monotonic, it is continuous almost everywhere. Thus, ϕ̃(a_p+b_pt)→ϕ̃(a+bt) for almost every t. An application of the bounded convergence theorem then proves our claim. Combining this with the convergence of a'(V) and b'(V), we get
1/√(2π)∫_ℝϕ̃(a_p'(V)+b_p(V)'t)e^-t^2/2dt d⟶1/√(2 π)∫_ℝϕ̃(√(m_2^(1)/m_2)Z+ √(m_2^(2)/m_2)t)e^-t^2/2dt,
where Z∼ N(0,1). This completes the proof.
𝔼 S_n(z) satisfies the following approximate recursion:
1 + z𝔼S_n(z) = 𝔼_ζ[σ^2ζ𝔼S_n(z)/1+c_nσ^2ζ𝔼S_n(z)] + o(1).
Suppose ξ_0∼ξ_1, X_0∼ X_1 are mutually independent and they are independent of the {X_i}_i=1^n. Define
M̅' =M̅+1/nξ_0X_0X_0^⊤,
S_n'(z) = 1/p (M̅'-zI)^-1.
Then
p = (M̅'-zI) (M̅'-zI)^-1
=1/n∑_i=0^nξ_i X_i^⊤ (M̅'-zI)^-1 X_i- z (M̅'-zI)^-1.
Taking expectation on both sides and using the exchangeability of the summands, we have
p =(1+1/n) 𝔼(ξ_0 X_0^⊤ (M̅'-zI)^-1X_0) - pz𝔼S_n'(z).
Dividing by n and noticing that |ξ_0 X_0^⊤ (M̅'-zI)^-1X_0|≤ 1+|z|/v, we get
c_n+ c_nz𝔼S'_n(z)=1/n𝔼(ξ_0 X_0^⊤ (M̅'-zI)^-1X_0)+o(1).
Observe that
|S'_n(z)-S_n(z)|≤v/p.
Therefore
c_n+c_nz𝔼S_n(z)=1/n𝔼(ξ_0X_0^⊤ (M̅'-zI)^-1X_0) +o(1).
An application of the Sherman-Morrison formula on the first term yields
1/nξ_0X_0^⊤ (M̅'-zI)^-1X_0= 1/nξ_0X_0^⊤ (M̅-zI)^-1X_0/1+1/nξ_0X_0^⊤ (M̅-zI)^-1X_0.
Then
|1/nξ_0 X_0^⊤ (M̅'-zI)^-1X_0-c_nσ^2ξ_0𝔼S_n(z)/1+c_nσ^2ξ_0𝔼S_n(z)|
=|1/nξ_0X_0^⊤ (M̅-zI)^-1X_0/1+1/nξ_0X_0^⊤ (M̅-zI)^-1X_0-1/nξ_0𝔼(X_0^⊤ (M̅-zI)^-1X_0)/1+1/nξ_0𝔼(X_0^⊤ (M̅-zI)^-1X_0)|
≤1/nξ_0|X_0^⊤ (M̅-zI)^-1X_0-𝔼(X_0^⊤ (M̅-zI)^-1X_0)|/|1+1/nξ_0X_0^⊤ (M̅-zI)^-1X_0||1+1/nξ_0𝔼(X_0^⊤ (M̅-zI)^-1X_0)|
≤|z|^2/n|X_0^⊤ (M̅-zI)^-1X_0-𝔼(X_0^⊤ (M̅-zI)^-1X_0)|/|z+z1/nξ_0X_0^⊤ (M̅-zI)^-1X_0||z+z1/nξ_0𝔼(X_0^⊤ (M̅-zI)^-1X_0)|
≤|z|^2/v^21/n|X_0^⊤ (M̅-zI)^-1X_0-𝔼(X_0^⊤ (M̅-zI)^-1X_0)|,
where we have used Lemma <ref>(h) to lower bound the denominator. Since X_0 is independent of (M̅ -zI)^-1, by Lemma <ref>,
|X_0^⊤ (M̅-zI)^-1X_0-𝔼(X_0^⊤ (M̅-zI)^-1X_0)| ≤ C√((M̅ -zI)^-1)≤C √(p)/√(v),
for some constant C > 0, which only depends on the second and fourth moments of w_11.
Thus
𝔼1/nξ_0X_0^⊤ (M̅'-zI)^-1X_0=𝔼c_nσ^2ξ_0𝔼S_n(z)/1+c_nσ^2ξ_0𝔼S_n(z)+o(1).
Also,
|𝔼c_nσ^2ξ_0𝔼S_n(z)/1+c_nσ^2ξ_0𝔼S_n(z) -𝔼c_nσ^2ζ𝔼S_n(z)/1+c_nσ^2ζ𝔼S_n(z)|
≤σ^2 c_n|z|^2|𝔼S_n(z)| 𝔼|ξ_0-ζ|/|z+zc_nσ^2ξ_0𝔼S_n(z)||z+zc_nσ^2ζ_0𝔼S_n(z)|
≤σ^2 c_n |z|^2/v^21/v𝔼|ξ_0-ζ|.
Lemma <ref> coupled with the Skorohod representation theorem gives us that 𝔼|ξ_0-ζ|→ 0 as p→∞. Combining everything, we get the desired approximate functional for S_n(z):
c_n+c_nz𝔼S_n(z)=𝔼_ζ[c_nσ^2ζ𝔼S_n(z)/1+c_nσ^2ζ𝔼S_n(z)] + o(1).
This completes the proof.
The equation
1+zs(z)=𝔼_ζ[σ^2ζ s(z)/1+cσ^2ζ s(z)],
z∈ℂ^+, has a unique solution for s(z) in ℂ^+, where ζ is defined in (<ref>).
Suppose (<ref>) has two distinct solutions s_1, s_2 in ℂ^+. Fix some z∈ℂ^2 such that s_1(z) ≠ s_2(z). Note for j=1, 2,
s_j = 1/-z + σ^2 ζ/1 + c σ^2 s_j ζ.
Notice
Im(s_j) = v - Im(σ^2 ζ/1 +c σ^2 s_j ζ)/|-z + σ^2 ζ/1 + c σ^2 s_j ζ|^2 < [ c σ^4 Im(s_j) ζ^2/|1 + c σ^2 s_j ζ|^2]/|-z + σ^2 ζ/1 + c σ^2 s_j ζ|^2.
So,
[ c σ^4 ζ^2/|1 + c σ^2 s_j ζ|^2]/|-z + σ^2 ζ/1 + c σ^2 s_j ζ|^2 > 1.
Also,
s_1 - s_2 = c σ^4 (s_1 - s_2) ζ^2/(1 + c σ^2 s_1 ζ)(1 + c σ^2 s_2 ζ)/(-z + σ^2 ζ/1 + c σ^2 s_1 ζ)(-z + σ^2 ζ/1 + c σ^2 s_2 ζ).
Cancelling s_1 - s_2 from both sides and applying the Cauchy-Schwarz inequality on the right hand side we get
1 ≤[ c σ^4 ζ^2/|1 + c σ^2 s_1 ζ|^2]/|-z + σ^2 ζ/1 + c σ^2 s_1 ζ|^2[ c σ^4 ζ^2/|1 + c σ^2 s_2 ζ|^2]/|-z + σ^2 ζ/1 + c σ^2 s_2 ζ|^2,
which contradicts (<ref>).
Now we are ready to prove Theorem <ref>.
Applying Lemmas <ref> and <ref>, it is enough to consider M̅ instead of M. Since |𝔼S_n(z)|≤1/v, using the Bolzano-Weierstrass theorem, 𝔼S_n(z) has a convergent subsequence. Consider any such subsequence |𝔼S_n_k(z)|→ s(z). Then, by Lemma <ref>, s(z) will satisfy the limiting equation
1+zs(z)=𝔼σ^2ζ s(z)/1+cσ^2ζ s(z).
Lemma <ref> shows that this equation has a unique solution for s(z) in ℂ^+.
This shows that all the subsequential limits of 𝔼S_n(z) are same proving that 𝔼S_n(s)→ s(z) as n→∞ where s(z) is the unique solution in ℂ^+ of the aforementioned equation. We can thus conclude that, almost surely, μ_M̅ converges weakly to some sub-probability measure μ with Stieltjes transform s. In order to show that μ is indeed a probability measure, by Prohorov's theorem it is sufficient to show that ∫ x^2 𝔼μ_M̅(dx) is uniformly bounded. By the trace-moment formula, the last quantity is the same as 1/p𝔼(M̅^2) which we compute next.
1/p𝔼(M̅^2) = 1/pn^2( ∑_i=1^n ξ_i^2 X_iX_i^⊤ X_i X_i^⊤ +∑_i ≠ jξ_i ξ_j X_i X_i^⊤ X_j X_j ^⊤)
= 1/p n^2 (nX_1^4 + n(n-1) (X_1^⊤ X_2)^2)
= O(1),
where the last step follows from the fact that X_1^4 = O(p^2) and (X_1^⊤ X^2)^2 = (X_1^⊤ X_2) = O(p). This completes the proof of Theorem <ref>.
§.§ Proof of Theorem <ref>
We again begin with the matrix Rayleigh quotient representation:
M = 1/n^2 X^⊤ L X^⊤.
Observe that 𝔼[A] = α_n, p (J - I) and 𝔼[D]= (n - 1) α_n, p I, where J = 11^⊤. Define
= [L] = [D] - [A] = α_n, p ((n - 1) I - (J - I)) = α_n, p (nI - J).
Further, let
F = 1/n (A - α_n, p (J - I)),
G = 1/n (D - (n - 1)α_n, p I) = (F ).
With the above notation, 1/nL = 1/n - F + G. Thus
M = 1/n^2 X L X^⊤ = 1/n X (1/n + F - G) X^⊤
= 1/n^2 X X^⊤ + 1/n X (F - G) X^⊤
= M̃ + 1/n X (F - G) X^⊤,
where M̃= 1/n^2 X X^⊤. We will show that the M and M̃ have the same LSD. Indeed,
d_W_2(μ_M, μ_M̃) ≤1/√(n)M - M̃_
= 1/n √(n)X (F - G) X^⊤_
≤1/n √(n)X_F - G_F X^⊤_
= 1/nX X^⊤_1/√(n)F - G_
≤1/nX X^⊤_1/√(n)(F_ + G_)
≤1/nX X^⊤_ (F_ + 1/√(n)F1)
≤2/nX X^⊤_F_.
Under the finite fourth moment condition, recall that 1/nX X ^⊤_→σ^2(1 + √(c))^2 almost surely. We now show that F_ 0. Because of Assumption <ref>, we can apply Theorem 1 of <cit.> on A to get that
(F_≥ 2 L ωσ(C + t/√(n))) ≤exp(-t^2 / C^2)
for some C > 0. We can simplify this as follows:
(F_≥ 2 L ωσ t) ≤exp(- C'^2 t^2 ).
for some C'>0. Now choosing t_n= 1/C'√(log n), we note that
(F_≥2 L ωσ√(log n)/C') ≤1/n^2.
Combining this with the fact that L ω = o(1/√(log n)), we conclude that F_→ 0 almost surely. This implies that d_W_2(μ_M, μ_M̃)→ 0 almost surely, that is M and M̃ have the weak limit almost surely if it exists. Notice that ℒ = α_pn I - α_p J. Since, J is of rank 1, by Lemma <ref>,
d_KS(μ_M̃,μ_M̂) ≤1/n
where d_KS denotes the Kolmogorov-Smirnov distance and M̂ = α_p/n XX^⊤. Since
d_W_2(μ_M̂, μ_α/nX X^⊤) ≤|α_p-α|/nX X^⊤_ 0,
we can only consider α_p/nX X^⊤. It is well known that the last matrix has _c, α^2σ^2 as its LSD. Combining everything, we conclude that
μ_M _c, α^2σ^2,
almost surely. This completes the proof.
§.§ Proof of Theorem <ref>
We define A, D, L as in the proof of Theorem <ref> and
Ẽ√(n/p)(n-1/n^2∑_i=1^n ξ_i X_i X_i^⊤ - α_p σ^2 I), E̅ √(n/p)(1/n∑_i=1^n ξ_i X_i X_i^⊤ - α_p σ^2 I).
One can prove analogues of Lemmas <ref> and <ref> in this set up so that d_W_2(μ_E, μ_Ẽ) 0 and d_W_2(μ_Ẽ , μ_E̅) 0, respectively (the analogue of the Bai-Yin result in this regime can be found in <cit.>). For the purpose of finding the LSD, it is thus enough to consider E̅. Now, E̅ can be further decomposed as E̅_1 + E̅_2, where
E̅_1 = 1/√(np)∑_i=1^n ξ_i (X_i X_i^⊤ - σ^2 I), and E̅_2 = σ^2/√(np)∑_i=1^n (ξ_i - α_p) I.
Since the (ξ_i - α_p)'s are centered, independent and bounded, invoking Hoeffding's inequality, E̅_2_≤σ^2/√(p) w.p. ≥ 1 - 2e^-√(p)/2, implying that d_W_2 (μ_E̅, μ_E̅_1) 0. In order to find the LSD of E̅_1, we proceed with the Stieltjes transform once again. We first show (<ref>). Our technique remains same for this part, except that we need analogues of equations (<ref>) and (<ref>). First we define
S̃_n(z) = 1/p( E̅_1 - zI)^-1,
E̅_1k = E̅_1 -1/√(np)ξ_k (X_k X_k^⊤ - σ^2 I).
By Lemma <ref> (b) and (c),
|(E̅_1 - zI)^-1 -(E̅_1k - zI)^-1| ≤p + X_k^2/v^2 √(np).
Now, by virtue of Lemma <ref> for l=4,
|S̃_n(z) - S̃_n(z)|^4 ≤K_4/p^4( ∑_k=1^n (2(p + X_k^2)/√(np))^2 )^2
≤16 K_4 n^2/v^4 n^2 p^6( ∑_k=1^n (2p^2 + 2X_k^4))^2
≤64 K_4 n^2/v^4 n^2 p^6(np^2 + ∑_k=1^n X_k^4)^2
≤128 K_4 n^2/v^4 n^2 p^6(n^2p^4 + (∑_k=1^n X_k^4)^2)
≤128 K_4 n^2/v^4 n^2 p^6(n^2p^4 + nX_1^8 + n(n-1) (X_1^4)^2 )
=O(1/p^2).
Here the last line follows from the facts that X_1^8 = O(p^4) and X_1^4 = O(p^2). Now, by Borel-Cantelli lemma, (<ref>) follows. So, it is enough to consider S̃_n(z). Define s̃_n(z) = S̃_n(z). By Lemma <ref>, we get
σ^4 β_n^2 s̃_n^2(z) + z s̃_n(z) + 1 = o(1),
which has the solutions
s̃_n(z) = -z ±√(z^2 - 4σ^4 β_n^2 + o(1))/2σ^4 β_n^2.
Here √(z'), for any z' ∈ℂ\ℝ, denotes the square root in the upper-half plane. Clearly, one must take the + sign in (<ref>), otherwise the right hand side has negative imaginary part which is not allowed. Now taking limit as n →∞, we get
lim_n →∞s̃_n(z) = -z ±√(z^2 - 4σ^4 β^2)/2σ^4 β^2,
which the Stieltjes transform of the semi-circle law with variance β^2 σ^4. This completes the proof of Theorem <ref>.
Let S̃_n(z) and s̃_n(z) be defined as in Theorem <ref>. Then, s̃_n(z) satisfies the following approximate functional equation
σ^4 β_n^2 s̃_n^2(z) + z s̃_n(z) + 1 = o(1).
To this end, take an independent copy (ξ_0, X_0) of (ξ_1, X_1). Define
E̅'_1 = 1/√(np)∑_i=0^n ξ_i (X_i X_i^⊤ - σ^2 I),
S'_n = 1/p( E̅'_1 - zI)^-1.
Proceeding as in the proof of Theorem <ref>, we can get
1 + z s̃'_n(z) = n+1/p√(np)ξ_0 (X_0^⊤ (E̅'_1 - zI)^-1 X_0 - σ^2 (E̅'_1 - zI)^-1),
where s̃'_n(z) = S̃'_n(z). Since E̅'_1_≤1/v,
|ξ_0 (X_0^⊤ (E̅'_1 - zI)^-1 X_0 - σ^2 (E̅'_1 - zI)^-1)| ≤1/vX_0^2 + p σ^2 /v≤2p σ^2 /v.
Thus, for the asymptotic purposes one can change the n+1 by n in the right hand side of (<ref>). By Lemma <ref>(b), (c),
|(E̅'_1 - zI)^-1 - (E̅_1 - zI)^ -1| ≤1/v^2(1/√(np)X_0^2 + σ^2 √(p/n)),
where the expectation of the right hand side is 2 σ^2/v^2√(p/n). This also implies that |s̃'_n(z) - s̃_n(z)| ≤2 σ^2/v^21/√(np). Now, applying Lemma <ref>(b),
|X_0^⊤ (E̅'_1 - zI)^-1 X_0 - X_0^⊤(E̅_1 + 1/√(np)ξ_0 X_0 X_0^⊤ - zI )^-1 X_0 | ≤σ^2/v √(np)X_0^2,
where the right hand side has expectation σ^2/v√(p/n). Using these perturbation results, we have the following upgrade of (<ref>):
1 + zs̃_n(z) = √(n)/p√(p)ξ_0 (X_0^⊤( E̅_1 + 1/√(np)ξ_0 X_0 X_0^⊤ - zI )^-1 X_0 - σ^2 (E̅_1 - zI)^-1) + o(1).
We apply Sherman-Morrison formula (see Lemma <ref>(i)) on the first term in the right hand side to get
1/√(np)ξ_0 X_0^⊤( E̅_1 + 1/√(np)ξ_0 X_0 X_0^⊤ - zI )^-1 X_0 = 1/√(np)ξ_0 X_0^⊤( E̅_1 - zI )^-1 X_0/1 + 1/√(np)ξ_0 X_0^⊤( E̅_1 - zI )^-1 X_0.
Notice we can write (<ref>) in the form 1 + zs̃_n(z) = - B_1 + B'_1 + B_2 - B'_2 +o(1), where
B_1 = σ^2/p^2ξ_0^2 (E̅_1 - zI)^-1 X_0^⊤ ( E̅_1 - zI )^-1 X_0,
B'_1 = B_1 1/√(np)ξ_0 X_0^⊤ ( E̅_1 - zI )^-1 X_0/1 + 1/√(np)ξ_0 X_0^⊤ ( E̅_1 - zI )^-1 X_0,
B_2 = √(n)/p √(p)ξ_0 R,
B'_2 = 1/p^2ξ_0^3 R X_0^⊤ (E̅_1 -zI)^-1 X_0/1 + 1/√(np)ξ_0 X_0^⊤ (E̅_1 -zI)^-1 X_0,
R = X_0^⊤ (E̅_1 - zI)^-1 X_0 - σ^2 (E̅_1 - zI)^-1.
We show that among the four, only B_1 contributes to the equation. Note
B_1 = σ^4/p^2ξ_0^2 ((E̅_1 - zI)^-1)^2 + σ^2/p^2ξ_0^2 R(E̅_1 - zI)^-1 = σ^4 ξ_0^2 S̃_n^2(z)+σ^4 S̃_n(z) R/p.
By using Lemma <ref>,
|R|^2 ≤ C_2 ((E̅_1 - zI)^-2) ≤C_2 p/v^2,
where C_2 only depends on the fourth moment of w_11. This additionally shows that R= O_P(1). Also, (S̃_n(z)) → 0 because, S̃_n(z) is a bounded sequence that converges almost surely. Thus,
B_1 = σ^4 (ξ_0^2 S̃_n^2(z)) + o(1) = σ^4 ξ_0^2 S̃_n^2(z) +o(1) =σ^4 β_n^2 s̃_n^2(z) + o(1).
By (<ref>) and (<ref>), B_2 = o(1). Notice by Lemma <ref>(h), Im(z + z/√(np)ξ_0 X_0^⊤ (E̅_1 -zI)^-1 X_0) ≥ v, |X_0^⊤ (E̅_1 -zI)^-1 X_0| ≤X_0^2/v and (E̅_1 -zI)^-1≤ p/v. So,
|B'_1| ≤σ^2 |z| /v^3 p √(np)X_0^4 = σ^2 |z|/v^3 p √(np) (p w_11^4 +p(p-1)σ^4) = o(1),
|B'_2| ≤|z|/v^2 p^2 [|R| X_0^2] ≤|z|/v^2 p^2 ( R^2)^1/2 (X_0^4)^1/2 = o(1).
Now, (<ref>) can be written as
σ^4 β_n^2 s̃_n^2(z) + z s̃_n(z) + 1 = o(1)
completing the proof of Lemma <ref>
§ ACKNOWLEDGEMENTS
SG was supported in part by the MOE grants R-146-000-250-133, R-146-000-312-114, A-8002014-00-00 and MOE-T2EP20121-0013. SSM was partially supported by the INSPIRE research grant DST/INSPIRE/04/2018/002193 from the Dept. of Science and Technology, Govt. of India, and a Start-Up Grant from Indian Statistical Institute.
abbrvnat
§ AUXILIARY RESULTS
Here we collect lemmas and results borrowed from the literature. First we define some notations.
Mat_n(ℂ) := The set of all n × n matrices with complex entries.
For A ∈Mat_n(ℂ), define the Hilbert-Schmidt norm of A by
A_ := √(∑_1 ≤ i ≤ j ≤ n|A_ij|^2).
For x ∈ℝ^n, let x = √(∑_i = 1^n x^2_i). The Operator norm of A is defined as
A_ := sup_x = 1Ax.
For a matrix A with eigenvalues λ_1, …, λ_n, let F_A(x) := 1/n∑_i = 1^n 𝕀_[λ_i, ∞)(x) be the empirical distribution function associated with the eigenvalues. Let 𝕊_n denotes the set of all permutations of the set {1, 2, …, n}.
Let A,B ∈Mat_n(ℂ) are two normal matrices, with eigenvalues λ_1(A),λ_2(A), …, λ_n(A) and λ_1(B),λ_2(B), …, λ_n(B) respectively. Then we have
min_σ∈𝕊_n∑_i = 1^n | λ_i(A) - λ_σ(i)(B)|^2 ≤A - B ^2_.
An immediate consequence of this is that
d_W_2(μ_A, μ_B)^2 ≤A - B^2/n.
Let A, B ∈Mat_n(ℂ) are two Hermitian matrices. Then
sup_x ∈ℝ |F_A(x) - F_B(x)| ≤rank(A - B)/n.
Let C ∈Mat_p(ℝ) be a symmetric, positive semi-definite matrix, y ∈ℝ^p, z = u+ι v ∈ℂ^+, ε >0, then
(a) (C - zI)^-1_≤ 1/v;
(b) |(C + y y^⊤ -zI)^-1 - (C - zI)^-1| ≤min{1/v, y^2/v^2};
(c) |(C + ε I -zI)^-1 - (C -zI)^-1| ≤p ε/v^2;
(d) (C + ε I -zI)^-1 - (C -zI)^-1_≤ε/v^2;
(e) |y^⊤ (C+y y^⊤ -zI)^-1 y| ≤ 1 + |z|/v;
(f) (z(C- zI)^-1) ≥ 0;
(g) ((C - zI)^-1) >0;
(h) (z y^⊤ (C- zI)^-1y) ≥ 0;
(i) y^⊤ (c+ y y^⊤ - zI)^-1 y = y^⊤ (C-zI)^-1 y/1 + y^⊤ (C-zI)^-1 y.
In Lemma <ref>, parts (a) and (e)-(i) can be found in <cit.>. Parts (b)-(d) follow from the matrix identity A^-1 -B^-1 = A^-1(B-A)B^-1 provided A and B are invertible. We also state the following result for quadratic form of random vectors <cit.>.
Let A ∈Mat_p(ℂ) be non-random and Y = (Y_1, Y_2, …, Y_p) ∈ℂ^p be a random vectors of independent entries. Suppose Y_i = 0, |Y_i|^2 = 1 and |Y_i|^l≤ν_l for l= 3, 4, …, L, for some L ≥ 4. Then, for all l ≤ L/2, there exists C_l>0 (depends only on l), such that
|Y^* A Y - A|^l ≤ C_l((ν_4 (AA^*))^l/2 + ν_2l ( (AA^*))^l/2).
Let X_k be a complex martingale difference sequence with respect to some filtration {ℱ_k}_k ≥ 1. Then, for l ≥ 1,
|∑_k=1^n X_k|^l ≤ K_p (∑_k=1^n |X_k|^2 )^l/2,
whenever sup_k |X_k|^l < ∞.
|
http://arxiv.org/abs/2409.03562v1 | 20240905141932 | Shift invariant subspaces of large index in the Bloch space | [
"Nikiforos Biehler"
] | math.FA | [
"math.FA",
"math.CV",
"30H30, 30B10, 47A15, 47B91"
] |
§ ABSTRACT
We consider the shift operator M_z, defined on the Bloch space and the little Bloch space, and we study the corresponding lattice of invariant subspaces. The index of a closed invariant subspace E is defined as ind(E) = dim(E/M_zE). We construct closed, shift invariant subspaces in the Bloch space that can have index as large as the cardinality of the unit interval [0,1]. Next we focus on the little Bloch space, providing a construction of closed, shift invariant subspaces that have arbitrarily large index. Finally we establish several results on the index for the weak-star topology of a Banach space and prove a stability theorem for the index when passing from (norm closed) invariant subspaces of a Banach space to their weak-star closure in its second dual. This is then applied to prove the existence of weak-star closed invariant subspaces of arbitrary index in the Bloch space.
§ INTRODUCTION & MAIN RESULTS
Consider the unit disc 𝔻 = { z ∈ℂ : |z| < 1 } of the complex plane. The Bloch space ℬ is defined as the set of functions f, analytic in 𝔻, that satisfy sup_|z|<1(1-|z|^2)|f'(z)| < +∞. The quantity
‖ f ‖_ℬ = |f(0)| + sup_|z|<1(1-|z|^2)|f'(z)|
is a norm on the Bloch space, which makes it into a Banach space. The closure of analytic polynomials with respect to that norm is a subspace of the Bloch space, called the little Bloch space, and is denoted by ℬ_0. An equivalent way of defining the little Bloch space is as the subspace of functions in the Bloch space that satisfy lim_|z|→ 1^-(1-|z|^2)|f'(z)| = 0. Functions in the Bloch space enjoy several nice properties. For a function f in the Bloch space, the quantity sup_|z|<1(1-|z|^2)|f'(z)| is Möbius invariant, meaning it remains unchanged after composing f on the right by any automorphism of the unit disc. Another well known fact is that an analytic function belongs to the Bloch space if and only if it is Lipschitz with respect to the hyperbolic metric on the unit disc. The Bloch space has been thoroughly studied (in <cit.>,<cit.> for example) as it is linked with many topics in analytic function theory.
We consider the operator of multiplication M_z : ℬ→ℬ, M_zf(z) = zf(z), also called the shift operator, and we are interested in studying the lattice of closed invariant subspaces of M_z. Let E be a closed shift invariant subspace.
2020 Mathematics Subject Classification: 30H30, 30B10, 47A15, 47B91.
Keywords and phrases: Bloch space, little Bloch space, invariant subspaces, shift operator, index, lacunary Taylor series.
It follows from general properties of the shift operator that zE is closed (see Lemma 1.2), allowing us to define the index of E to be the quantity
ind(E) := dim(E/zE).
Our goal is to show that, for the spaces under consideration, there exist invariant subspaces for which the index can be as large as possible, that is to say, as large as the space permits it to be.
Our motivation comes from a series of papers, starting of course from the celebrated Beurling Theorem, which characterizes the invariant subspaces of the shift operator in the classical Hardy space H^2. A fact following from that characterisation is that every invariant subspace has the so called codimension one (also called index one) property, i.e. every non-trivial invariant subspace has index equal to one. A classic reference for properties of the index, in this context, and the codimension one property is Richter's article <cit.>.
In a 1985 paper C. Apostol, H. Bercovici, C. Foias and C. Pearcy (<cit.>) proved that what was previously true for the Hardy space, is no longer true for the classical Bergman space A^2 of functions analytic in the unit disc and square integrable with respect to planar Lebesgue measure. In particular they proved the existence of invariant subspaces E_k such that ind(E_k) = k, for k =1,2, …, +∞. Numerous results have been published since then, in hope to understand the lattice of invariant subspaces of the shift operator. In one direction, we have several results proving the codimension one property, such as in the classical Dirichlet space (<cit.>) of analytic functions in the unit disc whose derivative belongs to A^2, or in the space ℓ^1_A of Taylor series in the unit disc with summable coefficients (<cit.>). In the other direction we have plenty of constructions, utilising different properties of each space, to prove the existence of invariant subspaces of arbitrary index. The first concrete example of an invariant subspace of index 2 in A^2 has been given by H. Hedenmalm in <cit.>, using results on sampling and interpolation in the Bergman space. Later lacunary series were used by A. Borichev for a wide range of spaces including the classical Bergman spaces, a variety of mixed norm spaces, growth spaces and some weighted sequence spaces (<cit.>). In <cit.> E. Abakumov and A. Borichev proved the existence of invariant subspaces of arbitrary index for a variety of weighted sequence spaces using solution sets of convolution equations. In particular they show that the space ℓ^p_A, of Taylor series in the unit disc, with p-summable Taylor coefficients, contains invariant subspaces of arbitrary index as long as p>2. The case 1<p<2 is still an open problem. Special families of inner functions have been used for H^∞ (<cit.>,<cit.>).
In the recent years the Bloch space has attracted a lot of attention. In a recent paper (<cit.>), A. Limani and A. Nicolau answer several open questions in the Bloch space, related to invariant subspaces and cyclicity. In particular, they prove a Beurling-type theorem for singly generated, weak-star invariant subspaces in the Bloch space. This gives more motivation to study the index of the shift invariant subspaces in the Bloch space.
In this paper, we exploit ideas developed by A. Borichev in <cit.> and construct lacunary series, with almost maximal growth, in order to prove the existence of closed, shift invariant subspaces of arbitrarily large index in each of the spaces ℬ and ℬ_0, as well as weak-star closed invariant subspaces in ℬ, which under this topology is not a Banach space. The maximal growth for Bloch functions can be obtained by integrating the derivative of a function and using the definition of the norm.
In particular for every f ∈ we have:
|f(z)| ≤1/2‖ f ‖_ℬlog((1 + |z|)/(1- |z|)) ≲‖ f ‖_ℬlog(1/(1- |z|)).
A more accurate result concerning the growth of Bloch functions comes from Makarov's Law of the Iterated Logarithm (<cit.>). Inequality (1) also means that integrating a Bloch function results in a function in the little Bloch space, which will be used with little mention throughout the text.
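For the reader's convenience, the logarithmic growth in (1) follows by integrating the derivative along the radius: since (1-|w|^2)|f'(w)| ≤‖ f ‖_ℬ for every w ∈𝔻,
|f(z) - f(0)| = | ∫_0^1 f'(sz) z ds | ≤‖ f ‖_ℬ∫_0^|z|dt/(1-t^2) = (‖ f ‖_ℬ/2) log((1+|z|)/(1-|z|)).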
By lacunary series we will always mean some function of the form:
f(z) = ∑_n=0^∞ a_n z^s_n , z ∈𝔻 ,
where {a_n}_n=0^∞⊂ℂ and {s_n}_n=0^∞⊂ℕ, and such that the exponents s_n grow sufficiently fast. The rate at which the sequence s_n should grow is described by the Ostrowski-Hadamard gap theorem (see also Fabry's gap theorem), which asserts that if s_n+1/s_n≥ q > 1 for some q>1 and all n ∈ℕ, and if the Taylor coefficients are such that the radius of convergence of the above series is equal to 1, then that function has the unit circle as a natural boundary. Such sequences {s_n} will simply be called lacunary sequences. Even though series of this type are notoriously badly behaved, they can be useful to construct functions with one's desired properties. Lacunary functions are neatly characterised in the Bloch space:
Let f(z) = ∑_n=0^∞ a_n z^s_n be a lacunary series. Then f belongs to the Bloch space (little Bloch space resp.) if and only if (a_n) is bounded (a_n → 0 resp.)
Proof of this can be found in <cit.>, Theorem 1.14. A key tool in proving that a given invariant subspace has the desired index is the following lemma on summation of indices.
Let X be a Banach space of analytic functions on the unit disc, satisfying the division property (<cit.>), i.e.
* X is a Banach space contained in Hol(𝔻), the space of analytic functions on 𝔻.
* Evaluation functionals k_λ are bounded on X, for all λ∈𝔻.
* zf belongs to X whenever f ∈ X.
* If f ∈ X and f(λ) = 0, then there exists a function g∈ X such that (z - λ)g(z) = f(z).
Let N ∈{1,2, …, +∞}, and for each 0 ≤ n < N let E_n ⊆ X be an invariant subspace of index one, and f_n ∈ E_n with f_n(0) ≠ 0. Suppose moreover that for each 0 ≤ n < N there exists c_n > 0 such that
c_n|g(0)| ≤‖ g + h ‖_X , g ∈ E_n , h ∈⋁_k ≠ nE_k.
Then ind(⋁_0 ≤ n < N E_n) = N.
A proof of the above lemma is given in <cit.>. In the original paper of S. Richter <cit.> it is proven that when the shift operator M_z is defined on such a space X, then M_z is bounded from above and below, meaning also that zE is closed whenever E is closed. As a consequence the index is well defined for every closed invariant subspace. In our case the spaces E_n will always be cyclic invariant subspaces [f_n] := span{ pf_n : p polynomial} with f_n(0) ≠ 0, and it will be enough to check condition (2) for polynomial multiples of the generating functions. A proof of the fact that the Bloch space verifies the requirements of Lemma 1.2. will be provided in the Section 2.
With all the preliminary information established, we present the results contained in this article.
For every N ∈{ 1 , 2, … , + ∞} there exists an invariant subspace E_N ⊆ℬ such that ind(E_N) = N.
Since the Bloch space is non-separable with that norm, we are able to produce an example of an invariant subspace with uncountable index.
There exists an invariant subspace E of ℬ such that ind(E) = card([0,1]).
The above two theorems utilise elementary properties of the Bloch space and its norm, and the vectors generating the cyclic subspaces are constructed inductively. If one wants to pass from the Bloch space to its “little-oh” analogue, one will be forced to use different techniques. Since functions in the little Bloch space satisfy (1-|z|^2)|f'(z)| → 0 as |z| → 1^-, the norm on the space is unable to capture the behaviour of the function by looking close to the boundary. This impediment proves fatal to the argument used in Theorems 1.3 and 1.4, but gave us motivation to develop a new argument, which requires the construction of lacunary functions with several special properties. The growth of the functions, the most crucial part of their behaviour, is studied with the aid of a classical theorem of R. Salem and A. Zygmund about the distribution function of trigonometric series. In this approach we construct functions which are almost maximally large on sufficiently massive subsets of the unit disc, and use an iterative argument to prove good bounds on the L^p means of polynomials on the aforementioned sets.
For every N ∈{ 1 , 2, … , + ∞} there exists an invariant subspace E_N ⊆ℬ_0 such that ind(E_N) = N.
Finally, we turn to a more abstract setting and extend several results from S. Richter's paper <cit.> for the weak-star topology. In particular, let X_0, X, Y be Banach spaces of analytic functions satisfying the division property, and that satisfy the following dualities: X_0^∗≅ Y and Y^∗≅ X. The space X_0 can always be continuously embedded into X. Given an invariant subspace E ⊆ X_0, write E^w^∗ for its weak-star closure in X. Under some natural assumptions on the spaces we prove the following theorem:
Let E be an invariant subspace of X_0 such that ind(E)= N for some N ∈{1, 2, … ,+∞}. Then the subspace E^w^∗⊆ X is invariant, weak-star closed and ind(E^w^∗) = N.
We may then equip the Bloch space with the weak-star topology, inherited from its pre-dual, which can be identified with the Bergman space A^1(𝔻). This weaker topology makes the Bloch space separable, and can be viewed as a more natural choice of topology when studying certain problems, such as existence of cyclic vectors of the shift operator. With this set-up, it makes sense to look for invariant subspaces that are weak-star closed. Since the space becomes separable with the weak-star topology we cannot expect the index to be as large as we demonstrate in Theorem 1.4, but at most countable. Theorem 1.6 can be combined with Theorem 1.5 to obtain weak-star closed invariant subspaces of arbitrarily large index for the Bloch space.
For every N ∈{ 1 , 2, … , + ∞} there exists a weak-star closed invariant subspace E_N ⊆ℬ such that ind(E_N) = N.
Theorem 1.7 provides an interesting antithesis to a phenomenon previously known in H^∞. There, the invariant subspaces behave quite differently when we pass from a strong to a weaker topology. On one hand, there are plenty of examples demonstrating that norm-closed invariant subspaces may have arbitrarily large index. On the other hand, once the space is equipped with the weak-star topology, a Beurling-type theorem will hold, as in the classical case. Theorem 1.7 demonstrates that this is no longer the case in the Bloch space.
In each one of the Theorems 1.3, 1.5, 1.7 it is actually proven that there is a sequence of functions f_n, 1 ≤ n < ∞ in the appropriate space (either ℬ or ℬ_0) such that for every 1 ≤ n_1 < ⋯ < n_N < ∞, the invariant subspace ⋁_1 ≤ k ≤ N[f_n_k] has index equal to N.
The paper is organized as follows. Section 2 is dedicated to the proof of Theorems 1.3 and 1.4. In Section 3 we provide the proof as well as the necessary background for Theorem 1.5. Finally, Section 4 is left to prove Theorem 1.6 and Corollary 1.7, as well as to extend the required results from Richter's article.
§ INVARIANT SUBSPACES IN THE BLOCH SPACE
We begin by verifying that the Bloch space satisfies the hypotheses of Lemma 1.2, i.e., that it is indeed a Banach space of analytic functions satisfying the division property. The case of the little Bloch space is almost identical; one simply needs to verify that dividing out the zero of a given function in the little Bloch space results in a function which is still in the little Bloch space. Property (i) is obvious, and inequality (1) guarantees (ii) and (iii) are satisfied. It remains to check (iv), a.k.a the division property. Let E_0 = { f ∈ℬ | f(0) = 0 }, and R_0 : E_0 →ℬ the operator that maps f ↦ f(z) / z. We will prove that R_0 is bounded, and hence well-defined as well. The general case λ∈𝔻 follows from the Möbius invariance of the Bloch space.
Consider f ∈ℬ with f(0) = 0 and fix 0 < s < 1. We can find an analytic function g ∈ Hol(𝔻) such that f(z) = zg(z). We have:
‖ R_0(f) ‖_ℬ = |g(0)| + sup_|z|<1 (1-|z|^2)|g'(z)|.
Obviously we have that g(0) = f'(0), so |g(0)| ≤‖ f ‖_ℬ. For the rest we may write:
sup_|z|<1 (1-|z|^2)|g'(z)| ≤max{sup_|z|≤ s (1-|z|^2)|g'(z)| , sup_s<|z|<1 (1-|z|^2)|g'(z)| }
Let 0< ε < 1-s and define γ to be the anti-clockwise oriented circle centered at the origin and of radius s + ε. Then, when |z|< s + ε we get by Cauchy's formula:
g'(z) = 1/2π i∫_γg'(ζ)/ζ - z dζ =1/2π i∫_γf'(ζ)-g(ζ)/ζ(ζ - z) dζ =
= 1/2π i∫_γf'(ζ)/ζ(ζ - z) dζ -1/2π i∫_γζ g(ζ)/ζ^2(ζ - z) dζ =
= 1/2π i∫_γf'(ζ)(1-|ζ|^2)/ζ(ζ - z)(1-|ζ|^2) dζ - 1/2π i∫_γζ g(ζ)(1-|ζ|^2)/ζ^2(ζ - z)(1-|ζ|^2) dζ.
From this we get the estimate:
sup_|z| ≤ s(1-|z|^2)|g'(z)| ≤sup_|z| ≤ s+ε(1-|z|^2)|f'(z)|·1/s·1/(1-(s+ε)^2)
+ sup_|z| ≤ s+ε(1-|z|^2)|f(z)|·1/s(s+ε)·1/(1-(s+ε)^2).
This is true for every ε>0 small enough, hence from the above inequality, along with the fact that the integration operator is bounded on the Bloch space (inequality (1)), we get that there is a constant C(s)>0, independent of f, such that sup_|z|<s (1-|z|^2)|g'(z)| ≤ C(s) ‖ f ‖_ℬ. On the other hand:
sup_s <|z|<1 (1-|z|^2)|g'(z)| ≤1/s sup_s <|z|<1 (1-|z|^2)|zg'(z)| = 1/s sup_s <|z|<1 (1-|z|^2)|f'(z) - g(z)| ≤
≤1/s sup_s <|z|<1 (1-|z|^2)|f'(z)| + 1/s sup_s <|z|<1 (1-|z|^2)|g(z)| ≤
≤1/s sup_s <|z|<1 (1-|z|^2)|f'(z)| + 1/s^2 sup_s <|z|<1 (1-|z|^2)|f(z)|.
Similarly we obtain a constant C'(s)>0 such that sup_s< |z|<1 (1-|z|^2)|g'(z)| ≤ C'(s) ‖ f ‖_ℬ. Overall, ‖ g ‖_ℬ≤max{1, C(s), C'(s)}‖ f ‖_ℬ, and the proof is finished.
We may pass to the proof of Theorems 1.3 and 1.4. For every n∈ℕ we introduce the auxiliary function U_n(r) = (1-r^2)nr^n-1 , 0 ≤ r < 1. Note that sup_0<r<1U_n(r) = ‖ z^n ‖_ℬ. We also consider the sequence of radii r_n = (1-1/n)^1/2. The radius r_n is sufficiently close to the maximizing point of the function U_n, and we see that U_n(r_n) → 1/√(e). We begin by stating a lemma that describes the construction of several lacunary sequences with some additional properties.
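To see the last claim, note that 1 - r_n^2 = 1/n, so that
U_n(r_n) = (1-r_n^2) n r_n^n-1 = (1-1/n)^((n-1)/2)→ e^-1/2 = 1/√(e), as n →∞.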
There exists a sequence { s(n,i)}_1 ≤ i ≤ n < ∞ such that:
* 1 < s(1,1) < s(2,1) < s(2,2) < s(3,1) < s(3,2) < s(3,3) < s(4,1) < s(4,2) < ⋯
* For every 1 ≤ i ≤ n, we have s(n+1,i)/s(n,i)≥ 2, for all n ∈ℕ.
* For every (n,i) ≠ (n',i') we have U_s(n,i)(r_s(n',i')) < 1/2^n+n'.
The proof of this lemma will be given at the end of this section. We may proceed to prove Theorem 1.3.
Proof of Theorem 1.3.
We consider a sequence { s(n,i)}_1 ≤ i ≤ n < ∞, as constructed using Lemma 2.1. We define a collection of functions as follows:
f_i(z) = 1 + ∑_n=i^∞z^s(n,i) , z∈𝔻, 1 ≤ i < ∞.
Note that condition (ii) of Lemma 2.1 states that for each i, the function f_i is lacunary. Moreover, Proposition 1.1 guarantees that these functions all belong to the Bloch space as their Taylor coefficients are bounded. For each N ∈ we define the invariant subspace E_N = ⋁_1 ≤ i ≤ N [f_i]. For N = ∞ we define E_∞ =⋁_1 ≤ i < ∞ [f_i] and our goal is to show that ind(E_N) = N. According to Lemma 1.2, it suffices to show that for every 0 ≤ M < ∞ and for every 1 ≤ i_0 < ∞ there exists some c_i_0 > 0 such that for every 1 ≤ i_1 < i_2 < ⋯ < i_M, with i_j ≠ i_0 for j ≠ 0, and for every polynomials p_0, p_1, …, p_M, we have:
c_i_0|p_0(0)| = c_i_0|f_i_0(0)p_0(0)| ≤‖ f_i_0p_0 + ∑_j=1^M f_i_jp_j ‖_.
To that end, fix 0 ≤ M < ∞, and consider 1 ≤ i_0 < ∞ and 1 ≤ i_1 < i_2 < ⋯ < i_M satisfying i_j ≠ i_0 for j ≠ 0 as well as polynomials p_0,p_1,…,p_M. To simplify notation we denote U_n,i := U_s(n,i) and r_n,i := r_s(n,i). For n ≥ i_0 we have that:
‖ f_i_0p_0 + ∑_j=1^Mf_i_jp_j ‖_ ≥sup_|z| = r_n,i_0(1-|z|^2)|(f_i_0p_0 + ∑_j=1^Mf_i_jp_j)'| ≥
sup_|z|=r_n,i_0(1-|z|^2)|p_0(z)s(n,i_0)z^s(n,i_0)-1|
- sup_|z|=r_n,i_0(1-|z|^2)|p_0(z) ∑_k ≥ i_0k ≠ ns(k,i_0)z^s(k,i_0)-1|
- sup_|z|=r_n,i_0(1-|z|^2)|p'_0(z)f_i_0(z)|
- ∑_j=1^Msup_|z|=r_n,i_0(1-|z|^2)|p_j(z)f'_i_j(z)|
- ∑_j=1^Msup_|z|=r_n,i_0(1-|z|^2)|p'_j(z)f_i_j(z)|.
For (5) we have that:
sup_|z|=r_n,i_0(1-|z|^2)|p_0(z)s(n,i_0)z^s(n,i_0)-1| = U_n,i_0(r_n,i_0) ·sup_|z| = r_n,i_0|p_0(z)|.
Note that U_n,i_0(r_n,i_0) → 1/√(e), as n →∞. The maximum principle guarantees that
sup_|z| = r_n,i_0|p_0(z)| →‖ p_0 ‖_∞, as n →∞. For (6) we have that:
sup_|z|=r_n,i_0(1-|z|^2)|p_0(z) ∑_k ≠ ns(k,i_0)z^s(k,i_0)-1| ≤p_0_∞·∑_k ≠ n U_k,i_0(r_n,i_0) ≤p_0_∞·∑_k ≠ n1/2^k+n,
where we used property (iii) of Lemma 2.1. The quantities in (7) and (9) can be treated alike. If 0 ≤ j ≤ M, then:
sup_|z|=r_n,i_0(1-|z|^2)|p'_j(z)f_i_j(z)| ≤p'_j_∞·sup_|z|=r_n,i_0(1-|z|^2)|f_i_j(z)|.
Since functions in the Bloch space grow at most logarithmically, we obtain that
sup_|z|=r_n,i_0(1-|z|^2)|f_i_j(z)| ⟶ 0, as n →∞.
Finally, for (8) we have that:
∑_j=1^Msup_|z|=r_n,i_0(1-|z|^2)|p_j(z)f'_i_j(z)| ≤∑_j=1^M p_j_∞·sup_|z|=r_n,i_0(1-|z|^2)|f'_i_j(z)|
≤∑_j=1^M p_j_∞·∑_k = i_j^∞U_k,i_j(r_n,i_0) ≤∑_j=1^M p_j_∞·∑_k = i_j^∞1/2^k+n.
where we used Lemma 2.1 once again. By substituting (10)-(13) into (5)-(9), we get that:
U_n,i_0(r_n,i_0) ·sup_|z| = r_n,i_0|p_0(z)| ≤‖ f_i_0p_0 + ∑_j=1^Mf_i_jp_j ‖_ + p_0_∞·∑_k ≠ n1/2^k+n
+ ∑_j=0^M p'_j_∞·sup_|z|=r_n,i_0(1-|z|^2)|f_i_j(z)| + ∑_j=1^M p_j_∞·∑_k = i_j^∞1/2^k+n
≤‖ f_i_0p_0 + ∑_j=1^Mf_i_jp_j ‖_ + p_0_∞·1/2^n + ∑_j=0^M p'_j_∞·sup_|z|=r_n,i_0(1-|z|^2)|f_i_j(z)| + 1/2^n·∑_j=1^M p_j_∞.
By letting n →∞ we get that:
1/√(e)·p_0_∞≤‖ f_i_0p_0 + ∑_j=1^Mf_i_jp_j ‖_ ,
and hence:
1/√(e)· |p_0(0)| ≤1/√(e)·p_0_∞≤‖ f_i_0p_0 + ∑_j=1^Mf_i_jp_j ‖_.
And we obtained inequality (4) with c_i_0 = 1/√(e). ▪
The above proof may serve as a model to prove Theorem 1.4. The challenge that arises is to define a continuum of functions with the properties described above, instead of countably many.
Proof of Theorem 1.4.
We apply Lemma 2.1 and set s_n = s(n,1), n ≥ 1. The properties of the sequence may be summarized as follows:
* 1 < s_1 < s_2 < ⋯.
* s_n+1/s_n≥ 2, for all n ∈.
* For every n ≠ n' we have U_s_n(r_s_n') < 1/2^n+n'.
To define an invariant subspace, we are in need of a lemma that we will use without proof, as a detailed proof can be found in Lemma 3.2, <cit.>.
There exists a family {N_α : α∈ [0,1] } of infinite subsets of ℕ such that for every M ∈ℕ and every finitely many indices α_0,α_1, …, α_M ∈ [0,1] with α_0 ≠α_i for 1 ≤ i ≤ M we have:
card( N_α_0\⋃_i=1^M N_α_i) = ∞.
We consider the family {N_α}_α∈ [0,1] and we define for each α∈ [0,1] a function:
f_α(z) = 1 + ∑_n∈ N_αz^s_n , z∈𝔻.
Once again, these functions are lacunary, and they belong to the Bloch space. The invariant subspace is defined similarly, E = ⋁_α∈ [0,1][f_α], and we aim to show that ind(E) = dim(E/zE) = card([0,1]). By definition, it suffices to show that for any α_0 ∈ [0,1] we have:
f_α_0 + zE ∉span{ f_α + zE : α∈ [0,1] , α≠α_0 },
where the closure is in the quotient topology of E/zE, and f_α+ zE denotes the equivalence class of f_α in the quotient space. Let α_0 ∈ [0,1]. It is sufficient to find some constant c >0 such that:
‖ f_α_0 + zE + u ‖_E/zE≥ c ,
for every u ∈span{ f_α + zE : α∈ [0,1] , α≠α_0 }
To that end, consider any finite number of indices α_1, …, α_M ∈ [0,1] with α_i ≠α_0, and any polynomials q,p_1,…,p_M. Then we need to show that
‖ f_α_0(1 + z · q) + ∑_i=1^M p_i f_α_i‖_ℬ≥ c.
Set p_0 = 1 + zq. From Lemma 2.2 we have that there exists some increasing sequence k_n N_α_0 such that k_n ∉⋃_i=1^M N_α_i. Starting once again as in Theorem 1.3 we obtain:
‖ f_α_0p_0 + ∑_j=1^M f_α_jp_j ‖_ ≥sup_|z| = r_k_n(1-|z|^2)|(f_α_0p_0 + ∑_j=1^M f_α_jp_j)'| ≥
sup_|z|=r_k_n(1-|z|^2)|p_0(z)s_k_nz^s_k_n-1|
- sup_|z|=r_k_n(1-|z|^2)|p_0(z) ∑_m ≠ k_ns_mz^s_m-1|
- sup_|z|=r_k_n(1-|z|^2)|p'_0(z)f_α_0(z)|
- ∑_j=1^Msup_|z|=r_k_n(1-|z|^2)|p_j(z)f'_α_j(z)|
- ∑_j=1^Msup_|z|=r_k_n(1-|z|^2)|p'_j(z)f_α_j(z)|.
Since k_n ≠ m for all indices m appearing in the sums above, condition (iii') is satisfied, and hence we may replicate the argument of Theorem 1.3. Therefore by letting k_n →∞ we obtain:
1/√(e)·p_0_∞≤‖ f_α_0p_0 + ∑_j=1^M f_α_jp_j ‖_,
and since p_0_∞ = 1+zq_∞≥ 1 we arrive at:
1/√(e)≤‖ f_α_0p_0 + ∑_j=1^M f_α_jp_j ‖_,
which is inequality (17) with c=1/√(e). ▪
We finish this section by proving Lemma 2.1.
Proof of Lemma 2.1.
We will construct the sequence s(n,i) inductively. We may define s(1,1) > 1 as we like. To define the integer s(2,1), we take it to be s(2,1) ≥ 2 s(1,1) so that it satisfies conditions (i) and (ii). For condition (iii), we notice that
For every n∈ℕ, U_n(r) → 0, as r → 1^-, and
for every r ∈ (0,1), U_n(r) → 0, as n →∞.
This means we can choose s = s(2,1) large enough so that:
U_2,1(r_1,1) < 1/2^2+1 and
U_1,1(r_2,1) < 1/2^2+1.
To continue, assume that we have already defined all terms up to s(n,i) for some (n,i). The next term has the form s(n+1,1), if n=i, and has the form s(n,i+1) if n>i. Without loss of generality, we may assume the first case. First choose s(n+1,1) large enough so that s(n+1,1) > s(n,n) and s(n+1,1) ≥ 2 s(n,1). That way conditions (i) and (ii) are taken care of. For the last one we take s(n+1,1) additionally as large as to have:
U_n+1,1(r_m,i) < 1/2^n+1+m for all 1 ≤ m ≤ n and 1 ≤ i ≤ m
U_m,i(r_n+1,1) < 1/2^n+1+m for all 1 ≤ m ≤ n and 1 ≤ i ≤ m.
This is possible because of (25), (26) and because we have only finitely many predefined terms for which we need to verify the inequalities. By the inductive hypothesis we can construct the whole sequence s(n,i) satisfying all properties (i),(ii)
and (iii). The proof is complete.▪
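The construction above is effectively a greedy algorithm: each new exponent is simply taken large enough for (i)-(iii) to hold against the finitely many terms already chosen. The following small numerical sketch (an illustration only, not part of the paper; the doubling strategy and all function names are ours) makes this explicit.

def U(n, r):
    # U_n(r) = (1 - r^2) n r^(n-1)
    return (1.0 - r * r) * n * r ** (n - 1)

def r_of(s):
    # r_s = (1 - 1/s)^(1/2)
    return (1.0 - 1.0 / s) ** 0.5

def build(N):
    # Choose s(n, i) in lexicographic order; double each candidate until the
    # cross-conditions U_{s(n,i)}(r_{s(n',i')}) < 2^{-(n+n')} hold in both directions.
    s, prev = {}, 1
    for n in range(1, N + 1):
        for i in range(1, n + 1):
            cand = max(2 * s.get((n - 1, i), 1), prev + 1)
            while not all(U(cand, r_of(s[k])) < 2.0 ** -(n + k[0])
                          and U(s[k], r_of(cand)) < 2.0 ** -(n + k[0])
                          for k in s):
                cand *= 2
            s[(n, i)] = prev = cand
    return s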
Lemma 2.1 may be adapted for the spaces _α, 0<α < 1, which are Bloch type spaces with norm |f(0)|+sup_|z|<1(1-|z|^2)^α|f'(z)|. One can then prove Theorem 1.3 and 1.4 for those spaces.
§ INVARIANT SUBSPACES IN THE LITTLE BLOCH SPACE
In this section we will construct invariant subspaces in the little Bloch space of arbitrary, but countable, index. The section is split into two parts. We begin by stating some preliminary results that will be used in the construction. Then we pass to the construction of the functions generating the invariant subspaces and prove that they have the correct index. The proof of the desired index is achieved with the aid of Lemma 1.2. We denote by m the normalized Lebesgue measure of [0,2π] or 𝕋 = {z ∈ℂ : |z| = 1}.
§.§ Preliminary results
The first result is known as Makarov's inequality and it will be useful to us since it permits us to pass from estimates involving Bloch norms, to estimates involving L^p-means . Following that, is an exponential version of it, which follows by a simple calculation.
(Makarov's Inequality)
Let g ∈ℬ. Then for every 0 ≤ r < 1 and every n ∈ℕ the following inequality holds:
( ∫_𝕋 |g(rζ)|^2n dm(ζ) )^1/2n≤‖ g ‖_ℬ( 1 + (n!)^1/2n√(log1/1-r)).
A proof of the above can be found in Makarov's original paper <cit.>. Theorem 8.9 in <cit.> provides a refined version involving the above numerical constants.
(Makarov's Inequality; exponential form)
Let g ∈ℬ, with ‖ g ‖_ℬ≤ 1. Then for every 1-1/e≤ r<1 we have:
∫_r𝕋exp{|g(ζ)|^2/8log1/1-r} dm(ζ) ≤ 2
From Makarov's inequality for the function g we obtain for every n ∈:
∫_r|g|^2n dm ≤g_^2n( 1 + (n!)^1/2n√(log1/1-r))^2n
When r ≥ 1 - 1/e we get 1 + (n!)^1/2n√(log1/1-r)≤ 2(n!)^1/2n√(log1/1-r), and so:
∫_r|g|^2n dm ≤ 4^n n! log^n 1/1-r
Hence
∫_r( |g|^2/8log1/1-r)^n 1/n! dm ≤1/2^n
Using the monotone convergence theorem and summing up all the integrals, we get the result.
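Explicitly, the n-th term of the exponential series integrates to at most 2^-n (the n=0 term equals 1), so
∫_r𝕋exp{|g(ζ)|^2/8log1/1-r} dm(ζ) ≤∑_n=0^∞1/2^n = 2.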
Theorem 3.3 is a classic result of Salem and Zygmund, as in their original paper <cit.>.
(Salem-Zygmund)
Let F(z) = ∑_k=1^∞c_kz^n_k, z ∈𝔻, be a lacunary power series, i.e. n_k+1/n_k≥ q > 1 for some q >1 and all k ∈ℕ. Let C_N = 1/√(2)(|c_1|^2 + ⋯ + |c_N|^2)^1/2 and let R_N,I_N be the real and imaginary parts of the N-th partial sum respectively. If C_N → +∞ and c_N/C_N → 0, then:
m( { z∈𝕋 : R_N(z)/C_N≤ x , I_N(z)/C_N≤ y }) →1/2π∫_-∞^x ∫_-∞^y e^-1/2(t^2+s^2) dt ds, x,y ∈ℝ.
We will need to work with the modulus of a complex function instead of the real and imaginary part. We thus compute the asymptotic distribution of it.
Let F_N be the partial sum of the series F(z) = ∑_k=1^∞c_kz^n_k, z ∈𝔻. Under the hypotheses of Theorem 3.3 we have that:
m( { z ∈𝕋 : |F_N(z)|/C_N≤ x }) → 1 - e^-x^2/2, x ≥ 0.
Let U_N = R_N/C_N and V_N = I_N/C_N. We will compute the distribution of √(|U_N|^2+|V_N|^2). First of all, for x ≥ 0 we have |U_N| ≤ x |U_N|^2 ≤ x^2, which means that:
m(|U_N|^2 ≤ x^2) 1/√(2 π)∫_-x^x e^- 1/2 t^2 dt = 2/√(2 π)∫_0^x e^- 1/2 t^2 dt
And thus by a change of variables:
m(|U_N|^2 ≤ x) 1/√(2 π)∫_0^x 1/√(t) e^- t/2 dt =
1/√(2 π)∫_-∞^x 1/√(t) e^- t/2_{t ≥ 0} dt
The same can be said about the distribution of |V_N|^2. Since the joint distribution of U_N and V_N is asymptotically Gaussian, the variables are asymptotically independent. That means that we can use the continuous mapping theorem, to compute the asymptotic distribution of |U_N|^2 + |V_N|^2 by convoluting the density functions of each of the summands. Therefore the density of |U_N|^2 + |V_N|^2 will be given by the formula:
ϱ(x) = ∫_-∞^∞1/2π1/√(x-y)1/√(y)_{y ≥ 0 }_{x ≥ y } e^-(x-y)/2 e^-y/2 dy
= e^-x/2/2π∫_0^x 1/√(x-y)1/√(y) dy , x ≥ 0.
We substitute √(y) = u to obtain:
ϱ(x) = e^-x/2/2π∫_0^√(x)2/√(x)1/√(1 - (u/√(x))^2) du, x ≥ 0.
Then we substitute u/√(x) = sin t to get:
ϱ(x) = e^-x/2/2π∫_0^π/22/√(x)√(x)cos t/cos t dt = e^-x/2/2 , x ≥ 0.
What we computed means that:
m( |U_N|^2 + |V_N|^2 ≤ x ) ∫_0^x 1/2 e^-t/2 dt , x ≥ 0.
Hence a final change of variables gives us that:
m ( |F_N|/C_N≤ x ) ∫_0^x t e^-1/2t^2 dt = 1-e^-x^2/2 , x ≥ 0.
We introduce a family of lacunary polynomials which will be the basic building blocks to construct functions that generate invariant subspaces of high index. Let f_s(z) = ∑_m=s^2s z^3^m, s ∈ℕ, be a lacunary polynomial. We associate to every f_s a radius r_s = 1 - 1/3^2s, and a function X_s(r) = 1/√(log1/1-r) f_s(r), 0<r<1.
Let f_s and r_s be as above. There exists a constant C>0 such that:
1/C· s ≤‖ f_s(r_s ·) ‖_L^2(𝕋)^2 ≤ C · s , s ≥ 1.
By Parseval's formula we have:
‖ f_s(r_s ·) ‖_L^2^2 = ∑_m=s^2sr_s^2 · 3^m≤ s+1
On the other hand,
∑_m=s^2sr_s^2 · 3^m≥∑_m=s^2sr_s^2 · 3^2s = ∑_m=s^2s(1 - 1/3^2s)^2 · 3^2s = (1 - 1/3^2s)^2 · 3^2s (2s - s +1).
Since (1 - 1/n)^2n converges to 1/e^2 as n + ∞ we get the reverse inequality.
Let f_s and r_s be as above. There exists a constant c > 0 such that for every ε>0 and every M > 0 there exist arbitrarily large s ∈ℕ with the property that for every 0 ≤ x ≤ M we have
m ( {ζ∈𝕋 : |f_s(r_s ζ)| > x √(log1/1-r_s)}) ≥ e^-cx^2 - ε.
We wish to apply Theorem 3.4. First we notice that:
√(log1/1-r_s) = √(2s log3)≍‖ f_s(r_s ·)‖_L^2→ + ∞.
Following the notation of Theorem 3.4, the above means that:
C_2s = 1/√(2)(|c_1|^2 + |c_2|^2 + ⋯ + |c_2s|^2)^1/2 = 1/√(2)‖ f_s(r_s ·)‖_L^2→ + ∞
Moreover it is clear that all non-zero Taylor coefficients are bounded. This means that both conditions of Theorem 3.4 are satisfied. The fact that we can apply Theorem 3.4 to a lacunary polynomial of this form comes from the fact that the convergence in Salem and Zygmund's Theorem is guaranteed only by the “length” s+1 of the block, and the fact that the lacunary gap on the exponents is at least 3. Finally, the convergence is uniform in x whenever it belongs to a fixed bounded interval.
We finish this section by observing the following property of the functions X_s(r):
For every s ∈ℕ, X_s(r) → 0 , as r → 1^-, and
for every r ∈ (0,1), X_s(r) → 0, as s →∞.
These properties are analogous to (18) and (19). Property (20) is straightforward. For property (21), fix r ∈ (0,1). Then:
X_s(r) = 1/√(log1/1-r)∑_m=s^2sr^3^m≤1/√(log1/1-r)∑_m=s^2sr^3^s = 1/√(log1/1-r) (s+1) r^3^s
And (s+1)r^3^s converges to zero as s ∞.
§.§ Invariant subspaces in _0 of arbitrary index - Construction
We consider the following functions:
f_i(z) = 1 + ∑_j=i^∞δ_j f_i,j(z) , z ∈𝔻, 1 ≤ i < + ∞ ,
where f_i,j are the blocks
f_i,j(z) = ∑_m = s(i,j)^2s(i,j) z^3^m , z∈𝔻,
and δ_j = 2^9 ·√(c/j), with c the constant appearing in Lemma 3.6. Since every function is lacunary and the coefficients δ_j tend to zero, these functions belong to ℬ_0. To each block are assigned its associated parameters:
* i ≥ 1 , j ≥ i set
r(i,j) = 1 - 1/3^2s(i,j)
* i ≥ 1 , j ≥ i , r ∈ (0,1) set
X_i,j(r) = 1/√(log1/1-r)∑_m=s(i,j)^2s(i,j)r^3^m
There exists a sequence {s(i,j)}_1 ≤ i ≤ j < ∞, such that:
(1.i.j) For all i ∈ℕ and j ≥ i we have s(i,j) > max{ 2^4j+4 , s(i',j') : i' < i , j' < j }
(2.i.j) For all i ∈ℕ and j ≥ i and (i',j') ≠ (i,j) we have
X_i,j(r(i',j')) ≤1/δ_j·1/2^i+i'+j+2j'+2
(3.i.j) For all i ∈ℕ and j ≥ i we have
m( {ζ∈𝕋 : |f_i,j(r(i,j) ζ)| > √(1/c · 2^j+6)·√(log1/1-r(i,j))}) ≥ 1 - 1/2^j+5,
where c is the constant appearing in Lemma 3.6.
Set E_i,j = {ζ∈𝕋 : |f_i,j(r(i,j)ζ)| ≥√(1/c · 2^j+6)·√(log1/1-r(i,j))} and F_i,j = 𝕋∖ E_i,j
We define the sequence inductively, respecting the lexicographic order. It is clear that to obtain the above conditions it just suffices to make sure that s(i,j) is large enough at each step. In particular, for condition (3.i.j) we apply Lemma 3.6 with x = √(1/c · 2^j+6), ε = 1/2^j+6 and apply the inequality e^-x > 1 - x, which holds for x<1.
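Indeed, with this choice of x and ε, Lemma 3.6 combined with e^-t > 1-t gives
e^-cx^2 - ε≥ (1 - cx^2) - ε = 1 - 1/2^j+6 - 1/2^j+6 = 1 - 1/2^j+5,
which is exactly the lower bound required in (3.i.j).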
Proof of Theorem 1.5.
As in the proof of Theorem 1.3, we define again E_N = ⋁_1 ≤ i ≤ N[f_i] ⊆ℬ_0 and E_∞ =⋁_1 ≤ i < ∞[f_i] ⊆ℬ_0 where f_i, 1 ≤ i < ∞ are the functions described in the beginning of section 3.2 and the implicit sequence { s(i,j)}_1 ≤ i≤ j < ∞ satisfies the conditions of Proposition 3.7. Following a similar argument as in Theorem 1.3, we fix 1 ≤ M < ∞, 1 ≤ i_1 < i_2 < ⋯ < i_M ∈ℕ and polynomials p_1,p_2, …, p_M, such that ‖∑_m=1^M p_m f_i_m‖_ℬ≤ 1. We wish to apply Lemma 1.2 to the function ∑_m=1^M p_m f_i_m in order to bound the values p_m(0), 1 ≤ m ≤ M, by some constants independent of the polynomials.
Define for every 1 ≤ m ≤ M and every j ≥ i_m the set:
U_i_m,j = {ζ∈ E_i_m,j : |p_m(r(i_m,j)ζ)| ≥ 2^j-1}.
We will implement the following scheme. We assume that for some J > i_M we have the following two inequalities:
m(U_i_m,J) ≤J/2^J+5 , 1 ≤ m ≤ M,
p_m_L^2^J+1(r(i_m,J+1))≤ 2^J+1 , 1 ≤ m ≤ M,
and we will prove that :
m(U_i_m,J-1) ≤J-1/2^J+4 , 1 ≤ m ≤ M,
p_m_L^2^J(r(i_m,J))≤ 2^J , 1 ≤ m ≤ M.
The fact that we can assume (22) and (23) for all J large enough comes from the fact that the polynomials p_m are bounded on the closed unit disc. Given the success of this argument, we may iterate it, starting from some radius close enough to one, until we obtain:
‖ p_m ‖_L^2^i_M(r(i_m,i_M))≤ 2^i_M , 1 ≤ m ≤ M,
which means that |p_m(0)| ≤ 2^i_M for every 1 ≤ m ≤ M. The constant does not depend on the polynomials p_m, so using Lemma 1.2 we will have proven Theorem 1.5.
For the rest of the proof, fix some 1 ≤ n ≤ M and some J > i_M. First we demonstrate how to obtain inequality (25). Let A = {ζ∈𝕋 : |p_n(r(i_n,J)ζ)| ≥ 2^J-1}. Then:
m(A) = m(A ∩ E_i_n,J) + m(A ∩ F_i_n,J)
≤ m(U_i_n,J) + m(F_i_n,J)
≤J/2^J+5 + 1/2^J+5
≤J/2^J·1/2^5 + 1/2^5
≤ 2 ·1/2^5 = 1/16.
We may now use Minkowski's inequality to write:
p_n_L^2^J(r(i_n,J)) = ( ∫_\ A|p_n(r(i_n,J)ζ)|^2^J dm(ζ) + ∫_A|p_n(r(i_n,J)ζ)|^2^J dm(ζ) )^1/2^J
≤( ∫_\ A|p_n(r(i_n,J)ζ)|^2^J dm(ζ) )^1/2^J + ( ∫_A|p_n(r(i_n,J)ζ)|^2^J dm(ζ) )^1/2^J .
Applying Cauchy-Schwartz inequality to the second integral we obtain:
p_n_L^2^J(r(i_n,J))≤( ∫_\ A|p_n(r(i_n,J)ζ)|^2^J dm(ζ) )^1/2^J + √(m(A))·p_n_L^2^J+1(r(i_n,J)).
Using the bound for the polynomial on the set 𝕋∖ A, our hypothesis (23) and the fact that r(i_n,J) < r(i_n,J+1), we get that
p_n_L^2^J(r(i_n,J))≤ 2^J-1 + 1/4· 2^J+1 = 2^J.
Inequality (25) is proven. We proceed to prove inequality (24). By (1.i.j) we see that s(i,j) > s(1,1) ≥ 3 and hence all radii r(i,j) satisfy r(i,j) ≥ 1 - 1/e. We may thus apply Proposition 3.2 for the function ∑_m=1^M p_m f_i_m, on the radius r(i_n,J-1) and get:
∫_r(i_n,J-1)exp{|∑_m=1^M p_m f_i_m|^2/8log1/1-r(i_n,J-1)} dm ≤ 2.
Applying Jensen's inequality for the exponential function yields:
exp{∫_r(i_n,J-1)|∑_m=1^M p_m f_i_m|^2/8log1/1-r(i_n,J-1) dm }≤ 2.
Using the inequality |x + y|^2 = |x|^2 + |y|^2 + 2Re(xy) ≥ |x|^2 + |y|^2 - 2|x||y| and applying it for x = p_nδ_J-1 f_i_n,J-1 and y = ∑_m=1^M p_m f_i_m - p_nδ_J-1 f_i_n,J-1 we obtain:
| ∑_m=1^M p_m f_i_m|^2 ≥
|p_nδ_J-1 f_i_n,J-1|^2 - 2 |p_nδ_J-1 f_i_n,J-1| ·| p_n ( ∑_j ≥ i_nj ≠ J-1δ_j f_i_n,j) + ∑_1 ≤ m ≤ Mm ≠ n p_m ( ∑_ j ≥ i_m δ_j f_i_m,j) |.
Since we are integrating over the radius r(i_n,J-1) we may use (2.i.j) to get the following bound:
| p_n ( 1 + ∑_j ≥ i_nj ≠ J-1δ_j f_i_n,j) + ∑_1 ≤ m ≤ Mm ≠ n p_m ( 1 + ∑_ j ≥ i_m δ_j f_i_m,j) |
≤ |p_n| ( 1 +∑_j ≥ i_nj ≠ J-1δ_j X_i_n,j(r(i_n,J-1)) ·√(log1/1-r(i_n,J-1)))
+ ∑_1 ≤ m ≤ Mm ≠ n |p_m| ( 1 + ∑_ j ≥ i_m δ_j X_i_m,j(r(i_n,J-1)) ·√(log1/1-r(i_n,J-1)))
≤ |p_n| ( 1 +∑_j ≥ i_nj ≠ J-11/2^2i_n + j + 2(J-1) + 2 ·√(log1/1-r(i_n,J-1)))
+ ∑_1 ≤ m ≤ Mm ≠ n |p_m| ( 1 + ∑_ j ≥ i_m 1/2^i_n + i_m + j + 2(J-1) + 2 ·√(log1/1-r(i_n,J-1))).
A short calculation then yields:
| p_n ( 1 + ∑_j ≥ i_nj ≠ J-1δ_j f_i_n,j) + ∑_1 ≤ m ≤ Mm ≠ n p_m ( 1 + ∑_ j ≥ i_m δ_j f_i_m,j) | ≤∑_m=1^M |p_m|( 1 + 1/2^2J+1√(log1/1-r(i_n,J-1))).
As a result:
|∑_m=1^M p_m f_i_m|^2/8log1/1-r(i_n,J-1)≥|p_nδ_J-1 f_i_n,J-1|^2/8log1/1-r(i_n,J-1) - 1/4·|p_n δ_J-1 f_i_n,J-1|/√(log1/1-r(i_n,J-1))·( ∑_m=1^M |p_m|(1/√(log1/1-r(i_n,J-1)) + 1/2^2J+1) ).
Taking into account condition (1.i.j) gives that 1/√(log1/1-r(i_n,J-1))≤1/2^2J+1. Therefore after integration on the circle of radius r(i_n,J-1) and using Cauchy-Schwartz inequality, we obtain:
∫_r(i_n,J-1)|∑_m=1^M p_m f_i_m|^2/8log1/1-r(i_n,J-1) dm ≥
∫_r(i_n,J-1)|p_nδ_J-1 f_i_n,J-1|^2/8log1/1-r(i_n,J-1) dm - 1/4∑_m=1^M 1/2^2J(∫_r(i_n,J-1)|p_nδ_J-1 f_i_n,J-1|^2/log1/1-r(i_n,J-1) dm )^1/2p_m_L^2(r(i_n,J-1)) .
Finally, we notice that because of (1.i.j):
p_m_L^2(r(i_n,J-1))≤p_m_L^2^J(r(i_n,J-1))≤p_m_L^2^J(r(i_m,J))≤ 2^J.
Moreover J > i_M so M/2^J≤ 1. Therefore we obtain the following inequality:
2 ≥exp{∫_r(i_n,J-1)|∑_m=1^M p_m f_i_m|^2/8log1/1-r(i_n,J-1) dm }≥exp(X(X-1)),
where X =(∫_r(i_n,J-1)|p_nδ_J-1 f_i_n,J-1|^2/log1/1-r(i_n,J-1) dm )^1/2. As a result it must be that X ≤ 2, which in turn gives:
∫_r(i_n,J-1)|p_nδ_J-1 f_i_n,J-1|^2/log1/1-r(i_n,J-1) dm ≤ 32.
Restricting ourselves on the set U_i_n,J-1 and using the growth of the block f_i_n,J-1 and the size of the polynomial p_n yields exactly the desired result, i.e:
m(U_i_n,J-1) ≤J-1/2^J+4.
§ WEAK STAR CLOSED INVARIANT SUBSPACES & STABILITY OF INDEX
In this last section we discuss the index of a weak-star (w^∗) closed, invariant subspaces of a Banach space. In the first subsection duality in the Bloch spaces is introduced, to provide a concrete example. Following that we extend several of Richter's results from <cit.> for the weak-star topology, and prove Theorem 1.6. Finally we may apply that to the Bloch spaces and obtain Theorem 1.7.
§.§ Duality in the Bloch spaces
Consider the Bergman space A^1 of integrable functions in the unit disc, as well as the following dual pairings:
⟨· , ·⟩ : ℬ_0 × A^1 →ℂ
⟨ f , g ⟩ = lim_r → 1∫_𝔻f(rz) g(rz) dA(z),
and,
⟨· , ·⟩ : A^1 ×ℬ→ℂ
⟨ g , f ⟩ = lim_r → 1∫_𝔻g(rz) f(rz) dA(z).
It is proven in <cit.>,<cit.>, <cit.> that these pairings are well defined and realize the dualities (ℬ_0)^∗≅ A^1 and (A^1)^∗≅ℬ. We can therefore endow the space ℬ_0 with the weak topology inherited from its dual space, and the space ℬ with the w^∗-topology inherited from its pre-dual.
The w^∗ topology in ℬ can be characterized in terms of nets, as is done in <cit.>. If { f_i }_i ∈ I is a net, then:
f_i ⟶^w^∗ 0 if and only if
f_i(z) → 0 , z ∈𝔻, and
lim sup_i‖ f_i ‖_ℬ < + ∞.
The above statement remains true if we replace w^∗-convergence by weak convergence and the net { f_i }_i ∈ I belongs to ℬ_0. If f ∈ℬ_0 then f is norm-cyclic in ℬ_0 if and only if f is w^∗-cyclic in ℬ (<cit.>). Since polynomials are norm-dense in ℬ_0, the constant function 1 is norm-cyclic in ℬ_0 and so 1 must be w^∗-cyclic in ℬ, or equivalently, polynomials are w^∗-dense in ℬ. In particular, we may consider polynomials belonging to the set { a_0 + a_1z + ⋯ + a_N z^N : N ∈ℕ, a_i ∈ℚ + iℚ}, thus obtaining a countable, norm-dense set in ℬ_0 and hence a countable, w^∗-dense set in ℬ. This means that the space (ℬ, w^∗) is separable.
Our aim is to produce w^∗-closed invariant subspaces of arbitrary index in the Bloch space. Since (, w^∗) is separable, the index of an invariant subspace can be at most countable. In <cit.> and <cit.> it is proven that H^∞ with the norm topology contains invariant subspaces of index equal to the cardinality of the interval [0,1]. Theorem 1.4 is the Bloch space equivalent of that. It is also known that if H^∞ is equipped with the w^∗-topology then Beurling's theorem holds, i.e. for every E H^∞ invariant, then E = ϕ H^∞ for some inner function ϕ (<cit.>). This implies that all invariant subspaces have the index one property, and thus Theorem 1.7 provides a contrasting phenomenon to the situation in H^∞.
We remind the reader of two properties that hold in the Bloch space, and will be used in what follows. If we have a function f ∈ℬ and F' = f, then F ∈ℬ_0 and ‖ F ‖_ℬ≤ C ‖ f ‖_ℬ for some constant C independent of f. The second one, called the “division property”, states that if f ∈ℬ and f(λ)=0 then f/(z-λ) ∈ℬ. In a Banach space of analytic functions (see Lemma 1.2) the division operator R_λ, defined on E_λ = {f ∈ X : f(λ) =0 }, is bounded for any λ∈𝔻, which is also equivalent to saying that the operator M_z - λ is bounded below for any λ∈𝔻 (<cit.>). Next is a useful proposition.
Let M_z : ℬ→ℬ be the shift operator. Then:
* M_z is w^∗ - w^∗ continuous on ℬ
* R_λ is w^∗ - w^∗ continuous for every λ∈𝔻
To prove continuity of M_z, consider a converging net f_i ⟶^w^∗ 0. We need to show that zf_i ⟶^w^∗ 0. Pointwise convergence is obvious. Moreover,
‖ zf_i ‖_ℬ = ‖ M_zf_i ‖_ℬ≤‖ M_z ‖‖ f_i ‖_ℬ, so
lim sup_i‖ zf_i ‖_ℬ≤‖ M_z ‖lim sup_i ‖ f_i ‖_ℬ < + ∞
The two combined guarantee convergence of the net. For the second claim, consider a converging net f_i ⟶^w^∗ f in E_λ. There are functions g_i, g ∈ℬ such that (z - λ )g_i = f_i and (z- λ)g = f. This implies that R_λf_i = g_i and R_λf = g, and thus it suffices to show that g_i ⟶^w^∗ g. This can be deduced by the boundedness of R_λ and by proceeding as for the shift operator.
The above proposition has the following consequence: If E is a w^∗-closed subspace of ℬ, then M_zE is also w^∗-closed. This means that it is meaningful to consider the quotient E / zE for an invariant subspace which is w^∗-closed, and that it will be a well-defined locally convex space.
§.§ Extension of Richter's results - Stability of index
We consider the general situation where X_0, X, Y are Banach spaces of analytic functions satisfying the division property, and that satisfy the following dualities: X_0^∗≅ Y and Y^∗≅ X. We furthermore assume that Proposition 4.1 is true for the space, i.e. M_z and R_λ are continuous with respect to the w^∗-topology in X. We will denote by Lat_X_0(M_z) the lattice of norm closed, invariant subspaces of X_0 and by Lat_X(M_z,w^∗) the lattice of w^∗-closed, invariant subspaces of X. As mentioned above for given ℳ∈ Lat_X(M_z,w^∗), quotients of the form ℳ/zℳ make sense under the above assumptions, and the projection operator onto the quotient space is always continuous. The dimension of ℳ/zℳ is the same as that of ℳ/(z-λℳ) for λ∈, as follows from general properties of the shift operator and Proposition 4.1. If A is any subset of X, we denote by A^w^∗ the w^∗-closure of A in X. This coincides with all the w^∗-limits of nets in A. If f ∈ X, we will write [f]_∗ for the w^∗-closed, invariant subspace generated by f. Moreover, when writing ℳ∨𝒩 for ℳ, 𝒩∈ Lat_X(M_z,w^∗) we will mean the smallest w^∗-closed invariant subspace containing ℳ + 𝒩. Finally, for given ℳ∈ Lat_X(M_z,w^∗) we define 𝒵(ℳ) = {λ∈ | f(λ) = 0 , f ∈ℳ}.
Let ℳ, 𝒩∈ Lat_X(M_z,w^∗). Then:
* ind(ℳ∨𝒩) ≤ind(ℳ) + ind(𝒩),
* If ind(ℳ) = m ≥ 2, with m finite, and n_1 +n_2 = m then there exist 𝒩_1, 𝒩_2 ∈ Lat_X(M_z,w^∗), 𝒩_1, 𝒩_2 ℳ, such that ind(𝒩_i) = n_i and ind(𝒩_1 ∨𝒩_2) = m.
For the first implication, if either ℳ or 𝒩 have infinite index, then we have nothing to show, so assume the index of both is finite. In that case there exist ℳ_1 ⊆ℳ and 𝒩_1 ⊆𝒩, finite dimensional subspaces, such that:
ℳ = zℳ + ℳ_1 , 𝒩 = z𝒩 + 𝒩_1 and,
ind(ℳ) = dim(ℳ_1) , ind(𝒩) = dim(𝒩_1).
Then
ℳ + 𝒩 = z(ℳ+ 𝒩) + (ℳ_1 + 𝒩_1) ⊆ z(ℳ∨𝒩) + (ℳ_1 + 𝒩_1) ⊆ℳ∨𝒩.
Since ℳ_1 + 𝒩_1 is finite dimensional, it is w^∗-closed. The space z(ℳ∨𝒩) + (ℳ_1 + 𝒩_1) is then also w^∗-closed. Indeed, consider the natural projection of X onto the quotient X / z(ℳ∨𝒩). Consider a base h_1, …, h_n of ℳ_1 + 𝒩_1. That map is well defined because z(ℳ∨𝒩) is w^∗-closed and continuous. The space spanned by Ph_1, …, Ph_n is finite dimensional, thus closed in the quotient topology which is also locally convex. But looking at the inverse image we see that:
P^-1(span{Ph_1,…, Ph_n }) = z(ℳ∨𝒩) + (ℳ_1 + 𝒩_1).
By the continuity of P we deduce that this space is w^∗-closed. Since ℳ + 𝒩 is w^∗-dense in ℳ∨𝒩 we get from the above inclusions that
z(ℳ∨𝒩) + (ℳ_1 + 𝒩_1) = ℳ∨𝒩,
and so,
ind(ℳ∨𝒩) = dim(ℳ_1 + 𝒩_1) ≤dim(ℳ_1) + dim(𝒩_1) = ind(ℳ) + ind(𝒩).
To prove the second implication we use a similar argument.
Let ℳ∈ Lat_X(M_z,w^∗) and λ∉𝒵(ℳ). The following are equivalent:
* ind(ℳ)=1,
* If f∈ℳ such that f(λ)=0 then there exists some h∈ℳ such that (z-λ)h = f,
* If (z-λ)h = f ∈ℳ for some h ∈ X then h ∈ℳ.
The equivalence of (2) and (3) is elementary. We will first prove that (1) implies (3).
Let (z- λ)h ∈ℳ for some h ∈ X, and suppose that h ∉ℳ. The function f:= (z-λ)h ∈ℳ satisfies f(λ)=0 but f ∉ (z-λ)ℳ. Hence the equivalence class f̅∈ℳ/(z-λ)ℳ is non zero. Since λ∉𝒵(ℳ) there exists some function g ∈ℳ such that g(λ) ≠ 0, which also means that g ∉ (z-λ)ℳ so g̅≠ 0. Since ind(ℳ)=1 there exists μ∈ℂ∖{ 0} such that g̅ = μf̅. That means precisely that g ∈μ f + (z-λ)ℳ. By evaluating at z=λ we get g(λ) = μ f(λ) +0 = 0, which is contradictory.
To prove that (3) implies (1), suppose that whenever (z-λ)h ∈ℳ for some h ∈ X, we have that h ∈ℳ. Consider f,g ∈ℳ and their respective equivalence classes, f̅,g̅∈ℳ/(z-λ)ℳ. We need to show that they are linearly dependent as vectors and that way conclude that the dimension of the quotient is in fact equal to one. If either f̅ or g̅ are zero then there is nothing to show, so we may assume neither of them are. That in particular means that, thanks to the hypothesis, that f(λ),g(λ)≠ 0. Consider the function g_0(z) = g(z) ·f(λ)/g(λ)∈ℳ. Then f-g_0 ∈ℳ and f(λ) - g_0(λ) = 0. Therefore we can write f-g_0 = (z-λ)h for some h ∈ X, and by the hypothesis that means that h ∈ℳ and thus we conclude that f-g_0 ∈ (z-λ)ℳ. Therefore f-g_0 = 0 f̅-g̅_̅0̅ =0 f̅ = g̅_̅0̅f̅ = f(λ)/g(λ)g̅, and thus the equivalence classes of f and g are linearly dependent.
Let ℳ∈ Lat_X(M_z,w^∗) and λ∉𝒵(ℳ). The following are equivalent:
* ind(ℳ)=1,
* There exists a (not necessarily closed) subspace L ⊆ℳ such that L^w^∗ = ℳ, with the properties that λ∉𝒵(L) and (z-λ)h ∈ L for some h ∈ X implies h ∈ℳ.
Proving that (1) implies (2) is achieved by simply taking L = ℳ and applying Proposition 4.3.
To prove the converse, we will verify condition (3) of Proposition 4.3. Let (z-λ)h ∈ℳ for some h ∈ X. Since L^w^∗ = ℳ, there exists a net { f_i}_i∈ I L such that f_i w^∗ (z-λ)h. In particular by continuity of the evaluation functionals we have that f_i(λ) = k_λ(f_i) k_λ((z-λ)h) = 0. Since λ∉𝒵(L) we can find a g ∈ L such that g(λ)≠ 0. For every i ∈ I we consider the function
g_i(z) = f_i(z) - f_i(λ)/g(λ)g(z) ∈ L.
This new net { g_i }_i∈ I has g_i(λ)=0 for all i ∈ I. That means that there is a net { h_i }_i ∈ I X such that g_i = (z-λ)h_i. But by the hypothesis this means that every h_i ∈ℳ for all i ∈ I. Since f_i w^∗(z-λ)h we get that g_i w^∗(z-λ)h and hence (z-λ)h_i w^∗ (z-λ)h. By continuity of the division operator we get h_i w^∗h. Since h_i ∈ℳ for all i ∈ I, and ℳ is w^∗-closed, we obtain that h ∈ℳ. Condition (3) is therefore satisfied and ind(ℳ)=1.
Let f ∈ X, f ≠ 0. Then ind[f]_∗ = 1.
Since f ≠ 0 there is some λ∈ such that f(λ) ≠ 0. It suffices then to verify condition (2) of Proposition 4.4 by taking L = {pf : p polynomial}.
Let ℳ_1 , ℳ_2 ∈ Lat_X(M_z,w^∗) have the index one property, and let λ∉𝒵(ℳ_1) ∪𝒵(ℳ_2). The following are equivalent:
* ind(ℳ_1 ∨ℳ_2)=1,
* There exist nets {g_i^1 }_i ∈ Iℳ_1,{g_i^2 }_i ∈ Iℳ_2 such that g_i^1(λ) = g_i^2(λ) =1 and g_i^1 - g_i^2 w^∗ 0.
Suppose that ind(ℳ_1 ∨ℳ_2)=1. Since λ∉𝒵(ℳ_1) ∪𝒵(ℳ_2) there are f_1 ∈ℳ_1, f_2 ∈ℳ_2 such that f_1(λ) = f_2(λ) = 1. We have that f_1 - f_2 ∈ℳ_1 ∨ℳ_2 and (f_1-f_2)(λ) = 0 so f_1 - f_2 = (z-λ)h for some h ∈ X. By Proposition 4.3, h ∈ℳ_1 ∨ℳ_2 and since ℳ_1 + ℳ_2 is dense in ℳ_1 ∨ℳ_2 there exists a net { h_i}_i ∈ Iℳ_1 + ℳ_2 with h_i w^∗h. By the definition of ℳ_1 + ℳ_2 we can find nets {h_i^1 }_i ∈ Iℳ_1,{h_i^2 }_i ∈ Iℳ_2 such that h_i = h_i^1 - h_i^2 w^∗h. Then (z-λ)h_i^1 - (z-λ)h_i^2w^∗ (z-λ)h = f_1 - f_2 by continuity of the shift operator. Define the nets:
g_i^1 = (z-λ)h_i^1 + f_1 ∈ℳ_1
g_i^2 = (z-λ)h_i^2 + f_2 ∈ℳ_2 , with
g_i^1(λ) = 0 + f_1(λ) = 1 and g_i^2(λ) = 0 + f_2(λ) = 1
and notice that
g_i^1 - g_i^2 = (z-λ)h_i^1 + f_1 -(z-λ)h_i^2 -f_2 = (z-λ)(h_i^1-h_i^2) +f_1 -f_2 w^∗ (z-λ)h - (z-λ)h = 0,
which gives (2). To prove the contrary we will verify condition (2) of proposition 4.4. To that end, take L = ℳ_1 + ℳ_2 and consider a function h ∈ X with (z-λ)h ∈ℳ_1 + ℳ_2. We will show that h ∈ℳ_1 ∨ℳ_2. We write (z-λ)h ∈ℳ_1 + ℳ_2 as (z-λ)h = f_1 + f_2 with f_i ∈ℳ_i. By the hypothesis there are nets
{ g_i^1 }_i∈ Iℳ_1 , { g_i^2 }_i∈ Iℳ_2 , with
g_i^1(λ) = g_i^2(λ) = 1 and g_i^1 - g_i^2 w^∗ 0.
We write:
f_1(z) = f_1(z) - f_1(λ)g_i^1(z) + f_1(λ)g_i^1(z),
f_2(z) = f_2(z) - f_2(λ)g_i^2(z) + f_2(λ)g_i^2(z).
and notice that:
f_1 - f_1(λ)g_i^1 ∈ℳ_1 and f_1(λ) - f_1(λ)g_i^1(λ) = 0 i ∈ I ,
f_2 - f_2(λ)g_i^2 ∈ℳ_2 and f_2(λ) - f_2(λ)g_i^2(λ) = 0 i ∈ I.
Then there are nets { h_i^1 }_i∈ I,{ h_i^2 }_i∈ I X such that:
f_1 - f_1(λ)g_i^1 = (z-λ)h_i^1 and f_2 - f_2(λ)g_2^1 = (z-λ)h_i^2.
Moreover, by the fact that each of the subspaces is of index one and by Proposition 4.3 we can conclude that in fact h_i^1 ∈ℳ_1 and h_i^2 ∈ℳ_2 for all i ∈ I. Notice as well that f_1(λ)+f_2(λ) = 0 f_1(λ) = -f_2(λ). This permits us to write:
f_1 + f_2 = (z-λ)h_i^1 + f_1(λ)g_i^1 + (z-λ)h_i^2 + f_2(λ)g_i^2 = (z-λ)(h_i^1 + h_i^2) + f_1(λ)(g_i^1-g_i^2).
Since g_i^1 - g_i^2 w^∗ 0 we have that (z-λ)(h_i^1+h_i^2) w^∗ f_1+f_2 = (z-λ)h. Once again by continuity of the division operator we may conclude that (h_i^1+h_i^2) w^∗ h, which gives that h ∈ℳ_1 ∨ℳ_2.
Let ℳ∈ Lat_X_0(M_z) be an invariant subspace of index 1. Then ℳ^w^∗∈ Lat_X(M_z, w^∗) has index 1.
Let λ∉𝒵(ℳ). Then λ∉𝒵(ℳ^w^∗). We set L = ℳ and then L^w^∗ = ℳ^w^∗. Let h ∈ X with (z - λ)h ∈ L = ℳ X_0. Since ℳ has index 1, we deduce from the analog of Proposition 4.3 for the norm topology case that h ∈ℳ. By Proposition 4.4, ind(ℳ^w^∗) = 1.
Let ℳ∈ Lat_X_0(M_z) be an invariant subspace with ind(ℳ) = M, where M is finite or countably infinite. Then ℳ^w^∗∈ Lat_X(M_z, w^∗) satisfies ind(ℳ^w^∗) ≤ M.
By (1) of Proposition 4.2, we may write ℳ = ℳ_1 ∨ℳ_2 ∨⋯∨ℳ_M, where each ℳ_n has index 1. Then ℳ^w^∗ = ℳ_1^w^∗∨ℳ_2^w^∗∨⋯∨ℳ_M^w^∗. By combining Propositions 4.2 and 4.7 we obtain the result.
If ℳ⊆ X_0 is a norm closed and convex set, then ℳ^w^∗∩ X_0 = ℳ.
One inclusion is obvious. For the other inclusion consider h ∈ℳ^w^∗∩ X_0. There exists a net {h_i }_i∈ I⊆ℳ⊆ X_0 such that h_i ⟶^w^∗ h in X. Notice that both the net and its limit belong to the space X_0. By the dualities X_0^∗≅ Y and Y^∗≅ X we know that h_i ⟶^w^∗ h in X is the same as h_i → h weakly in X_0. Hence h belongs to the weak closure of ℳ in X_0. Since ℳ is convex, its weak closure coincides with its norm closure, and as such we deduce that h ∈ℳ, as ℳ is itself norm closed.
We may now prove one of the main theorems of this section.
Proof of Theorem 1.6.
We assume first that M< ∞ and suppose that ind(ℳ^w^∗) < M. Without loss of generality we may assume that 0 ∉𝒵(ℳ). Let {f_1,f_2,…,f_M } be a set of functions whose equivalence classes in ℳ/zℳ form a base. By our assumption, the set {f_1, f_2, …, f_M }ℳ^w^∗/ zℳ^w^∗ has to be linearly dependent, and therefore there exist λ_1,λ_2, …, λ_M not all zero such that:
∑_m=1^Mλ_m f_m = 0 , in ℳ^w^∗/ zℳ^w^∗.
That means that there exists a function h ∈ℳ^w^∗ such that:
∑_m=1^Mλ_m f_m = zh , in X.
It suffices to prove that h ∈ℳ, because that provides a contradiction to the fact that the equivalence classes of the functions f_m , 1 ≤ m ≤ M form a base in ℳ/z ℳ. Notice that (27) actually says that zh ∈ X_0. From the division property on X_0 we deduce that h ∈ X_0. Since h ∈ℳ^w^∗ we get that h ∈ℳ^w^∗∩ X_0. Moreover, ℳ is convex, as it is a linear subspace of X_0. From Lemma 4.9, we obtain that h ∈ℳ.
In the above argument we essentially demonstrated that, given an ℳ∈ Lat_X_0(M_z) and a set of functions f_1,f_2,…,f_M, linearly independent in the quotient space ℳ/zℳ, the same set of functions forms a linearly independent set in the quotient space ℳ^w^∗/ zℳ^w^∗. In the case where M = ∞ we consider an infinite set { f_1, f_2, …}ℳ that forms a base in ℳ/zℳ. Then for any N ∈ the set {f_1, f_2, … f_N } will form a linearly independent set in ℳ^w^∗/ zℳ^w^∗, proving that for every N ∈, ind(ℳ^w^∗) ≥ N, and hence ind(ℳ^w^∗) = ∞.
▪
Note that the above argument works exactly in the same way if M=1, reproving Proposition 4.7. Lastly, we can apply this to prove Theorem 1.7:
Proof of Theorem 1.7.
Let N ∈{2, …, ∞}. By Theorem 1.5 there exist functions f_n ∈ℬ_0, 1 ≤ n < N such that E_N := ∨_n=1^N[f_n] has index N in ℬ_0. An application of Theorem 1.6 for X_0 = ℬ_0, Y= A^1 and X = ℬ yields the result.
▪
ACKNOWLEDGEMENTS.
I would like to thank my advisors Evgeny Abakumov and Alexander Borichev for proposing me the problems and for proof-checking my drafts. I would especially like to thank A. Borichev for sharing with me his rich ideas and valuable techniques.
NIKIFOROS BIEHLER:
Univ Gustave Eiffel, Univ Paris Est Creteil, CNRS, LAMA UMR8050, F-77447 Marne-la-Vallée, France
Email address : [email protected]
|
http://arxiv.org/abs/2409.03365v1 | 20240905091040 | Efficient Multi-Task Large Model Training via Data Heterogeneity-aware Model Management | [
"Yujie Wang",
"Shenhan Zhu",
"Fangcheng Fu",
"Xupeng Miao",
"Jie Zhang",
"Juan Zhu",
"Fan Hong",
"Yong Li",
"Bin Cui"
] | cs.DC | [
"cs.DC",
"cs.LG"
] |
^1Peking University
^2Purdue University
^3Alibaba Group
^1{alfredwang, shenhan.zhu, ccchengff, bin.cui}@pku.edu.cn
^[email protected]
^3{wanglin.zj, zhujuan.zj, hongfan.hf, jiufeng.ly}@alibaba-inc.com
§ ABSTRACT
Recent foundation models are capable of handling multiple machine learning (ML) tasks and multiple data modalities with the unified base model structure and several specialized model components.
However, the development of such multi-task (MT) multi-modal (MM) models poses significant model management challenges to existing training systems. Due to the sophisticated model architecture and the heterogeneous workloads of different ML tasks and data modalities, training these models usually requires massive GPU resources and suffers from sub-optimal system efficiency.
In this paper, we investigate how to achieve high-performance training of large-scale MT MM models through data heterogeneity-aware model management optimization.
The key idea is to decompose the model execution into stages and address the joint optimization problem sequentially, including both heterogeneity-aware workload parallelization and dependency-driven execution scheduling.
Based on this, we build a prototype system and evaluate it on various large MT MM models.
Experiments demonstrate the superior performance and efficiency of our system, with speedup ratio up to 71% compared to state-of-the-art training systems.
Efficient Multi-Task Large Model Training via Data Heterogeneity-aware Model Management
Yujie Wang^1, Shenhan Zhu^1, Fangcheng Fu^1, Xupeng Miao^2, Jie Zhang^3, Juan Zhu^3, Fan Hong^3, Yong Li^3, Bin Cui^1
September 9, 2024
=========================================================================================================================
§ INTRODUCTION
Machine learning (ML) has become an essential tool for understanding and generating knowledge from data and tackling complex tasks for humans. In the past few years, our data management community has put great efforts in developing systems to support the whole ML lifecycle <cit.>, such as data preparation <cit.>, model development <cit.>, model selection <cit.> and model deployment <cit.>.
Recently, with the rapid rise of large-scale foundation models <cit.>, developing these large models has become increasingly challenging due to their substantial GPU resource requirements. For example, designing training systems for models consisting of billions of parameters over dozens of GPUs demands extensive expertise, which has attracted lots of research interest from our community <cit.>. As a result, ML models themselves have become another form of data, and their management techniques are becoming increasingly important <cit.>.
Considering the multi-modal nature of real-world data, ML researchers have shifted their focus to developing models beyond the language domain (e.g., ChatGPT <cit.>) to many other data modalities (e.g., images <cit.>, speech <cit.>, video <cit.>).
The recent extension further involves composite scenarios <cit.>, where models are capable of processing and interpreting data across several tasks simultaneously.
However, existing large model training systems are mainly designed for a single model with only one input data modality. Despite the extensive research and engineering efforts aimed at optimizing these systems from multiple perspectives, including distributed communication <cit.>, memory management <cit.>, and GPU computation <cit.>, their performance is still limited when it comes to handling the increasingly complex requirements of multi-task (MT) multi-modal (MM) models. We identify two unique obstacles when building training systems for MT MM models.
One is the workload heterogeneity due to the divergent data flows across modalities or tasks. On the one hand, MM models often handle data that vary significantly in structure and size, demanding specialized preprocessing and computational approaches. For example, language models (e.g., GPT-family <cit.>, LLaMA-family <cit.>) are usually equipped with dozens of layers with the same configuration (e.g., hidden size), while vision models may involve uneven layers to compute in various resolutions <cit.>. On the other hand, as depicted in Fig. <ref>, multiple tasks usually leverage distinct data flows and activate individual model components, leading to inter-task workload heterogeneity.
Due to such heterogeneous modality data flows and sub-models, different modalities and tasks exhibit distinct execution overhead (detailed in Fig. <ref>, <ref>).
Existing training systems usually overlook such workload heterogeneity and apply sub-optimal training methodologies.
Another is the data flow execution dependency among different model components. Recent MT MM model development usually adopts a sub-model sharing approach <cit.>, where partial model layers containing common knowledge are shared across different modalities and tasks. As shown in Fig. <ref>, each data type also has its own learning component. Within every training iteration, the input data mixed with multiple modalities are simultaneously fed into the sophisticated model, where different model components are intricately activated and updated.
To avoid redundant resource usage, the shared components are usually responsible for the data flows from multiple sources, resulting in execution barriers and blocking the following model layers.
In addition, the proportion of different data modalities in MT workloads may shift over time due to task addition and completion, introducing further training complexity.
To the best of our knowledge, none of the existing training systems can deal with these dependencies efficiently, due to their lack of understanding of MT MM model execution.
To tackle these obstacles, this paper introduces , a resource-efficient and high-performance training system for large-scale MT MM models via data heterogeneity-aware model management.
Considering the workload heterogeneity and execution dependency, a naïve solution is to decouple the model structure based on modality and task, replicate the shared components, and deploy them on separate devices. In this way, each sub-model can be optimized by existing systems, but it also brings significant resource wastage and underutilization, as well as additional overheads from replica synchronization.
As an example, Fig. <ref> showcases that such a naïve, decoupled execution suffers from fluctuating device utilization both intra-task and inter-task due to workload heterogeneity, leading to low or even idle GPU utilization for some time slots.
Instead of decoupling, manages to train the whole complex model directly, without splitting it into disjoint sub-models, to minimize resource usage.
A key insight behind 's design is that heterogeneous and dependent sub-models can be decomposed into several sequentially executed and independent stages, each of which contains multiple parallel model modules with similar execution overheads.
To achieve resource-efficient and high-performance training of MT MM models, there are three key model management challenges for to address. In the following, we will introduce each challenge and how solves them.
C1: Model Parallelization. First, finding the optimal model parallel configuration for heterogeneous workloads with diverse computational characteristics is a complex combinatorial problem. Existing single-model automatic parallelization approaches (e.g., Alpa <cit.>, Unity <cit.>, Galvatron <cit.>) assume a spatial pipeline stage partition, and each operator (Op) is executed by all devices of the corresponding pipeline stage. Unfortunately, such assumptions only work for homogeneous models, failing to adapt to heterogeneous MT MM models.
Instead of solving the parallel configuration directly, captures the workload heterogeneity at the operator granularity and estimates its execution overheads under different amounts of allocated resources and parallel configurations (<ref>). The final configuration decision is left to a later step since it needs to be jointly optimized while considering the execution dependency. also introduces to contract the graph (i.e., fusing consecutive identical operators) to avoid redundant estimation overheads and shrink the problem scale (<ref>).
C2: Model Division. Second, breaking down the whole model into sequentially executed stages is straightforward, but it may easily result in inefficiencies.
Determining the optimal division of stages is complicated since the operators differ significantly in their execution overheads and have intricate operator dependencies.
addresses this problem with two steps: 1) 's resource allocator (<ref>) traverses the computation graph following the dependency topology and decides the optimal resource allocation for in each candidate set (i.e., the currently executable ). Here we reformulate this issue as a malleable project scheduling problem (MPSP) and subsequently derive the optimal solution. 2) After obtaining the parallel configuration of each , the stage scheduler (<ref>) greedily slices and selects to craft compact stages and minimize the overall execution time.
C3: Model Mapping. Third, given the stage-based resource allocation and execution schedule plan, how to map them into physical devices is still a problem,
since different mappings may lead to distinct inter-stage communication overheads and per-device memory consumption.
To further improve the overall system efficiency, carefully considers these trade-offs and the real environment constraints (e.g., inter-device bandwidth, memory capacity) when generating the device placement plan (<ref>).
Our contributions are summarized as follows:
* We present , a high-performance and resource-efficient training system for large-scale MT MM models via data heterogeneity-aware model management optimization.
* We propose a joint optimization framework to achieve heterogeneity-aware workload parallelization and dependency-driven execution scheduling.
* We build a general runtime engine to perform the stage-based schedule, automatically resolving execution dependencies at the stage boundaries.
* We evaluate on various large MT MM models, and the results demonstrate the superior performance and efficiency of compared with the state-of-the-art baselines, with the speedup ratio up to 71%.
§ PRELIMINARY
§.§ Multi-Task Multi-Modal Models
§.§.§ Multi-Modal Application of Foundation Models
The advent of foundation models <cit.> has revolutionized deep learning (DL).
Beginning with the birth of BERT <cit.> and GPT <cit.> based on Transformer <cit.> structure, followers such as the GPT series <cit.>, T5 <cit.>, OPT <cit.>, and the LLaMA series <cit.> have set new benchmarks across a range of language tasks.
Foundation models have also been successfully adapted for tasks of other data modalities, including image processing <cit.>, audio processing <cit.>, and video analysis <cit.>.
Multi-modal models <cit.> leverage these foundation models to integrate information from multiple data modalities. They can be primarily categorized into two types.
The first category fuses modality information via contrastive learning objectives <cit.>, with CLIP <cit.> being a notable example, and ImageBind <cit.> further extending CLIP to six modalities.
These models typically have a multi-tower structure, where each modality has its own encoder. They take paired modality data (e.g., image-text pairs for CLIP), extract features via modality encoders, and perform cross-modal alignment using contrastive objectives.
The second category merges modality features through a language model's generative loss <cit.>. These models usually consist of multi-tower modality encoders and a cross-modal module. Modality encoders extract features from each modality, which are then fed into the cross-modal module for feature fusion.
Recently, with the success of open-sourced large language models (LLMs) <cit.>, researchers have started to enhance multi-modal models with powerful pretrained LLMs <cit.>.
These multi-modal LLMs also fall into the second category.
§.§.§ Multi-Task Multi-Modal Models
Recently, researchers have begun to construct multi-task multi-modal (MT MM) models <cit.>, enabling the support for diverse multi-modal tasks within a unified model.
This is because each modality encompasses various tasks, and each task often involves multiple modalities as well.
The general model structure and the training flow are illustrated in the upper side of Fig. <ref>.
MT MM models reflect researchers' aspiration toward general-purpose AI.
Flamingo <cit.> is among the first to handle multiple vision-language tasks.
OFASys <cit.> proposes a general MT MM learning paradigm,
as shown in Fig. <ref>,
designing distinct modality encoders and cross-modal modules for different tasks and modalities, allowing the activation of different components as required by the task and modality at hand.
For example, speech recognition and image captioning tasks shall activate and share the text encoder but feed the visual- and audio-inputs into different encoders.
Many empirical results have also shown that such a joint multi-task training paradigm achieves better multi-modal capabilities for MT MM models than performing single-task training separately <cit.>.
§.§ Parallelisms in Distributed Training
As model sizes and training data volumes grow, modern DL systems commonly utilize clusters of multiple GPUs for distributed training, thereby enhancing efficiency.
Various parallelisms are employed to manage model parameters or training data in a distributed manner.
Data parallelism (DP) <cit.> splits the input data, with each device handling a portion of the data storage and computation, and synchronizing model gradients across devices.
Model parallelism <cit.> partitions model parameters, with each device responsible for storing and computing a segment of the model. Model parallelisms can be categorized into two popular types: tensor parallelism (TP) partitions the model vertically <cit.>, while pipeline parallelism (PP) <cit.> splits the model horizontally, organizing model computations into a pipeline.
Contemporary distributed training systems, such as Megatron-LM <cit.> and DeepSpeed <cit.>, leverage multiple parallelisms and implement a hybrid parallelism approach for model training. For example, Megatron-LM introduces 3D parallelism, which concurrently utilizes DP, TP, and PP, enhancing training efficiency.
Researchers have also developed advanced automatic parallelism techniques to facilitate the tuning of optimal parallelism combinations.
These automatic parallelism <cit.> approaches integrate multiple parallelism dimensions, employ sophisticated optimization workflows, and automatically determine the most efficient hybrid parallelism strategy, significantly reducing the reliance on human effort.
However, these existing training systems are mainly designed for single-task, single-model training, and their performance is limited in the complex scenario of training MT MM models.
§.§ Cluster Scheduling for DL Jobs
Multi-task multi-modal learning prompts consideration of a related field: cluster scheduling for DL (Deep Learning) jobs. This subsection outlines job scheduling research and contrasts it with multi-task multi-modal learning, aiming for clarity in explaining their differences.
GPU clusters typically provide training services for multiple DL jobs from numerous users. To enhance cluster utilization and the training efficiency, GPU clusters often design cluster schedulers to coordinate resource allocation and the execution order among multiple jobs.
Some cluster schedulers <cit.> allocate resources to jobs based directly on the requirements users specify.
There are also cluster schedulers <cit.> that automatically allocate resources to each job based on the job's scalability with respect to the computing resources. Many of these schedulers aim to minimize job completion time (JCT). For instance, Optimus <cit.> leverages a performance model to predict the job training speed as a function of allocated resources, and introduces the concept of marginal gain to guide resource allocation, aiming to minimize job completion time.
Here, we highlight that the multi-task multi-modal learning scenario targeted by and cluster scheduling represent two distinct scopes of problems:
(1) Despite the superficial similarity, multi-task and multi-job are quite different concepts. In cluster scheduling, there is no inherent connection or dependency between jobs, no need for model sharing, nor any data dependency. In contrast, multi-task multi-modal learning involves dependencies among multiple tasks, where tasks may share and simultaneously update the same part of a model within a single iteration. An example is when an image captioning task and an image classification task share and both update the same vision encoder.
This necessitates a unique approach to scheduling multi-tasks, focusing on iteration-level management.
(2) Multi-task multi-modal learning is characterized by a diverse range of tasks and modalities, each activating different parts of the model, leading to heterogeneous workload distributions both across and within tasks. On the other hand, cluster scheduling typically deals with resource allocation at a broader job level. This coarse-grained method of allocating resources, when applied to the more complex scenario of multi-task multi-modal learning, may result in sub-optimal efficiency.
Nonetheless, even with these differences, we can still draw inspiration from some of the coarse-grained resource allocation strategies in cluster scheduling as guidance.
§ SYSTEM DESIGN
is a highly efficient and scalable training framework designed for MT MM models.
Fig. <ref> depicts its system architecture, comprising the execution planner and the training framework.
Given the diverse user-defined training tasks and the GPU cluster, the goal of is to devise the most efficient execution plan to facilitate effective MT MM training.
§.§.§ Problem Formulation
We formalize the optimization problem of as follows.
Firstly, interprets the input tasks as a unified directed acyclic computation graph 𝒢 = (𝒱, ℰ), where each node i ∈𝒱 represents a computational operator and each edge ⟨ i,j ⟩∈ℰ denotes the data flow from operator i to j.
Each task activates specific operators and parameters with unique data flows.
For instance, a vision-related task activates a vision Transformer layer as an operator, with image features serving as the data flow.
The left side of Fig. <ref> displays an example of a computation graph.
Then, given the computation graph 𝒢 and the GPU cluster with N devices, aims to minimize the maximal operator completion time C.
Specifically, we need to find an execution plan P, which assigns each operator i ∈𝒱 with an AS-tuple ⟨ n_i, s_i ⟩∈𝒰, such that the operator i is Allocated n_i devices and is Scheduled to execute from time s_i.
Here the set 𝒰 = {⟨ n, s ⟩ | n ∈ℕ, s ≥ 0} is formed by all valid AS-tuples. We further denote the execution time of operator i when allocated n_i devices as t_i=T_i(n_i).
Then, the optimization problem is formulated as follows:
\begin{align}
\min_{P=\{i \to \langle n_i, s_i \rangle \,\mid\, i \in \mathcal{V},\ \langle n_i, s_i \rangle \in \mathcal{U}\}} \quad & C = \max_{i \in \mathcal{V}} \{ s_i + t_i \} \\
\text{s.t.} \quad & \sum_{i \in \mathcal{V}:\ t \in (s_i,\, s_i + t_i)} n_i \le N \qquad \forall\, t \in \mathbb{R}^+ \\
& s_i + t_i \le s_j \qquad \forall\, \langle i, j \rangle \in \mathcal{E}
\end{align}
Here (<ref>) is the allocation capacity constraint for any time t, and (<ref>) is the operator dependency constraint.
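To make the formulation concrete, the following minimal Python sketch (our own illustration; the names and data structures are not part of the actual system) checks whether a candidate plan of AS-tuples satisfies the capacity constraint and the dependency constraint above and computes the completion time C, assuming the execution times t_i = T_i(n_i) are already known.

```python
from dataclasses import dataclass

@dataclass
class Assignment:          # AS-tuple <n_i, s_i> plus the resulting duration t_i = T_i(n_i)
    n: int                 # number of allocated devices
    s: float               # scheduled start time
    t: float               # execution time under n devices

def completion_time(plan):
    """C = max_i (s_i + t_i) over all operators."""
    return max(a.s + a.t for a in plan.values())

def is_feasible(plan, edges, num_devices):
    """Check the capacity constraint (at any time, allocated devices <= N) and the
    dependency constraint (an operator starts only after its predecessors finish)."""
    # Capacity: device usage is piecewise constant, so check each interval between events.
    events = sorted({a.s for a in plan.values()} | {a.s + a.t for a in plan.values()})
    for lo, hi in zip(events, events[1:]):
        mid = (lo + hi) / 2.0
        used = sum(a.n for a in plan.values() if a.s < mid < a.s + a.t)
        if used > num_devices:
            return False
    # Dependency: for every edge <i, j>, operator j starts after i ends.
    return all(plan[i].s + plan[i].t <= plan[j].s for i, j in edges)

# Toy example: a two-operator chain on a 4-GPU cluster.
plan = {"enc": Assignment(n=4, s=0.0, t=2.0), "dec": Assignment(n=2, s=2.0, t=3.0)}
print(is_feasible(plan, edges=[("enc", "dec")], num_devices=4), completion_time(plan))
```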
§.§.§ Sketch of Solution
Before stepping into the solution of , we would like to first present an overview for better readability.
First, initiates a graph contraction process (<ref>), contracting the original graph 𝒢 into a MetaGraph 𝒢_M composed of (Fig. <ref>), where each characterizes a unique workload.
This process further decouples into different , ensuring that there are no dependencies among within the same .
Second, the scalability estimator (<ref>) estimates the execution time and resource scalability for each , producing scaling curves (Fig. <ref>).
Following this, the resource allocator (<ref>) deduces the allocation plan for each individually (Fig. <ref>).
Given the allocation plan, the stage scheduler (<ref>) slices the and organizes them into , and produces the -based schedule for execution.
Subsequently, device placement (<ref>) strategies are employed to assign to appropriate devices, resulting in the execution plan (Fig. <ref>).
Finally, the runtime engine (<ref>) utilizes this plan to instantiate the model on each device and facilitate an efficient MT MM training process.
§.§ Graph Contraction
§.§.§ Depicting Workload Heterogeneity with
is designed to minimize the execution time by optimizing resource allocation and scheduling for each operator within 𝒢.
This optimization process necessitates an understanding of the workload characteristics of each operator i ∈𝒱, which can be reflected by its execution time function t_i = T_i(n_i), varying with the number of allocated devices n_i.
Given that 𝒢 typically includes a large number of operators while many of them share similar workload characteristics (such as stacked Transformer layers), initiates a graph contraction process to streamline the complicated graph. It categorizes operators based on their computational workload characteristics, as illustrated in Fig. <ref>.
In this process, operators are contracted into a if they meet the following criteria:
* There is a data flow between operator i and j, i.e., ⟨ i,j ⟩∈ℰ, and both the out-degree of operator i and the in-degree of operator j are 1, ensuring that they are direct predecessors and successors to each other.
* Operator i and j share the same computational operator type, parameter size, and input data size, confirming identical computational workloads.
During the graph contraction procedure, we traverse the original graph 𝒢 in topological order, contracting operators based on the specified criteria until no further pairs of operators meeting these conditions remain. This results in a contracted MetaGraph 𝒢_M = (𝒱_M, ℰ_M), with each node m ∈𝒱_M representing a that consists of L_m consecutive operators in 𝒢.
Since operators in the same share the same workload, we slightly abuse the notation and denote the execution time function for each operator in m as T_m(n).
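The following Python sketch illustrates the contraction criteria above on a toy graph; it is a simplified stand-alone re-implementation for exposition (function and variable names are ours), not the system's actual code.

```python
from collections import defaultdict

def contract_graph(nodes, edges):
    """Contract the computation graph: consecutive operators i -> j are fused into one
    meta-node when i's out-degree and j's in-degree are both 1 and they share the same
    signature (operator type, parameter size, input size).

    nodes: {op_id: signature}; edges: list of (src, dst).
    Returns ({head_op: (signature, num_fused_ops)}, contracted edge list)."""
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    succ, pred = {}, {}
    for u, v in edges:
        out_deg[u] += 1; in_deg[v] += 1
        succ.setdefault(u, v); pred.setdefault(v, u)

    def fuses(u, v):  # contraction criteria for the pair (u, v)
        return out_deg[u] == 1 and in_deg[v] == 1 and nodes[u] == nodes[v]

    head, metas = {}, {}
    for u in nodes:
        p = pred.get(u)
        if p is not None and fuses(p, u):
            continue                      # u is absorbed into an earlier chain head
        chain, v = [u], succ.get(u)       # u starts a new chain; extend while criteria hold
        while v is not None and fuses(chain[-1], v):
            chain.append(v); v = succ.get(v)
        for x in chain:
            head[x] = u
        metas[u] = (nodes[u], len(chain))
    medges = sorted({(head[u], head[v]) for u, v in edges if head[u] != head[v]})
    return metas, medges

# Toy example: a stack of three identical Transformer layers followed by a different head.
nodes = {"l1": "xformer", "l2": "xformer", "l3": "xformer", "head": "mlp"}
edges = [("l1", "l2"), ("l2", "l3"), ("l3", "head")]
print(contract_graph(nodes, edges))
```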
§.§.§ Disentangling Dependency with
To facilitate operator-level resource allocation and scheduling, we further introduce an abstraction called , which signifies the level of dependency. at the same level are independent of each other.
The level of each can be derived by a breadth-first search (BFS), with the level assigned based on the search depth, which inherently ensures no dependency among the of the same level.
By doing so, the problem (<ref>) can be dissected into several simplified sub-problems for different .
Next, we introduce how derives the allocation and scheduling for each individually, and merges them into the final plan.
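A minimal sketch of this level assignment is shown below; it layers the contracted graph by search depth so that nodes on the same level have no mutual dependencies (again, the names are illustrative only).

```python
from collections import defaultdict, deque

def assign_levels(nodes, edges):
    """Assign each meta-node a dependency level: level(v) = 1 + max(level of predecessors).
    Nodes on the same level are independent and can be allocated and scheduled together."""
    in_deg = defaultdict(int)
    succ = defaultdict(list)
    for u, v in edges:
        in_deg[v] += 1
        succ[u].append(v)
    level = {v: 0 for v in nodes if in_deg[v] == 0}      # sources start at level 0
    queue = deque(level)
    remaining = dict(in_deg)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            level[v] = max(level.get(v, 0), level[u] + 1)
            remaining[v] -= 1
            if remaining[v] == 0:                        # all predecessors levelled
                queue.append(v)
    return level

# Example: two modality encoders feed a shared cross-modal module.
print(assign_levels(["vis", "txt", "fusion"], [("vis", "fusion"), ("txt", "fusion")]))
# -> {'vis': 0, 'txt': 0, 'fusion': 1}
```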
§.§ Scalability Estimator
As differ in operator types and/or input data sizes, they characterize heterogeneous workloads and thus necessitate different amounts of resources.
Furthermore, these naturally have distinct resource scalability (i.e., how their execution time varies w.r.t. the amount of allocated resources).
For instance, the left side of Fig. <ref> shows the execution time of different , T_m(n), in Multitask-CLIP (a multi-task extension of CLIP <cit.>; refer to <ref> for details).
Some show almost linear decreases in execution time as resources increase (e.g., Task2-Vision), while others decrease much more slowly (e.g., Task1-Text).
The right side of Fig. <ref> further shows the value of ς_m(n)=T_m(1)/T_m(n), which measures how much the operator accelerates when using more GPUs, and a value of ς_m(n) closer to n signifies better resource scalability.
As can be seen, different not only have varying execution time, but also exhibit different resource scalability, posing a significant challenge for resource allocation.
In response to this issue, employs a scalability estimator to accurately capture the execution time and the resource scalability of each .
Previous works <cit.> have designed effective estimation methods for distributed training, commonly utilizing the αβ modelling <cit.>.
However, although this may work well for homogeneous workloads (e.g., large language models with homogeneous layers), we find that it does not fit the heterogeneous workload nature of MT MM models.
This is because different have distinct workloads and resource scalability, and the invoked kernels may vary across different per-device workloads, thereby causing distinct performance.
In a nutshell, our scalability estimator adopts the piecewise αβ modelling for more accurate estimation of heterogeneous MT MM workloads.
Given the target MT MM model, it profiles several discrete data points (n_i, T_m(n_i)) for each under different parallel configurations, and then fits a piecewise αβ function to them.
To estimate the execution time T_m(n), it locates the range that n falls into, and returns the estimated time according to the corresponding piecewise function.
In practice, the profiling and estimating process for each MT MM model takes within 5 minutes, which is negligible compared to the massive training time.
In Fig. <ref>, the scatter points represent empirical measurements, while the curves depict the function estimated by our scalability estimator, which we denote as scaling curves.
As can be seen, our scalability estimator effectively and accurately estimates the execution time T_m(n) for each .
More details are illustrated in Appendix <ref>.
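For illustration, the sketch below shows one possible way to realize such a piecewise αβ estimator: between two neighboring profiled points the execution time is modeled as T(n) = α + β/n, and queries interpolate within the matching segment. The exact functional form and fitting procedure used by the system may differ; the profiled numbers here are made up.

```python
from bisect import bisect_right

class PiecewiseAlphaBeta:
    """Minimal piecewise alpha-beta cost model (a sketch, not the system's exact estimator).

    Between two neighbouring profiled points (n_k, t_k) and (n_{k+1}, t_{k+1}) the
    execution time is modelled as T(n) = alpha + beta / n, with alpha and beta solved
    from the two points; queries outside the profiled range reuse the border segment."""

    def __init__(self, profile):
        # profile: list of (num_devices, measured_time), e.g. from short profiling runs
        pts = sorted(profile)
        self.ns = [n for n, _ in pts]
        self.segments = []
        for (n0, t0), (n1, t1) in zip(pts, pts[1:]):
            beta = (t0 - t1) / (1.0 / n0 - 1.0 / n1)
            alpha = t0 - beta / n0
            self.segments.append((alpha, beta))

    def __call__(self, n):
        k = min(max(bisect_right(self.ns, n) - 1, 0), len(self.segments) - 1)
        alpha, beta = self.segments[k]
        return alpha + beta / n

# Example: profiled execution times of one model module on 1, 2, 4 and 8 GPUs.
T = PiecewiseAlphaBeta([(1, 10.0), (2, 5.6), (4, 3.4), (8, 2.5)])
print(round(T(3), 3), round(T(6), 3))   # interpolated estimates for unprofiled allocations
```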
§.§ Resource Allocator
In this subsection, we introduce the resource allocator of , which allocates appropriate computational resources to each .
We begin by transitioning the problem (<ref>) into the sub-problem on one .
We then detail our allocation strategies, which first relax constraints and optimize the continuous problem, and then discretize the optimal solution to obtain practical allocation plans.
§.§.§ Problem Formulation on
We first re-formulate the problem (<ref>) on one with a set of denoted by 𝒱_M.
In this formulation, we split each into different execution parts by assigning it several ASL-tuples ⟨ n,s,l ⟩∈𝒰_M, such that l consecutive operators of this are scheduled to execute from time s on n devices.
Here 𝒰_M = {⟨ n, s, l ⟩ | n,l ∈ℕ, s ≥ 0} is formed by all valid ASL-tuples.
For each m ∈𝒱_M, its execution plan is a set of ASL-tuples P_m.
For a , the execution plan P consists of P_m for all m ∈𝒱_M, i.e., P={m → P_m}.
Given m ∈𝒱_M and one ASL-tuple p=⟨ n_m^(p),s_m^(p),l_m^(p)⟩∈ P_m, we denote the execution time span, end time, and time interval by
t_m^(p) = T_m(n_m^(p)) · l_m^(p),
e_m^(p) = s_m^(p)+t_m^(p),
and I_m^(p)=(s_m^(p),e_m^(p)),
respectively.
The problem can be re-written as:
\begin{align}
\min_{P=\{m \to P_m \,\mid\, m \in \mathcal{V}_M,\ P_m \subset 2^{\mathcal{U}_M}\}} \quad & C = \max_{m \in \mathcal{V}_M,\, p \in P_m} \{ e_m^{(p)} \} \\
\text{s.t.} \quad & \sum_{m \in \mathcal{V}_M,\, p \in P_m:\ t \in I_m^{(p)}} n_m^{(p)} \le N \qquad \forall\, t \in \mathbb{R}^+ \\
& I_m^{(p_1)} \cap I_m^{(p_2)} = \emptyset \qquad \forall\, m \in \mathcal{V}_M,\ p_1, p_2 \in P_m,\ p_1 \neq p_2 \\
& \sum_{p \in P_m} l_m^{(p)} = L_m \qquad \forall\, m \in \mathcal{V}_M
\end{align}
Compared with the original problem (<ref>), the sub-problem (<ref>) on gets rid of the dependency constraint, while the constraint (<ref>) enforces the execution intervals of ASL-tuples in P_m to be pairwise disjoint, because operators within the same cannot execute simultaneously, and (<ref>) ensures all operators are executed for each .
§.§.§ Optimum of the Continuous Problem
If we relax the constraints, allowing GPU resources and operators to be continuously divisible (i.e., n and l in ASL-tuples are not limited to integers), the problem is transformed into the well-established malleable project scheduling problem (MPSP) with malleable projects and continuously divisible resources <cit.>.
We denote the optimal solution of this relaxed problem by .
Prior works <cit.> have given the following theorem.
If the execution time functions T_m(n), n∈ℝ^+, are positive and non-increasing for every m∈𝒱_M, then = {m → P_m} satisfies that P_m = {⟨ n_m^∗, 0, L_m⟩}, ∀ m ∈𝒱_M, where the optimum objective C^∗ and allocations n_m^∗ can be found from
T_m(n_m^∗) · L_m = C^∗ for ∀ m ∈𝒱_M
and ∑_m ∈𝒱_M n_m^∗ = N.
From Theorem <ref>, it follows that in the optimal situation, all start simultaneously, execute all their operators, and finish together.
They share an identical end time e_m = C^∗, which is exactly the minimized operator completion time.
To achieve , our allocator utilizes the scaling curves from <ref> to obtain an estimate of T_m(n), and performs a bisection search over C^∗ based on the following equation. The details are illustrated in Appendix <ref>.
∑_{m ∈𝒱_M} T_m^{-1}(C^∗ / L_m) = N.
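The following Python sketch illustrates this bisection procedure under the assumption that each scaling curve T_m(n) is positive and non-increasing; an inner routine numerically inverts the curve, and the outer bisection searches for the smallest completion time C^∗ whose induced allocations fit within N devices. The curves and numbers are illustrative only.

```python
def inverse_time(T, target, n_max, tol=1e-6):
    """Smallest (continuous) device count n in (0, n_max] with T(n) <= target, assuming T is
    positive and non-increasing; returns n_max when even the full allocation is too slow."""
    lo, hi = tol, float(n_max)
    if T(hi) > target:
        return float(n_max)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        hi, lo = (mid, lo) if T(mid) <= target else (hi, mid)
    return hi

def continuous_optimum(curves, layers, num_devices, tol=1e-4):
    """Bisection over the completion time C*: find the smallest C with
    sum_m T_m^{-1}(C / L_m) <= N. curves[m] = T_m(n), layers[m] = L_m."""
    def devices_needed(C):
        return sum(inverse_time(curves[m], C / layers[m], num_devices) for m in curves)

    lo, hi = tol, 1.0
    while devices_needed(hi) > num_devices:   # grow hi until C is achievable with N devices
        hi *= 2.0
    while hi - lo > tol:
        C = (lo + hi) / 2.0
        hi, lo = (C, lo) if devices_needed(C) <= num_devices else (hi, C)
    alloc = {m: inverse_time(curves[m], hi / layers[m], num_devices) for m in curves}
    return hi, alloc

# Example with two modules whose per-operator time roughly follows T(n) = alpha + beta / n.
curves = {"vision": lambda n: 0.2 + 4.0 / n, "text": lambda n: 0.1 + 1.0 / n}
C_star, alloc = continuous_optimum(curves, {"vision": 12, "text": 12}, num_devices=8)
print(round(C_star, 2), {m: round(n, 2) for m, n in alloc.items()})
```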
§.§.§ Bi-point Discretized Allocation
From the continuous problem, we have determined the optimal time C^∗ as well as the optimal allocation n_m^∗ for each , which is generally a real number.
To reinstate n's as integers, our allocator computes each 's proper discrete allocations individually.
For every m, it uses two discrete ASL-tuples ⟨ n̄_m, ·, l̄_m ⟩ and ⟨ n̲_m, ·, l̲_m ⟩ to linearly represent the continuous, optimal solution ⟨ n_m^∗, 0, L_m ⟩ in . To preserve the optimum property of , we require the discretized allocation plan to satisfy the following two conditions:
\begin{align}
L_m &= \overline{l}_m + \underline{l}_m \\
C^{*} &= T_m(\overline{n}_m) \cdot \overline{l}_m + T_m(\underline{n}_m) \cdot \underline{l}_m
\end{align}
Cond. (<ref>) ensures these two discrete ASL-tuples complete the workload of m, and Cond. (<ref>) ensures their total execution time is exactly equal to the minimum operator completion time C^∗ in , thus preserving the optimum property.
Here we first select n̲_m, n̄_m as the closest valid integers such that n_m^∗ ∈ [n̲_m, n̄_m], and l̲_m, l̄_m ∈ ℝ^+ are derived naturally.
For instance, as shown in Fig. <ref>, 2 with n_2^∗ = 1.5, L_2 = 12 in is discretized as n̄_2 = 2, n̲_2 = 1 and l̄_2 = 8.4, l̲_2 = 3.6 in this step.
Here we impose a validity constraint on the allocation n of m for practical reasons.
For instance, if an uses data parallelism, its allocation n should divide its global batch size B_m to avoid resource under-utilization due to uneven partitioning of samples.
For another example, if an uses tensor parallelism or sequence parallelism with degree 2, its allocation n should be divisible by this degree, making n = 3, 5, 7 invalid.
Such a validity constraint ensures that the allocation plan for each is practical.
Specially, allocation with n_m=0 is treated as a dummy allocation (e.g., 3 in Fig. <ref>), which preserves the optimum property of Cond. (<ref>) but will then be ignored.
Then, we reinstate the l's as integers by rounding l̄_m and l̲_m to the nearest integers.
If the rounded l equals 0, this ASL-tuple will be ignored.
This rounding procedure preserves the integrity of Cond. (<ref>) and introduces only minor bias to Cond. (<ref>).
Finally, the discretized ASL-tuples of all form the allocation plan.
Note that the allocation plan only ensures the longest execution time among all is approximately C^∗, yet it does not specify the start time for each ASL-tuple, which is determined by stage scheduler in <ref>.
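A minimal sketch of this bi-point discretization is given below; it assumes the per-operator time function T and a validity predicate for allocations are provided, and it omits the dummy-allocation (n = 0) corner case for brevity.

```python
import math

def bi_point_discretize(T, n_star, L, valid):
    """Split the continuous optimum <n*, 0, L> of one module into two integer ASL-tuples
    <n_hi, ., l_hi> and <n_lo, ., l_lo> with l_hi + l_lo = L and
    T(n_hi) * l_hi + T(n_lo) * l_lo = T(n*) * L (= C*, the continuous completion time).

    T: per-operator execution time (positive, non-increasing);
    valid: predicate for practically valid device counts (e.g. divides the batch size,
    or is a multiple of the tensor-parallel degree)."""
    n_lo = next(n for n in range(math.floor(n_star), 0, -1) if valid(n))
    n_hi = next(n for n in range(math.ceil(n_star), 10**6) if valid(n))
    if n_hi == n_lo:                         # n* is already a valid integer
        return [(n_hi, L)]
    C_star = T(n_star) * L
    # Solve the two conditions for l_hi, then round to an integer layer count.
    l_hi = round((C_star - T(n_lo) * L) / (T(n_hi) - T(n_lo)))
    return [(n, l) for n, l in ((n_hi, l_hi), (n_lo, L - l_hi)) if l > 0]

# Example: n* = 1.5 over L = 12 operators, with every positive integer allocation valid.
T = lambda n: 0.2 + 4.0 / n
print(bi_point_discretize(T, 1.5, 12, valid=lambda n: n >= 1))   # -> [(2, 8), (1, 4)]
```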
§.§ Stage Scheduler
In this subsection, we describe how schedules the execution of guided by the allocation plan generated by the resource allocator.
We first introduce the concept of , which is a scheduling unit of .
Then we introduce our -greedy scheduling algorithm, which schedules the execution of greedily for each .
Finally, the operator dependencies among are reinstated by merging the -based schedules together.
§.§.§ Definition of
It is worth noting that, although Theorem <ref> implies that all share the same start and end time in the continuous form, this property does not hold after the discretization process.
The reason is that the execution time of ASL-tuples may vary, or the resources are insufficient to execute all tuples concurrently.
To cope with this problem, we devise a fine-grained scheduler that slices the and selects a few of them to execute concurrently, where the slicing and selection aim to ensure that (1) as many devices as possible are occupied, and (2) their execution times are as close as possible.
To ease the description, we define as the scheduling unit, which corresponds to one concurrent execution as aforementioned.
Next, we introduce our greedy algorithm that crafts the to form the scheduling plan.
§.§.§ -greedy Scheduling
As outlined in Alg. <ref>, the scheduler iteratively crafts in a greedy manner.
Below we introduce how one is crafted with Fig. <ref> as an example.
* First, the scheduler greedily proposes ASL-tuples to form a candidate set, aiming to utilize as many devices as possible (line 3). For instance with Fig. <ref>, the scheduler proposes the first ASL-tuple of 1 to craft 1 since it occupies all devices. Similarly, for 2, it proposes the ASL-tuples of 1, 2, and 4, which correspond to 4, 2, 2 devices, respectively, in order to make full use of all devices.
* If the candidate set fails to occupy all devices, the cluster resources will be underutilized. To address this issue, we extend the allocated resources in specific tuples to ensure all devices are utilized (line 4).
For instance, in 4 of Fig. <ref>, the allocation of 4 is extended from 1 device to 2 devices.
Resource extension is prioritized for with larger remaining execution time, with the hope of balancing the remaining workload among the .
* In most cases, the proposed ASL-tuples differ in execution time. If we directly craft a with them, it would be inefficient since there must be idle devices.
Fortunately, this can be avoided by dissecting the ASL-tuples to align their time spans (i.e., only a subset of the operators in the are scheduled in this ).
For instance, in 2 of Fig. <ref>, the proposed ASL-tuples for 1, 2, and 4 correspond to 9, 14, and 3 operators, respectively. To align the execution time, the ASL-tuples for 1 and 2 are dissected, with only 1 and 2 operators of them being scheduled, while the remaining 8 and 13 operators left to be scheduled in subsequent .
Our scheduler simply aligns the time span w.r.t. the ASL-tuple with shortest execution time (e.g., the one for 4 in the previous example), and computes the aligned time span as the duration of current stage (line 5).
* After the time span alignment, the scheduler concludes the current (lines 6-7), including specifying the start time for operators that are scheduled in this , and removing them from the remaining set.
The scheduler aims to schedule the execution start time s for each ASL-tuple of in the allocation plan generated in <ref>.
performs scheduling stage by stage, as outlined in Alg. <ref>.
We first initialize a candidate set cand with all ASL-tuples in the allocation plan.
For each stage, we select a subset of tuples from cand to create a working set work (lines 3-4), aiming to utilize all N devices as fully as possible, and schedule the corresponding for execution in current stage.
However, as the execution time of these ASL-tuples vary, when some complete, the resource allocation must be dynamically adjusted to prevent idle devices, thereby avoiding resource wastage.
Therefore, we define the duration of the current stage T_stage by the shortest execution time among the ASL-tuples in work; the stage concludes once any tuple finishes (line 5).
Each tuple in work is split along its operators into a scheduled tuple, which joins sched and aligns its execution time to T_stage, and an unscheduled tuple, which joins unexe and is returned to cand for scheduling in subsequent stages (line 10).
For instance, in Fig. <ref>, when scheduling stage 2, tuple ⟨ 2, ·, 3⟩ for 4 determines the stage duration, and the tuple ⟨ 2, ·, 14⟩ for 2 is split to ⟨ 2, ·, 2⟩, which aligns the current stage duration, and ⟨ 2, ·, 12⟩, which will be scheduled later.
This mechanism ensures that all the scheduled ASL-tuples in each stage stops simultaneously as much as possible to maintain workload balance.
Notably, if the working set fails to fully occupy all N devices, the cluster resources will be underutilized. Therefore, we apply a greedy approach when selecting work, prioritizing tuples with larger allocation n (line 3), which allows tuples with smaller n to make more efficient use of the device fragments left by larger ones.
Furthermore, we also perform a resource extension procedure to deal with the device fragments, which extends the allocation amount for certain ASL-tuples to fully utilize all devices (line 6-7).
For example, in Fig. <ref> stage 4, the allocation of tuple of 4 is extended from 1 GPU to 2 GPUs.
Specifically, we greedily select the tuples in sched with larger normalized remaining time (line 6), NormRT_m = T_m(n_m^∗) · l_remain, for resource extension, which keeps the balance of the remaining workload of each to avoid resource wastage.
Here n_m^∗ is the optimal allocation for m in , and l_remain is the number of its remaining operators.
Refer to Appendix <ref> for details.
During the extension of tuple p, if necessary, additional operators can be transferred from the same ’s ASL-tuples in unexe to align its execution time with T_stage.
After resource extension, the schedule k in current stage is finalized (line 8-9), and the scheduler updates the candidate set and the start time for the next stage (line 10-11).
When the cand is empty, the scheduler returns the stage-based schedule as output, consisting of multiple consecutive stages.
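The sketch below condenses the stage-greedy scheduling described above into a small, self-contained Python routine. It packs tuples by descending device count, ends each stage when the shortest tuple finishes, and returns the unscheduled operators to the candidate pool; the resource-extension step and the full constraint bookkeeping of the actual algorithm are omitted, and all numbers are illustrative.

```python
from dataclasses import dataclass, replace

@dataclass
class ASLTuple:          # one ASL-tuple of a module: n devices, l operators, per-operator time t
    mod: str
    n: int
    l: int
    t: float

def stage_greedy_schedule(alloc_plan, num_devices):
    """Craft stages greedily from the allocation plan (a list of ASL-tuples)."""
    cand, stages, clock = list(alloc_plan), [], 0.0
    while cand:
        # 1) greedily select concurrent tuples, larger allocations first,
        #    never running two tuples of the same module at once
        work, free = [], num_devices
        for tup in sorted(cand, key=lambda x: -x.n):
            if tup.n <= free and all(w.mod != tup.mod for w in work):
                work.append(tup); free -= tup.n
        # 2) stage duration = shortest remaining execution time in the working set
        T_stage = min(w.t * w.l for w in work)
        # 3) split each working tuple along operators to align with the stage duration
        sched = []
        for w in work:
            l_run = min(w.l, max(1, round(T_stage / w.t)))
            sched.append((w.mod, w.n, l_run, clock))
            cand.remove(w)
            if w.l - l_run > 0:                       # leftover operators go back to the pool
                cand.append(replace(w, l=w.l - l_run))
        stages.append({"start": clock, "duration": T_stage, "sched": sched})
        clock += T_stage
    return stages

# Example: three modules on 4 GPUs, after bi-point discretization of their allocations.
plan = [ASLTuple("vision", 4, 6, 1.0), ASLTuple("text", 2, 8, 0.5), ASLTuple("audio", 2, 4, 1.5)]
for s in stage_greedy_schedule(plan, num_devices=4):
    print(round(s["start"], 1), round(s["duration"], 1), s["sched"])
```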
§.§.§ Merging
As stated in <ref>, are decoupled into to disentangle operator dependencies. invokes the aforementioned allocation and scheduling for each individually, and merges their -based schedules together as the final execution schedule.
§.§ Device Placement
Given the -based schedule, which consists of the allocation amount and the execution time of each , we now discuss how determines the specific devices to allocate to each , known as device placement.
Device placement affects the inter-communication overhead, as well as the memory consumption of each device.
employs several guidelines based on empirical insights or observations to optimize device placement for , as detailed below.
§.§.§ Intra-Device-Island Placement
Placement within a device island is always preferred for each and each data flow between .
A device island consists of a group of devices connected by high-bandwidth interconnects (e.g., NVLink, PCIe), typically comprising adjacent devices, such as adjacent GPUs within one node.
For , prioritizing placement within the device island reduces the potential intra-communication costs.
For example, on a cluster with two GPUs per node, it is more efficient to place a on a contiguous device group such as GPUs 0 and 1 within one node rather than on a group scattered across two nodes.
For data flow between across , intra-island placement reduces transmission costs leveraging the high intra-island bandwidth or even faster intra-device copying.
For example, if data flow exists between m and m^', and they are assigned 1 and 2 GPUs, respectively, strives to place m on GPU 1 and m^' on GPU 0 and 1.
This arrangement allows data flow via intra-island communication (i.e., 1 to 0) or intra-device copying (i.e., 1 to 1), avoiding the inter-island communication costs that would occur if m were on device 1 and m^' on device 2 and 3 on the other node.
§.§.§ Prioritizing High Communication Workloads
When the ideal scenarios outlined above are not achievable, i.e., when it is infeasible to place all within a device island or to align all data flows on the same device group, and data flows with higher communication volumes should be prioritized.
estimates the communication volume of each and each data flow to prioritize placing those with higher volumes within a device island and aligning high-volume data flows on the same device group.
For instance, in Fig. <ref>, the data flow volumes between the red and blue are significantly higher than those between the yellow ones. Therefore, prefers to place the data flows between the red and blue ones within a device island, while placing the data flows between the yellow ones across islands.
This guideline ensures that the most communication-intensive components receive the most efficient hardware configuration to minimize communication overhead.
§.§.§ Device Memory Balance
As each device holds heterogeneous , the memory overhead varies across devices.
Placing too many memory-intensive on a single device may cause out-of-memory errors.
Therefore, actively strives to balance the memory load across all devices during device placement.
Specifically, estimates the memory consumption of each , records the available memory capacity of each device during placement, and prioritizes placing to the devices with the highest available memory capacity.
Besides, for sharing the same parameters, we prioritize placing them on the same device to minimize redundant storage.
Based on these guidelines, performs device placement greedily, prioritizing the minimization of communication overhead, such as inter-transmission, while simultaneously maintaining device memory balance.
When out-of-memory errors occur due to imbalanced placement, considers alternative placements with sub-optimal communication costs but better memory balance. If necessary, backtracking is employed to adjust earlier placements and resolve the out-of-memory issues.
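As a rough illustration of these guidelines, the sketch below places modules greedily by communication volume, preferring contiguous device groups inside one island and the devices with the most free memory; the backtracking step and data-flow co-location described above are left out, and all names and capacities are hypothetical.

```python
def place_modules(modules, islands, mem_capacity):
    """Greedy device placement sketch: modules with higher communication volume are placed
    first, preferably on a contiguous device group inside one island (node), on the devices
    with the most free memory."""
    free = {d: mem_capacity for isl in islands for d in isl}
    placement = {}
    for name, n, mem, comm in sorted(modules, key=lambda m: -m[3]):   # high comm volume first
        best = None
        for isl in islands:                     # candidate contiguous groups within one island
            for i in range(len(isl) - n + 1):
                group = isl[i:i + n]
                if all(free[d] >= mem for d in group):
                    score = min(free[d] for d in group)   # prefer groups with more free memory
                    if best is None or score > best[0]:
                        best = (score, group)
        if best is None:                        # fall back: pick the n globally freest devices
            group = sorted(free, key=free.get, reverse=True)[:n]
        else:
            group = best[1]
        for d in group:
            free[d] -= mem
        placement[name] = group
    return placement

# Example: two nodes with 2 GPUs each; (name, #devices, per-device memory, comm volume).
modules = [("vision", 2, 30, 100), ("text", 1, 20, 80), ("audio", 1, 20, 10)]
islands = [["node0:gpu0", "node0:gpu1"], ["node1:gpu0", "node1:gpu1"]]
print(place_modules(modules, islands, mem_capacity=40))
```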
§.§ Runtime Engine
The runtime engine is responsible for running the execution plan to facilitate efficient multi-task multi-modal training.
This process is more complex than conventional single-task training, as each device handles heterogeneous and local computation graphs.
The runtime engine operates in four main steps:
* Localization.
Initially, localizes the execution plan to each device.
Specifically, each device instantiates the corresponding of each locally, and initializes the required model components and parameters.
* Intra-task Data Dependency.
Secondly, inserts transmission operators to connect the across to handle the data flow dependencies, including activations from the forward pass and gradients from the backward pass.
According to the device groups of and data format requirements, operations such as copy, shard, concat, send, and receive are used to transmit data flows with minimal overhead.
For example, a simple copy is sufficient for that share the same device group.
However, for complicated cases, more complex send and receive operations are necessary to transmit the data appropriately.
This step not only correctly handles data flow dependencies between but also links the on each device into a complete local computation graph ready for execution.
* Inter-task Model Dependency.
Then, manages parameter device groups for synchronization among various tasks by maintaining a global parameter device group pool.
Specifically, during each iteration, for each parameter W_j, all tasks or modalities that activate it on different devices contribute to its gradient computation.
These gradients need to be accumulated and synchronized to facilitate parameter sharing.
Therefore, before the training process, scans all devices to determine the device group D_i for each parameter W_j, which indicates that W_j is shared and should be synchronized within group D_i.
For efficiency, manages parameters with the same device group collectively and maintains a global parameter device group pool {D_i→{W_j}}, where each device group D_i corresponds to a set of parameters {W_j}.
* Training Step.
After the first three steps, the training process is ready to begin. In each iteration of , each device executes the forward and backward propagation of the local computation graph in a -by-manner, which is comprised of the interleaved execution of and transmission of data flow.
Following the forward and backward phases, performs group-wise parameter synchronization to maintain the parameter consistency.
Specifically, each parameter set {W_j} is synchronized within its corresponding device group D_i in the parameter device group pool.
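The sketch below illustrates the parameter device-group pool and the group-wise synchronization described above. It is a simplified stand-in rather than the system's actual implementation: it assumes torch.distributed has already been initialized and that every rank calls the synchronization routine collectively; in practice the process groups would be created once and cached.

```python
from collections import defaultdict

import torch
import torch.distributed as dist

def build_param_group_pool(param_locations):
    """Build the global parameter device-group pool {device_group -> [param names]}.
    param_locations: {param_name: set of ranks that hold (and update) this parameter},
    e.g. collected by scanning the local computation graph on every device."""
    pool = defaultdict(list)
    for name, ranks in param_locations.items():
        pool[tuple(sorted(ranks))].append(name)       # params with identical groups are batched
    return dict(pool)

def sync_shared_gradients(pool, named_params):
    """Group-wise gradient synchronization after forward/backward (assumes torch.distributed
    is initialized and every rank calls this collectively)."""
    # Process groups must be created by all ranks in the same order; cache them in practice.
    groups = {ranks: dist.new_group(list(ranks)) for ranks in pool}
    my_rank = dist.get_rank()
    for ranks, param_names in pool.items():
        if my_rank not in ranks:
            continue
        for name in param_names:
            grad = named_params[name].grad
            dist.all_reduce(grad, op=dist.ReduceOp.SUM, group=groups[ranks])
            grad /= len(ranks)                        # average the shared gradient

# Example pool: the text encoder is shared by ranks {0, 1}, the vision encoder lives on rank 2.
print(build_param_group_pool({"text_enc.w": {0, 1}, "text_enc.b": {0, 1}, "vision_enc.w": {2}}))
```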
§ IMPLEMENTATION
is an efficient and scalable MT MM training system built on PyTorch with 10K LoC in Python: 2.1K LoC for the execution planner and 7.9K LoC for the runtime framework.
We implement the data flow transmission with NCCL batched P2P primitives and the parameter device groups with NCCL communication groups.
provides users with a simple, user-friendly, and flexible API for defining MT MM training workloads.
Specifically, training tasks in are represented as Task, and users can define various multi-modal tasks by customizing PyTorch modules and connecting them flexibly through the add_flow API in .
For example, a user can create a vision task by linking a vision encoder with a language model, or an audio task by linking an audio encoder with a language model.
Alternatively, users can also define different computational logic for various tasks implicitly within a single unified model.
can automatically split the modules and construct Tasks via PyTorch FX Tracer, streamlining the process of task definition.
After the definition of multi-modal tasks, conducts the optimization workflow automatically, as illustrated in Fig. <ref>, and the runtime engine provides efficient and scalable model training process.
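To give a flavor of the API, the sketch below declares two tasks that share a text encoder while feeding different modality encoders into it. Since the exact signatures of Task and add_flow are not shown here, the Task class below is a hand-written stand-in used purely for illustration, not the system's real API.

```python
import torch.nn as nn

# Stand-in for the Task / add_flow API described above; the real signatures are assumptions.
class Task:
    def __init__(self, name):
        self.name, self.flow = name, []

    def add_flow(self, src: nn.Module, dst: nn.Module):
        self.flow.append((src, dst))      # record the data flow from src module to dst module
        return self

# Shared and modality-specific components (tiny toy modules for illustration only).
text_encoder   = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model=64, nhead=4), num_layers=2)
vision_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
audio_encoder  = nn.Sequential(nn.Linear(128, 64), nn.ReLU())

# Two tasks share the text encoder but feed different modality encoders into it.
captioning = Task("image_captioning").add_flow(vision_encoder, text_encoder)
asr        = Task("speech_recognition").add_flow(audio_encoder, text_encoder)
print([t.name for t in (captioning, asr)])
```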
§ EXPERIMENTS
§.§ Experimental Setups
§.§.§ Competitors.
We evaluate the efficiency of by comparing it with state-of-the-art distributed training systems, Megatron-LM <cit.> and DeepSpeed <cit.>.
As discussed in <ref>, these systems are primarily developed for single-task training and do not cater specifically to the complexities of multi-task multi-modal training scenarios.
To further explore the advantages of 's flexible resource allocation and scheduling capabilities, we introduce several baselines implemented on that represent typical strategies for multi-task training.
The features of these competitors are summarized in Table <ref>.
(1)&(2) Megatron-LM & DeepSpeed:
Megatron-LM <cit.> and DeepSpeed <cit.> are widely used state-of-the-art training systems tailored for single-task training.
The naïve approach to train MT MM models on these systems is to decouple all sub-models on separate devices (<ref>), which requires plenty of resources and is impractical.
Therefore, we decouple sub-models along the temporal dimension within each iteration, where each sub-model takes up the whole cluster for a short time period and is executed sequentially.
(3) -Seq:
This baseline, built on our system, allocates all available devices to each task and executes tasks sequentially within each iteration, similar to Megatron-LM and DeepSpeed.
It reflects the performance of our system without specific optimizations for MT MM workloads.
(4) -Uniform:
This baseline demonstrates a basic, workload-unaware task-level resource allocation strategy for multi-task multi-modal training.
It allocates available devices uniformly to each task, and executes each task in parallel within each iteration.
(5) -Optimus:
This baseline represents a workload-aware task-level resource allocation strategy, which adapts allocations according to the workload at the task level granularity.
It's inspired by Optimus <cit.>, an effective cluster job scheduling system which proposes a greedy resource allocation scheme and iteratively assigns devices to the job that has the largest marginal gain.
Despite the differences between job scheduling and multi-task training (<ref>), we apply a similar principle and define the marginal gain as (T_m(n) - T_m(n'))/(n' - n), i.e., the task-completion-time reduction scaled by the allocation increment from n to n'.
Here n' is the next valid allocation number larger than n.
This baseline is aware of inter-task heterogeneity but unaware of intra-task heterogeneity; a sketch of this greedy allocation scheme is given after this list.
(6) -STMM:
This baseline represents a naïve multi-task (MT) extension of single-task (ST) multi-modal (MM) model training systems.
It decouples multi-tasks, and for each single MM task it allocates resources to different modality encoders, akin to DistMM <cit.>, a recent system designed for ST MM models.
Then it executes tasks sequentially.
Contrary to -Optimus, -STMM is aware of intra-task workload heterogeneity but unaware of inter-task heterogeneity.
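As referenced above, the following is a minimal Python sketch of the greedy, marginal-gain-driven allocation underlying the -Optimus baseline; time_fn(m, n) is assumed to return the estimated completion time of task m on n devices and valid_allocs[m] to list its admissible device counts. It illustrates the principle only, not the baseline's actual implementation.

def greedy_allocate(num_devices, tasks, time_fn, valid_allocs):
    # Start every task at its minimal feasible allocation.
    alloc = {m: min(valid_allocs[m]) for m in tasks}
    free = num_devices - sum(alloc.values())
    while free > 0:
        best, best_gain = None, 0.0
        for m in tasks:
            larger = [n for n in valid_allocs[m] if n > alloc[m]]
            if not larger:
                continue
            n, n_next = alloc[m], min(larger)
            if n_next - n > free:
                continue
            # Marginal gain: completion-time reduction per extra device.
            gain = (time_fn(m, n) - time_fn(m, n_next)) / (n_next - n)
            if gain > best_gain:
                best, best_gain = (m, n_next), gain
        if best is None:
            break  # no task can profitably use the remaining devices
        m, n_next = best
        free -= n_next - alloc[m]
        alloc[m] = n_next
    return alloc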
§.§.§ Experimental Workloads
We conduct experiments on three different workloads of MT MM models, namely Multitask-CLIP <cit.>, OFASys <cit.>, and QWen-VAL <cit.>.
The configurations of these models are summarized in Table <ref>.
(1) Multitask-CLIP:
Multitask-CLIP is a generalized version of CLIP <cit.>, which extends CLIP to 6 modalities and multiple contrastive learning tasks of paired data modalities.
We utilize the same model structure and configuration of ImageBind <cit.>.
We select 10 different contrastive learning tasks for evaluation, each with distinct workloads.
(2) OFASys:
OFASys<cit.> is a more general MT MM training workload, allowing modalities and tasks to activate the model components flexibly as needed.
OFASys utilizes modality-specific adaptors for different modalities, e.g., ViT for vision data, and adopts a unified encoder-decoder LM with generative loss.
We select 7 different multi-modal tasks for evaluation.
(3) QWen-VAL:
QWen-VAL is a larger-scale MT MM model with up to 9.25 billion parameters, supporting three modalities, including text, vision, and audio.
It adopts the same structure and configuration as the popular open-source multi-modal LLMs, QWen-VL <cit.> and QWen-Audio <cit.>.
It has modality encoders for audio and vision, and the extracted modality-specific features are combined with text tokens and together fed into the unified LLM, QWen <cit.>.
We select three tasks for evaluation, i.e., vision-language (VL) task, audio-language (AL) task, and vision-audio-language (VAL) task, representing different combinations of modalities.
§.§.§ Protocols
We conduct all the experiments on an 8-node GPU cluster. Each node consists of 8 NVIDIA A800 80 GB GPUs equipped with NVLink, and the nodes are interconnected by 400 Gbps InfiniBand network.
Since the baseline systems do not support automatic planning for a targeted MT MM model training workload, to achieve a fair comparison we manually tune their parallel configurations and memory optimization techniques (e.g., data parallelism degree, tensor parallelism degree, ZeRO stage, activation checkpointing, etc.) to achieve the best performance.
For each system on each workload, we evaluate the system performance on different cluster sizes and report the iteration time averaged over 100 iterations.
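For reference, the iteration times reported below can be measured with a simple routine like the following Python sketch (not part of the evaluated systems), which averages wall-clock time over 100 iterations after a warm-up and synchronizes the GPU so that asynchronous kernels are fully accounted for; train_one_iteration is a placeholder for one full training step.

import time
import torch

def average_iteration_time(train_one_iteration, n_iters=100, warmup=10):
    for _ in range(warmup):
        train_one_iteration()          # warm up caches, allocator, NCCL, ...
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        train_one_iteration()
    torch.cuda.synchronize()           # include pending asynchronous GPU work
    return (time.perf_counter() - start) / n_iters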
§.§ End-to-End Performance
Fig. <ref> displays end-to-end comparisons between and baseline systems across various model workloads, multi-modal task configurations, and cluster sizes.
§.§.§ Comparison with SOTA systems.
In general, compared to the state-of-the-art (SOTA) training systems, i.e., Megatron-LM and DeepSpeed, our system achieves speedup ratios of up to 67% and 71%, respectively.
Below we delve into its performance advantages.
To begin with, our system consistently outperforms the competitors across different task configurations and numbers of tasks, and notably excels when handling a larger number of tasks.
On the 10-task Multitask-CLIP workloads, it achieves speedup ratios ranging from 31% to 63% over Megatron-LM and from 33% to 71% over DeepSpeed.
Similar results are shown for the 7-task OFASys workloads, with the speedup ratios ranging from 31% to 67% and 33% to 71%, respectively.
This underscores the system's excellent scalability with increasing task counts.
In addition, it consistently achieves optimal performance across various cluster sizes.
For instance, on Multitask-CLIP, compared to SOTA systems, our system achieves the highest speedup ratios of 37%, 33%, and 71% on 8, 16, and 32 GPUs, respectively. Similarly, on OFASys, it achieves acceleration ratios of up to 71%, 46%, and 51% on 8, 16, and 32 GPUs, respectively. These results highlight its excellent scalability w.r.t. cluster size. Notably, it maintains high efficiency even when the scalability of SOTA systems begins to diminish — that is, when the increase in resources does not correspond to significant speed improvements. For example, in the 4-task Multitask-CLIP scenario, expanding the cluster size from 16 to 32 GPUs results in only modest speedups of 1.21× and 1.17× for Megatron-LM and DeepSpeed, respectively, whereas our system still achieves a 1.45× speedup. This efficiency stems from its carefully designed resource allocation and scheduling mechanisms.
Unlike existing systems that naively allocate all resources across all operators and tasks, our system ensures that each operator is allocated suitable resources when the cluster size increases, so as to maintain high computational efficiency.
More importantly, it also exhibits excellent scalability w.r.t. model size.
On the larger QWen-VAL model with 9.25 billion parameters, it achieves a maximum speedup of 1.16× on 32 GPUs and 1.63× on 64 GPUs compared to SOTA systems.
Notably, when training QWen-VAL over 64 GPUs, our system shows remarkable scalability: it achieves a 1.78× speedup when scaling from 32 to 64 GPUs, whereas Megatron-LM and DeepSpeed only achieve 1.27× and 1.26× speedups, respectively.
This is unsurprising, since our system allocates cluster resources across different operators more flexibly, thereby avoiding the unsatisfactory scalability under light workloads discussed in <ref>.
§.§.§ Comparison with other baselines
Next, we discuss the performance of the variants of .
Since -Seq has a comparable performance against Megatron-LM and DeepSpeed in most cases — which is reasonable as all three counterparts execute tasks sequentially — we focus on the comparison with task-level resource allocation strategies, i.e., -Uniform and -Optimus, as well as the single-task strategy, i.e., -STMM.
We find that the workload-unaware uniform allocation of -Uniform performs well only in limited scenarios, achieving a maximum speedup ratio of 28% over DeepSpeed.
This suggests that resource allocation can enhance computational efficiency to some extent.
However, it generally underperforms compared to SOTA systems due to its tendency to distribute resources evenly, leading to unbalanced workloads across tasks and system performance being constrained by the most resource-intensive tasks.
In contrast, -Optimus, which allocates resources based on task workloads, shows better performance, especially in larger-scale cluster scenarios, with the speedup ratio up to 44% compared to DeepSpeed.
However, there are still many scenarios where -Optimus underperforms, sometimes even falling behind DeepSpeed.
This is because -Optimus’s task-level resource allocation overlooks the workload heterogeneity within tasks, thereby limiting training efficiency.
Moreover, its coarse granularity of task-level allocation can sometimes fail to achieve ideal load balancing among tasks, often resulting in performance being constrained by the slowest task.
In comparison, the operator-level strategy employed by enables finer-grained resource allocation and load balancing, consistently achieving higher efficiency compared to task-level strategies.
We find that even in scenarios where -Optimus has already surpassed the performance of SOTA systems, our system still manages to achieve a speedup ratio of up to 45% over -Optimus (4-task Multitask-CLIP on 8 GPUs), verifying its superiority.
As for -STMM, we find that it performs better than SOTA systems in most cases, with a speedup ratio of up to 20%, benefiting from its intra-task workload awareness and resource allocation.
However, it is designed for single-task (ST) multi-modal (MM) models: it decouples the tasks and optimizes each task separately, and such a single-task strategy is not the global optimum for multi-task cases.
The lack of awareness of inter-task heterogeneity limits its performance, causing it to underperform compared to the task-level strategy -Optimus in many cases.
For OFASys, -STMM shows almost similar performance to SOTA systems.
This is because -STMM gains acceleration by parallelizing sub-models of the multi-tower structure.
In contrast, OFASys utilizes a lightweight text adaptor, so most tasks that pair a modality with text are dominated by the other modality, making the intra-task parallelization of sub-models ineffective.
Compared to -STMM, our system jointly optimizes the allocation and scheduling of all tasks and operators, taking into account both intra-task and inter-task workload heterogeneity. This enables it to consistently outperform the single-task strategy of -STMM, achieving a speedup ratio of up to 59%.
§.§ Case Study
To better understand the advantages and performance gain of over the other competitors, we further conduct an in-depth case study of Multitask-CLIP (4 tasks, 16 GPUs). Fig. <ref> presents system performance considering three key metrics: cluster average utilization over time, average utilization per device, and computational utilization of each .
Firstly, -Seq, which executes the tasks sequentially with all resources, experiences fluctuating utilization due to the workload heterogeneity, leading to generally low overall utilization.
-Uniform, which allocates resources uniformly at the task level, improves cluster utilization to some extent at the iteration beginning,
but as tasks with light workloads finish, more devices become idle, declining overall utilization.
-Optimus partially mitigates this imbalance with workload-aware allocation, though it still suffers from utilization drops due to its coarse granularity of task-level allocation.
-STMM manages to enhance utilization via intra-task resource allocation for each task compared to -Seq, but the ignorance of inter-task heterogeneity limits its utilization.
In contrast, maintains consistently high utilization and the shortest iteration times thanks to its joint optimization of the unified computation graph of all tasks and operators, which addresses the heterogeneity both within and among tasks.
Furthermore, significantly elevates the utilization of all devices and all , showcasing its superior handling of workload balance through operator-level strategies.
In contrast, -Seq shows lower utilization across all devices and .
Although task-level strategies can enhance the computational efficiency of certain devices, the coarse granularity of allocation inevitably leads to workload imbalances, leaving many devices underutilized, sometimes even worse than -Seq, resulting in poor average cluster utilization.
-STMM also improves the utilization of certain devices and , but the results are still unsatisfactory as it fails to capture the workload differences among tasks, and fails to reach the global optimal allocation and scheduling plan for multi-tasks.
Overall, 's unified optimization of MT MM models captures both intra-task and inter-task heterogeneity, and effectively balances workloads.
Thus, it consistently enhances utilization across all operators and devices, and maintains high computational efficiency across the cluster.
§.§ Time Breakdown
Fig. <ref> shows the runtime breakdown for and -Seq across various workloads, primarily consisting of forward and backward propagation, parameter synchronization, and inter-send and receive. We've isolated parameter synchronization from the backward phase for individual analysis.
In MT MM training, we find that forward and backward propagation dominate the runtime, typically accounting for 80%-95% due to the large number of tasks and computational demands. focuses on reducing this significant time component through flexible resource allocation and scheduling.
Parameter synchronization usually consumes a small fraction of the time, about 5%-15%, since it only occurs after accumulating gradients from multiple tasks.
Notably, our system consistently achieves an equal or lower synchronization cost compared to -Seq. For instance, on 32 GPUs with QWen-VAL, it cuts synchronization time to just 55% of that of -Seq.
Although not the primary optimization focus, 's operator-level design inherently reduces synchronization overhead. This is achieved by synchronizing each parameter only within the device group that activates it and leveraging 's device island placement to convert potentially inter-island synchronizations into more local communications within device islands.
Furthermore, we find that while introduces extra overhead for inter-send and receive, this overhead remains minimal, typically not exceeding 6%, thanks to the device placement mechanism that avoids unnecessary communications.
Detailed ablation study of device placement is in <ref>.
§.§ Component Analysis
§.§.§ Optimality Analysis of Execution Planner
We analyze the optimality of execution planner in Fig. <ref>.
Specifically, we compare the iteration time of our system to the theoretical optimal time C^∗ derived from Theorem <ref> in <ref>.
As discussed in <ref>, when relaxing the constraints of the optimization problem (<ref>) and allowing the continuous divisibility of GPU resources n and operator number l, Theorem <ref> offers the theoretical optimum and corresponding optimal time C^∗.
Such a solution is unachievable due to these relaxed constraints, but serves as a theoretical upper bound of performance.
The execution planner preserves most of the optimality properties when finding the practical solution (e.g., Cond. (<ref>) and Cond. (<ref>) in <ref>), but still introduces minor biases (e.g., reinstating the l's to integers in <ref>, resource extension in <ref>).
In Fig. <ref>, we calculate and estimate the theoretical optimum C^∗ according to Theorem <ref>, and compare it with the iteration time of our system.
We find that across various task configurations and cluster sizes, the deviation between our system and the theoretical optimum is consistently low, below 7%.
This observation underscores the effectiveness of the planner in offering a practical and near-optimal execution plan for MT MM models.
Besides, the planner efficiently generates the execution plans within 3 seconds across all experiments, which is negligible compared to the model training time.
§.§.§ Ablation on Device Placement
We conduct an ablation study on the device placement strategy in <ref>, focusing on its impact on inter-communication overhead, which is the extra overhead introduced by our system.
Specifically, we compare our device placement strategy with a sequential placement strategy, which naïvely assigns consecutive devices in order.
Our results indicate that the inter-communication overhead of the sequential placement strategy is approximately 3-6 times greater than that of our strategy, taking up to 27% of the end-to-end training time, which is considerably high.
However, with our placement strategy, this overhead only takes up to 6%.
This demonstrates the effectiveness of our locality-aware placement, which significantly reduces the extra communication overhead.
§.§ Dynamicity Performance
We evaluate the performance of various systems during dynamic changes of the multi-task workloads, a common occurrence in MT MM training. For instance, tasks with fewer training data may exit early, and new tasks may join partway through training. We simulate these dynamic changes by altering the training task set.
When the multi-task workloads change, the current model is first saved, and the new set of tasks and the saved model is loaded to continue training.
Fig. <ref> illustrates the performance of each system under such conditions. Our system consistently achieves optimal training efficiency and the shortest overall training time. This advantage is due to its adaptability to dynamically changing workloads, enabling it to adopt an appropriate execution plan for the efficient training of MT MM models.
§ RELATED WORKS
§.§.§ Cluster Scheduling for DL Jobs
GPU clusters often design cluster schedulers to coordinate resource allocation and the execution order among multiple DL jobs.
Some cluster schedulers <cit.> allocate resources to jobs based directly on user-specified requirements.
Others <cit.> automatically allocate resources to each job based on the job scalability to the computing resource. Many of these schedulers aim to minimize job completion time (JCT).
For instance, Optimus <cit.> introduces the concept of marginal gain to guide resource allocation, aiming to minimize job completion time.
Here, we highlight the differences between these works and MT MM model training.
Unlike the independence among jobs in cluster scheduling, MT MM training involves execution dependencies among tasks.
Furthermore, while traditional scheduling focuses on job-level allocation, MT MM training requires finer-grained strategies to address intra-task workload heterogeneity.
§.§.§ Training Optimization on Heterogeneous Cluster
This line of research focuses on optimizing the distributed training efficiency of DL models on heterogeneous GPU clusters <cit.>.
While these works primarily concentrate on optimizing single-model training and address hardware heterogeneity, our work focuses on more complex MT MM models and addresses the challenges posed by their workload heterogeneity.
§.§.§ Data and Model Management Optimization
The data management community has developed effective systems for managing data and models in specific domains, such as graph-structured data and models <cit.>, recommendation system data <cit.>, tabular data <cit.>, and video data <cit.>. However, no existing work focuses on optimizing multi-task (MT) multi-modal (MM) data and model management, which is the key problem that our work addresses.
§ CONCLUSION
Efficient training of MT MM models faces significant system challenges due to the workload heterogeneity and complex execution dependency.
In this paper, we propose a system that facilitates efficient training of MT MM models via data heterogeneity-aware model management optimization, which jointly optimizes heterogeneity-aware workload parallelization and dependency-driven execution scheduling.
Extensive experiments demonstrate the consistently superior performance of our system, which outperforms existing state-of-the-art training systems with speedup ratios of up to 71%.
§ DETAILS OF SCALABILITY ESTIMATOR
The scalability estimator characterizes the execution time of m over n devices, T_m(n), by a generalized piecewise αβ function:
T_m(n) = α_m,i + β_m,i×c_m + β_m,i'×w_m/n,∀ n ∈ [n_i-1, n_i], i = 1,…,k
where k is the number of pieces, α_m,i represents the coefficient of fixed overheads (e.g., kernel launch costs), β_m,i and β_m,i' represent the reciprocal of execution efficiency (e.g., GPU computation speed and network bandwidth), w_m/n denotes the distributed workload of m across n devices (e.g., computational workload), and c_m denotes the workload that doesn't scale with n (e.g., communication volume of data parallelism).
Such a piecewise function indicates that under varying resource scales, due to changes in the per-device workload, coefficients such as
α, β and β' might differ, as the invoked kernels may vary across different workloads.
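A minimal Python sketch of evaluating this cost model is given below; the coefficient lists and breakpoints are assumed to have been fitted from profiling data, and the argument names simply mirror the symbols of the equation above.

def t_m(n, breakpoints, alpha, beta, beta_prime, c_m, w_m):
    # breakpoints = [n_0, n_1, ..., n_k]; piece i covers n in [n_{i-1}, n_i].
    for i in range(1, len(breakpoints)):
        if breakpoints[i - 1] <= n <= breakpoints[i]:
            return alpha[i - 1] + beta[i - 1] * c_m + beta_prime[i - 1] * (w_m / n)
    raise ValueError("device count n lies outside the profiled range")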
§ DETAILS OF BISECTION SEARCH FOR OPTIMUM OF CONTINUOUS PROBLEM
Alg. <ref> illustrates our bisection search algorithm to solve for the optimum of the malleable project scheduling problem (MPSP).
The function finds the value of T_m^-1(C), i.e., the (possibly fractional) allocation n_m at which the execution time equals C.
It first finds the closest valid allocations of m, denoted as n_m^- and n_m^+ with n_m^- < n_m^+, such that C ∈ [T_m(n_m^+), T_m(n_m^-)].
It then returns
n_m = ((C - T_m(n_m^+)) · n_m^- + (T_m(n_m^-) - C) · n_m^+) / (T_m(n_m^-) - T_m(n_m^+)),
which is the linear interpolation between n_m^- and n_m^+
such that T_m(n_m) = C.
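For illustration, a compact Python sketch of this inversion and of the surrounding bisection over the makespan C is given below. It assumes that T_m(n) is strictly non-increasing in n, that invert_time receives a single task's timing function T (a callable) and its valid allocations, and that bisect_makespan receives per-task dictionaries T and valid_ns together with the cluster size N, with feasibility taken as the total fractional device demand not exceeding N.

def invert_time(C, T, valid_ns):
    # Smallest (possibly fractional) allocation n with T(n) <= C, obtained by
    # linear interpolation between the two closest valid allocations.
    if C >= T(min(valid_ns)):
        return min(valid_ns)               # minimal allocation already meets C
    n_lo = max(n for n in valid_ns if T(n) >= C)   # fewer devices, slower
    n_hi = min(n for n in valid_ns if T(n) <= C)   # more devices, faster
    t_lo, t_hi = T(n_lo), T(n_hi)
    if n_lo == n_hi or t_lo == t_hi:
        return n_lo
    return ((C - t_hi) * n_lo + (t_lo - C) * n_hi) / (t_lo - t_hi)

def bisect_makespan(tasks, T, valid_ns, N, iters=50):
    # C_lo: every task on its largest allocation (jointly infeasible in general);
    # C_hi: every task on its smallest allocation (assumed jointly feasible).
    C_lo = max(T[m](max(valid_ns[m])) for m in tasks)
    C_hi = max(T[m](min(valid_ns[m])) for m in tasks)
    for _ in range(iters):
        C = 0.5 * (C_lo + C_hi)
        demand = sum(invert_time(C, T[m], valid_ns[m]) for m in tasks)
        if demand <= N:
            C_hi = C                       # feasible: try a smaller makespan
        else:
            C_lo = C
    return C_hi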
§ COMPARISON ON SINGLE-TASK MULTI-MODAL WORKLOAD
We also compare our system with the baseline systems in the single-task (ST) multi-modal (MM) scenario, which is a special case of MT MM training, as shown in Fig. <ref>. We are pleased to observe that even in the single-task scenario, our system outperforms SOTA systems by up to 48%. This is attributed to its fine-grained, operator-level resource allocation and scheduling, which recognize not only the inter-task workload heterogeneity but also intra-task operator workload variations — a capability beyond the reach of task-level strategies as well as SOTA systems.
It is worth noting that -STMM has similar performance to our system in the ST MM scenario, which is reasonable.
§ MEMORY CONSUMPTION
We also conduct a comparative analysis of memory consumption between and the other competitors. Fig. <ref> depicts the memory usage for each device in the scenario of Multitask-CLIP (4 tasks, 16 GPUs). Our findings indicate that generally exhibits lower memory consumption than SOTA systems such as Megatron-LM and DeepSpeed. This efficiency stems from 's operator-level strategy and selective parameter storage feature, where only devices that activate a specific operator need to maintain its corresponding parameters, thereby minimizing redundant storage. Additionally, we've observed that task-level strategies, e.g., -Optimus, experience significant memory imbalances. This issue is even more pronounced with -Uniform, which we have omitted in Figure <ref> for clarity.
In contrast, our system maintains an excellent balance of memory consumption across devices, a success that is attributed to our device placement strategies.
|
http://arxiv.org/abs/2409.03691v1 | 20240905164436 | Physics case for quarkonium studies at the Electron Ion Collider | [
"Daniël Boer",
"Chris A. Flett",
"Carlo Flore",
"Daniel Kikoła",
"Jean-Philippe Lansberg",
"Maxim Nefedov",
"Charlotte Van Hulse",
"Shohini Bhattacharya",
"Jelle Bor",
"Mathias Butenschoen",
"Federico Ceccopieri",
"Longjie Chen",
"Vincent Cheung",
"Umberto D'Alesio",
"Miguel Echevarria",
"Yoshitaka Hatta",
"Charles E. Hyde",
"Raj Kishore",
"Leszek Kosarzewski",
"Cédric Lorcé",
"Wenliang Li",
"Xuan Li",
"Luca Maxia",
"Andreas Metz",
"Asmita Mukherjee",
"Carlos Muñoz Camacho",
"Francesco Murgia",
"Pawel Nadel-Turonski",
"Cristian Pisano",
"Jian-Wei Qiu",
"Sangem Rajesh",
"Matteo Rinaldi",
"Jennifer Rittenhouse West",
"Vladimir Saleev",
"Nathaly Santiesteban",
"Chalis Setyadi",
"Pieter Taels",
"Zhoudunmin Tu",
"Ivan Vitev",
"Ramona Vogt",
"Kazuhiro Watanabe",
"Xiaojun Yao",
"Yelyzaveta Yedelkina",
"Shinsuke Yoshida"
] | hep-ph | [
"hep-ph",
"hep-ex",
"nucl-ex",
"nucl-th"
] |
1]Daniël BoerEditor
2]Chris A. FlettEditor
2,3,4]Carlo FloreEditor
5]Daniel KikołaEditor
2]Jean-Philippe LansbergEditor
2]Maxim NefedovEditor
2,6]Charlotte Van HulseEditor
[Editor]Editor
7]Shohini Bhattacharya
1,2]Jelle Bor
7b]Mathias Butenschoen
2,8]Federico Ceccopieri
9,10]Longjie Chen
11]Vincent Cheung
4]Umberto D'Alesio
12]Miguel Echevarria
7,13]Yoshitaka Hatta
14]Charles E. Hyde
12,15]Raj Kishore
16]Leszek Kosarzewski
17]Cédric Lorcé
15,18]Wenliang Li
19]Xuan Li
1,4]Luca Maxia
20]Andreas Metz
21]Asmita Mukherjee
2]Carlos Muñoz Camacho
4]Francesco Murgia
17,22]Pawel Nadel-Turonski
4]Cristian Pisano
23]Jian-Wei Qiu
24]Sangem Rajesh
25]Matteo Rinaldi
26,27]Jennifer Rittenhouse West
28]Vladimir Saleev
29]Nathaly Santiesteban
1,30]Chalis Setyadi
31]Pieter Taels
7]Zhoudunmin Tu
19]Ivan Vitev
11,32]Ramona Vogt
33,34]Kazuhiro Watanabe
35,36]Xiaojun Yao
2,37]Yelyzaveta Yedelkina
9,10]Shinsuke Yoshida
[1]Van Swinderen Institute for Particle Physics and Gravity, University of Groningen, 9747 AG Groningen, The Netherlands
[2]Université Paris-Saclay, CNRS, IJCLab, 91405 Orsay, France
[3]Dipartimento di Fisica, Università di Torino, and INFN Sezione di Torino, Via P. Giuria 1, I-10125 Torino, Italy
[4]Dipartimento di Fisica, Università di Cagliari, and INFN Sezione di Cagliari, Cittadella Univ., I-09042 Monserrato (CA), Italy
[5]Faculty of Physics, Warsaw University of Technology, plac Politechniki 1,00-661, Warszawa, Poland
[6]University of Alcalá, Alcalá de Henares (Madrid), Spain
[7]Physics Department, Brookhaven National Laboratory, Bldg. 510A, Upton, NY 11973, USA
[7b]II. Institut für Theoretische Physik, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany
[8]Université de Liège, B4000, Liège, Belgium
[9]Key Laboratory of Atomic and Subatomic Structure and Quantum Control (MOE), Guangdong Basic Research Center of Excellence for Structure and Fundamental Interactions of Matter, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China
[10]Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Guangdong Provincial Key Laboratory of Nuclear Science, Southern Nuclear Science Computing Center, South China Normal University, Guangzhou 510006, China
[11]Nuclear and Chemical Sciences Division, Lawrence Livermore National Laboratory, Livermore, CA 94551 USA
[12]Department of Physics & EHU Quantum Center, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain
[13]RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, NY 11973, USA
[14]Department of Physics, Old Dominion University, Norfolk, VA 23529, USA
[15]Center for Frontiers in Nuclear Science, Stony Brook University, Stony Brook, NY 11794, USA
[16]Department of Physics, The Ohio State University, Columbus, Ohio 43210, USA
[17]CPHT, CNRS, Ecole polytechnique, Institut Polytechnique de Paris, 91120 Palaiseau, France
[18]Stony Brook University, Stony Brook, NY 11794, USA
[19]Los Alamos National Laboratory, Los Alamos, NM 87545, USA
[20]Department of Physics, Temple University, Philadelphia, PA 19122, USA
[21]Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India
[22]University of South Carolina, Columbia, SC 29208, USA
[23]Theory Center, Jefferson Laboratory, Newport News, VA 23606, USA
[24]Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India
[25]Dipartimento di Fisica. Università degli studi di Perugia, and INFN Sezione di Perugia. Via A. Pascoli, Perugia, 06123, Italy
[26]Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
[27]University of California at Berkeley, Berkeley, CA 94709, USA
[28]Joint Institute for Nuclear Research, Dubna, Russia
[29]University of New Hampshire, Durham, NH 03824, USA
[30]Department of Physics, Universitas Gadjah Mada, BLS 21 Yogyakarta, Indonesia
[31]Department of Physics, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerpen, Belgium
[32]Department of Physics and Astronomy, University of California at Davis, Davis, CA 95616, USA
[33]SUBATECH UMR 6457 (IMT Atlantique, Université de Nantes, IN2P3/CNRS), 4 rue Alfred Kastler, 44307 Nantes, France
[34]Faculty of Science and Technology, Seikei University, Musashino, Tokyo 180-8633, Japan
[35]Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
[36]InQubator for Quantum Simulation, Department of Physics, University of Washington, Seattle, Washington 98195, USA
[37]School of Physics, University College Dublin, Dublin 4, Ireland
§ ABSTRACT
The physics case for quarkonium-production studies
accessible at the US Electron Ion Collider is described.
§ INTRODUCTION
The Electron-Ion Collider (EIC) accelerator and detector systems are currently designed following the elaboration of an outstanding physics case aimed at further exploring the nucleon and nucleus partonic structure. The interested reader will find it useful to consult
reviews of the EIC <cit.>.
Bound states of heavy quark-antiquark pairs, QQ̅, i.e. quarkonia, allow for a detailed study of basic properties of quantum chromodynamics (QCD), the theory of the strong interaction.
Indeed, charmonia and bottomonia have played a crucial role in the establishment of QCD as the theory of the strong interaction, given the clean signature they provide in different observables.
On the theory side, the main origin of the simplifications is the hierarchy m_Q≫Λ_ QCD, with m_Q the mass of a heavy quark, meaning that for processes occuring at this scale (or higher), a perturbative expansion in α_s of QCD
is allowed.
In parallel, the non-perturbative effects associated with the formation of the bound state can be factorised.
Heavy quarkonia are multiscale systems. Besides m_Q and Λ_QCD, one needs to consider, in addition, the scale of the typical momentum transfer between heavy constituent quarks (m_Q v), v being the velocity of the heavy quarks in the rest frame of the bound state, and the scale of their binding energy (m_Q v^2), all of which become widely separated in the limit m_Q→∞.
At this point, the non-relativistic nature of the system comes into play, allowing for the development of different effective theories of QCD that attempt to more adequately describe the production of the bound state in the presence of different relevant scales as well as models such as the Colour-Singlet (CS) Model <cit.> or Colour-Evaporation Model (CEM) <cit.>.
The multitude of existing theoretical approaches to describe quarkonium production reflects the fact that unfortunately, up to now, there is no universal physics picture of this process accepted by the community that would provide a satisfactory description of all available experimental data <cit.>. This complicates the
use of quarkonia as tools for precision studies. Heavy quarkonia nevertheless remain useful to uncover new facets of the structure of nucleons and nuclei which we review in this document.
In this context, measurements of various quarkonium-production observables in electron-proton (ep) and electron-nucleus (eA) collisions at the EIC could provide crucial experimental clues to finally settle the quarkonium-production-mechanism debate.
Important targets for the EIC experimental programme are vector-quarkonium-polarisation observables and cross-section measurements of C=+1 states, like the η_c and χ_c,b. These play a central role in the current debate about the heavy-quarkonium-production mechanism and yet corresponding precise data from ep collisions at HERA are simply lacking.
Such measurements would hopefully clear up the quarkonium-production debate and allow one to fully employ quarkonium data at the EIC as tools.
Before discussing how quarkonia can be used as tools to study nucleons, let us recall that the multi-dimensional structure of nucleons is parameterised by different hadronic functions, which encode the dynamics of partons at different levels of complexity. These span from the one-dimensional (1D) parton distribution functions (PDFs), to the five-dimensional (5D) Wigner distributions –or generalised transverse-momentum-dependent distributions (GTMDs)–, to mention a few. These also incorporate a variety of spin and momentum correlations between the parton (or partons) participating in the hard subprocess and its (their) parent hadron.
Depending on the considered scattering process and the measured kinematics, different hadronic functions enter the relevant cross sections.
Among them, let us cite the transverse-momentum-dependent PDFs (TMD PDFs or TMDs), arising from TMD factorisation <cit.> and which provide information on the distribution of partons inside the nucleon as a function of both their longitudinal
and transverse momentum.
In the case of quarkonium production at transverse momenta small compared to their mass, and for specific other kinematical end-point regions, new TMD functions related to the produced quarkonium, referred to as shape functions, are expected to enter the cross-section formula besides the TMD PDFs of the initial-state hadron(s). This reflects the interplay between the radiation of soft gluons and effects of the formation of the QQ̅ bound state.
Their impact on the phenomenology remains at present unknown.
Much progress has been made in the determination of the above-mentioned PDFs, achieving different levels of success. Currently, the gluon distributions in general remain
much less explored than their quark analogues. In this context, quarkonia arise as a powerful handle to remedy this situation since, in the vast majority of
situations, the QQ̅ pair at the origin of the quarkonium comes from photon-gluon (gluon-gluon) fusion in ep (resp. pp) collisions, whereas deep inelastic scattering (DIS) or Drell-Yan-pair production are sensitive to gluon only through radiative QCD corrections.
However, it has been shown <cit.> that factorisation of observables –cross sections, angular modulations, spin asymmetries,…– in terms of TMD PDFs is less universal than that in terms of
standard (collinear)
PDFs and that consequently such a factorisation could be violated in back-to-back-hadron production in proton-proton (pp) collisions. In ep and eA collisions, there is no anticipated violation of TMD factorisation, at least for
inclusive single-hadron production, so quarkonium measurements will likely be easier to interpret in terms of gluon TMDs at the EIC
rather than at hadron colliders.
Quarkonia are also key players in exclusive reactions. This is not surprising as exclusive meson production involving a hard scale
is one of the main processes to access Generalised Parton Distributions (GPDs).
Gluon GPDs are in particular accessible via exclusive
heavy-quarkonium production <cit.>.
These GPDs provide information on the distribution of gluons inside the nucleon simultaneously as a function of their longitudinal momentum and their transverse position. They also provide information on the angular momentum of the gluons inside the nucleon, about which very little is known to date. Furthermore, exclusive heavy-quarkonium production near the production threshold was suggested <cit.> as a tool for constraining the gluon condensate in the nucleon, itself linked to the nucleon mass, albeit with some unavoidable model dependence.
At small momentum fraction, x, the differential exclusive electro- and photoproduction cross sections of quarkonia
can be expressed in terms of particular products of integrals of GTMDs.
In single-quarkonium production, when a collinear expansion is applied, the cross section reduces to expressions in terms of GPDs, see for example <cit.>. However, in general, especially beyond single-particle production, it
provides additional information on GTMDs and offers an opportunity to learn more about the combined three-momentum and spatial distributions of gluons inside a nucleon.
Moreover, while there is a direct relation between the exclusive photoproduction case in ep collisions and that in ultra-peripheral pp and pPb collisions (UPCs), studies at the EIC would allow one to probe in more detail the transverse-momentum dependence of the GTMDs.
To date, the detector simulations for the EIC physics case connected to quarkonium physics have been limited to and Υ exclusive production
as reported in the EIC Yellow report <cit.>. Whereas, as we discussed above, quarkonium production is still the object of intense debates within the community[We guide interested readers to the reviews <cit.>, which address HERA and Tevatron results, to more recent ones <cit.> addressing the progress made thanks to the RHIC and LHC data, to the HEPData database (https://www.hepdata.net/) and to a dedicated repository of quarkonium measurements up to 2012 (http://hepdata.cedar.ac.uk/review/quarkonii/), as well as to <cit.> and <cit.> for experimental quarkonium data.], there is no doubt that it can play a crucial role in the scientific success of the EIC. As
was recently done for the High Luminosity LHC phase <cit.>, we gather in this review what we believe to be the most complete list of quarkonium studies that can be carried out at the EIC along with their motivation.
The document is organised as follows. In Section <ref>, the EIC accelerator system and the first EIC detector, ePIC, as currently envisioned are presented. After a description of the kinematics of lepton-hadron collisions, the importance and a theoretical treatment of QED radiation are discussed, to then end with a note on feed-down from b-quark production in the study of charmonium. In Section <ref>,
the different theoretical descriptions and measurements related to the production mechanism of quarkonia are presented. First, the various existing theoretical formalisms in collinear factorisation are discussed. Then, the legacy of existing measurements
and the potential of future measurements at the EIC in constraining these formalisms are presented. Subsequently, it is shown that TMD observables can also teach us about quarkonium formation and polarisation and predictions for the EIC are made. Finally, the effect of final-state interactions on the production of quarkonium in lepton-nucleus colissions is touched upon. Section <ref> focuses on the studies accessible in electron-proton collisions in order to advance our knowledge of the nucleon partonic structure
and then moves on to studies with nuclear beams, which is a unique feature of the EIC, and which will allow us to make a giant leap forward into a new precision era of the partonic structure of nuclei.
We underscore throughout this comprehensive review the diverse ways in which the EIC will utilise quarkonia to probe hadronic and nuclear physics and, conversely, will itself be a powerful tool for probing quarkonia.
§ GENERALITIES ABOUT QUARKONIUM STUDIES AT THE EIC
§.§ The proposed EIC accelerator system
The EIC
is an upcoming particle accelerator that will deliver intense beams of longitudinally polarised electrons and polarised light nuclei (p, d, ^3He) as well as unpolarised heavier ions, ranging up to uranium. It will produce electron-ion collisions at the highest energy and at the highest rate
ever achieved.
The EIC will be constructed in Brookhaven National Laboratory
using a few key elements of the currently operating Relativistic Heavy Ion Collider (RHIC) <cit.>, such as the hadron ring
and the RHIC Electron-Beam-Ion-Source (EBIS) <cit.>.
The collider will be supplemented with a
new electron ring, which will contain continuously injected, polarised electrons with an energy from 5 GeV up to 18 GeV.
The coverage in centre-of-mass energy will range from
28 GeV to 141 GeV for
lepton-proton collisions, while for lepton-ion collisions an upper energy of 89 GeV/nucleon will be reached.
The
expected instantaneous luminosity depends on the centre-of-mass energy and will range from 10^33 to 10^34 cm^-2s^-1 for electron-proton collisions, with a maximum value expected for = 105 GeV. For electron-ion reactions, it will be on the order of 10^34 cm^-2s^-1. Such figures will correspond to integrated luminosities
of the order of 10 to 100 fb^-1 per year. The designed average polarisation of electron, proton and ^3He beams is
of the order of 70%.
At present, the installation of a first EIC detector is foreseen
at interaction point 6 (IP6).
A second interaction point (IP8) can, at
any stage, host a second and complementary detector.
The second interaction point could accommodate a design with a secondary focus, which in combination with forward spectrometry would
allow for an extension of the acceptance towards the detection of particles at very small polar angles.
The interaction points will re-use the existing large detector halls, currently occupied by the STAR and sPHENIX experiments.
The first collisions at the EIC are expected in the early 2030s.
§.§ The proposed EIC detector
§.§.§ Requirements for an EIC detector in the context of quarkonium studies
The specification of an EIC detector is determined by the kinematics of the electron-ion scattering (see fig:eic:generic:det) and the observables and processes of interest. It should address the full range of physics outlined in the
EIC White Paper <cit.>, the NAS report <cit.> and the EIC Yellow Report <cit.>.
The basic requirements include <cit.> 4π hermeticity with large
acceptance in pseudorapidity, η, of about -4 < η < 4, very good momentum resolution both in the central, forward and backward regions,
very good energy resolution in electromagnetic calorimeters and particle identification capabilities up to 50 GeV
in momentum. Such a setup allows
one to study processes over a wider range of four-momentum transfer Q. In addition, measurements of heavy-flavour hadron production demand a microvertexing detector that provides good impact-parameter resolution.
The detector technologies and configuration implementation will be known once the detector design is finalised.
However, existing high-energy experiments (for example ALICE at the LHC and STAR at RHIC) indicate that an EIC detector that fulfils the aforementioned requirements will have capabilities for and Υ(nS) measurements via their e^+e^- decay channel <cit.>.
The precision of quarkonium reconstruction will strongly depend on the hardware configuration. For example, an internal silicon tracker could generate additional combinatorial background arising from conversions γ→ e^+e^-, limiting precision for low-mass quarkonia at low . Moreover, the energy loss of electrons due to Bremsstrahlung in the detector material deteriorates the mass resolution. It may complicate, if not make impossible, separation of the ,
and states. Measurements of other quarkonium states (for instance χ_c or χ_b) add constraints for the experimental apparatus. Studies of decays involving photon radiation (e.g., χ_c(1P) → + γ) would require an electromagnetic calorimeter able to isolate a soft photon and measure its energy with appropriate resolution. In addition, a muon detector would significantly extend capabilities for quarkonium studies. This is
briefly discussed in Sec. <ref>.
Three different designs, ATHENA <cit.>, CORE <cit.> and ECCE <cit.>, were proposed.
The main difference between the ATHENA and ECCE designs lies in the magnet, providing a 3.0 T and a 1.4 T magnetic field, respectively.
The distinguishing characteristic of the CORE detector is the compactness of the detector, obtained through exploitation of technological advances.
From the proposed designs, the ECCE proposal was selected as baseline for the first EIC detector,
with improvements to the proposal
at present under development.
This first EIC detector received the name electron-proton/ion collider (ePIC) detector.
A description of the ePIC detector in its current design state is given below.
§.§.§ The ePIC detector
The central barrel of the ePIC detector, as currently
envisioned, is depicted in figure <ref>. Here, the hadron
beam comes in from the left and defines the forward-going direction.
The central barrel is around
10 m long and
5 m in diameter, providing a full coverage
in azimuthal angle and a coverage in polar angle between 0^∘ and 178^∘, corresponding to a pseudorapidity coverage
between -4 and 4.
In addition to the barrel detector, detectors in the far-forward and far-backward regions are foreseen.
The far-backward region will contain a luminosity monitor and two detectors to tag
low-Q^2 events. The far-forward region will contain a series of detectors aimed at detecting particles produced close to the beam line and as such will be instrumental
to the reconstruction of an extensive set of diffractive processes and tagged measurements, such as proton reconstruction in exclusive processes, tagging of the two spectator protons when investigating
the neutron structure through lepton-^3He interactions and tagging of respectively the neutron and Λ-baryon decay particles when probing the pion and kaon
structure
in lepton-proton interactions.
The far-forward system will consist of a B0 spectrometer,
containing an electromagnetic calorimeter and trackers for respectively the tagging of photons and reconstruction of charged particles,
Roman Pots
and off-momentum detectors,
performing charged-particle reconstruction,
as well as
Zero-Degree Calorimeters,
capable of detecting photons and neutrons.
In the central barrel, track and vertex reconstruction will be performed by
silicon monolithic active pixel sensors placed close to the beam line and interaction point, while at a further distance
micro-pattern gaseous detectors (
micro-Resistive Well
and Micro-Mesh Gaseous Structure) and AC-coupled low-gain
avalanche diodes will contribute to track reconstruction.
The tracking detectors will be embedded into a 1.7 T magnetic field.
Such a setup will provide the momentum resolution needed to
fulfil the EIC physics programme.
Electromagnetic calorimeters cover the backward, central and forward regions of the central barrel, providing
electron and photon detection as well as hadron suppression.
In the backward region, a high-precision lead-tungstate calorimeter read out by silicon photo-multipliers is foreseen.
The detector will be
critical to the reconstruction of (scattered) electrons,
improving the reconstruction precision over that obtained from tracking detectors only,
and in the identification of these electrons, by suppressing the background contribution strongly.
This contribution originates mostly from charged pions.
In the central region,
a lead-scintillator imaging calorimeter is foreseen.
For the forward region, an electromagnetic calorimeter will be integrated with the forward hadronic calorimeter.
The system
focuses on the containment of high-energetic particle showers while at the same time providing a
good energy resolution for lower-energetic particles.
Particle identification requires
a good position resolution, in particular in the electromagnetic calorimeter. This will be provided by constructing the electromagnetic calorimeter
out of segments of scintillating fibres embedded in tungsten powder,
smaller than the Molière radius. This will also result in a good shower
separation at high pseudorapidity.
In the central region, a hadronic calorimeter
will allow for the detection of neutral hadrons and as such will improve the resolution of jet reconstruction.
Given the good momentum resolution of the central trackers, the central hadronic calorimeter system will not have
an impact on the reconstruction of charged particles.
The forward hadronic calorimeter, which forms an integrated system with the electromagnetic calorimeter, will
consist of layers of alternating tiles of scintillating material and steel, while towards the end of the detector
the steel is replaced by tungsten in order to serve as
tail catcher of the shower and thus
maximise the interaction length within the available space.
Also in the backward region, a hadronic calorimeter will be installed, with the aim to serve as a tail catcher of particle signals.
Detectors based on the detection of Cherenkov light will be used for the identification of
charged pions, kaons and protons, while also contributing to electron identification.
In addition, the
aforementioned AC-coupled low-gain
avalanche diodes will provide particle identification in the low-momentum region, below ∼ 2 GeV, based on the detection of the
time of flight of a particle.
In the backward region, a proximity-focusing ring-imaging Cherenkov (RICH) detector with aerogel as radiator
will be used.
Because of the tight space constraints, a
DIRC – detection of internally reflected Cherenkov light – detector will be incorporated in the central region.
The forward region will contain a dual RICH detector, with
an aerogel radiator for the low-momentum particles
and C_2F_6 for the high-energetic ones, covering the momentum range up to 50 GeV.
No muon detectors are foreseen for the ePIC detector. While first studies, performed for the ATHENA and ECCE proposals, indicate that the reconstruction of
mesons from exclusive processes through their e^+e^- decay should be possible with the ePIC detector, there are neither studies for other quarkonium states nor for inclusive or semi-inclusive processes.
Here, dedicated muon detectors might be needed. This is discussed in the following sub-section.
§.§.§ The case for a muon detector for quarkonium studies at the EIC
Measurement of vector-quarkonium production using their di-muon decay provides significant benefits. The energy loss of muons due to interactions with detector material is much smaller than that of electrons. This leads typically to a better momentum resolution
of the muons than of the electrons, and
therefore the resolution of the quarkonium mass reconstructed in the μ^+μ^- channel is better compared to the e^+e^- one. The LHCb and CMS experiments provide a case in point as the performance of their muon detectors facilitated a rich and fruitful quarkonium physics program, which included that of
, , and other quarkonium states such as the χ_c and χ_b via their radiative decays into vector quarkonia. Additional measurements via the μ^+μ^- decay channel would also essentially double the available statistics as the branching ratios into μ^+ μ^- and e^+e^- are
nearly the same and enable analyses of rare decays (for example, χ_c →μμ). With a proper design,
studies via the di-muon channel benefit from a lower combinatorial background, thus improving the statistical precision of the measurement. In addition, they provide a cross check
of the e^+e^- results, which should in turn reduce systematic uncertainties.
In summary, a muon detector would significantly extend capabilities for quarkonium studies at the EIC. The present ePIC design does not consider muon-identification instrumentation, but
possibilities for an enhanced muon identification can be investigated for ePIC. Moreover,
the incorporation of dedicated muon-identification detectors in
the design phase of the 2^ nd EIC detector can
vastly extend quarkonium measurement capacities in the manner described above.
§.§ Kinematics and QED radiative corrections
§.§.§ Kinematics of electron-hadron reactions
In this section, we collect
basic kinematical definitions useful for the description of lepton-hadron reactions. The next section is devoted to how QED radiative corrections on the lepton side can affect the resolution on various kinematic variables and to possible ways to address this problem.
Let us consider the inclusive production of an identified hadron H, which in the context of this review is most likely to be a quarkonium, in electron-nucleon (eN) scattering:
e(ℓ) + N(P_N) → e(ℓ^') + H(P_ H) + X,
For electron-nucleus (eA) scattering, the momentum P_N usually denotes the average momentum of a single nucleon.
Depending on the experimental possibilities, one can
tag the outgoing electron with the momentum ℓ^' or
consider the reaction inclusive w.r.t. the final-state electron. If the momentum ℓ^' has been measured,
one can define the momentum transfer q=ℓ-ℓ^' with q^2 = -Q^2 and the following Lorentz-invariant kinematic variables become experimentally accessible:
x_B = Q^2/(2 P_N· q) , y = (P_N· q)/(P_N·ℓ) , z = (P_N· P_H)/(P_N· q) ,
where z is referred to as the elasticity and y as the inelasticity, which should not be confused with the rapidity.
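As a purely illustrative numerical aid, these Lorentz invariants can be computed from the four-momenta as in the short Python sketch below, where each four-vector is given as (E, p_x, p_y, p_z) in GeV and QED radiative effects are ignored (Born approximation).

def minkowski_dot(a, b):
    # Metric signature (+, -, -, -).
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

def dis_variables(l_in, l_out, P_N, P_H):
    # q = l - l' is the exchanged-photon four-momentum.
    q = [l_in[i] - l_out[i] for i in range(4)]
    Q2 = -minkowski_dot(q, q)
    x_B = Q2 / (2.0 * minkowski_dot(P_N, q))
    y = minkowski_dot(P_N, q) / minkowski_dot(P_N, l_in)
    z = minkowski_dot(P_N, P_H) / minkowski_dot(P_N, q)
    return Q2, x_B, y, z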
Among frame-dependent variables, one usually distinguishes the transverse momentum of the hadron P_T H in the laboratory frame (see
fig:eP_frames (left)), in which the initial electron e(ℓ) and
nucleon N(P_N) collide head on, defining the Z (collision) axis, from the transverse momentum P_T H^* of the hadron H in the photon-hadron frame (see
fig:eP_frames (right)), where three-momenta q and P_N are aligned with the Z axis of this frame[Different photon-hadron frames are related by
a boost along the Z axis. In particular, one can adopt the photon-nucleon centre-of-momentum frame where q+ P_N=0. The transverse components of the momenta are the same in all photon-hadron frames.]. The word “photon” in the frame name
specifically refers to the one-photon-exchange approximation between the electron and the hadronic part of the process. In this review, we will often use the simplified notation for the absolute value of the transverse momentum of the produced hadron: P_T = | P_T H| or P_T^* = | P^*_T H|.
Even if the colliding particles are
unpolarised, there could always be some dependence of the cross section on the azimuthal angle ϕ_T (or ϕ^*_T) formed
by the vector
P_T H (or P^*_T H) and the plane spanned by the initial (ℓ) and final (ℓ') lepton three-momenta (fig:eP_frames), due to the exchanged-photon polarisation. If the initial
nucleon and/or electron have transverse polarisation, additional angular modulations of the cross section, related to the direction(s) of the transverse spin vector(s) of the colliding particles, can be generated. The transverse polarisation vector of the initial nucleon is denoted as S_T and the angle of this vector with respect to the lepton plane in the photon-hadron (resp. laboratory) frame is generally indicated as ϕ^*_S (resp. ϕ_S).
If the recoil effects of the photons which can be emitted by the initial and final electrons during the scattering process (QED radiative corrections)
are neglected, then the four-momentum of the exchanged photon is simply q=ℓ-ℓ^' as stated above. In such an approximation, the variables of eq:SIDIS:kin-var as well as the frame-dependent variables, such as P_T H^*, can be directly computed from the measured energy and momentum of the scattered electron. However, such a QED Born approximation
might be insufficient for precision studies.
Section <ref> is devoted to this issue.
The regime of the process of eq:process:e-pA in which the quasi-real-photon approximation can be applied to the exchanged photon,
i.e. when Q is negligible compared to the hard scale (m_Q, P_T, P_T^*, ...), is commonly referred to as photoproduction, while the regime with Q being the hard scale, or among the potential hard scales, is called leptoproduction or (semi-inclusive) deep inelastic scattering (SIDIS). Experimentally, photoproduction is usually defined by a fixed cut on the photon virtuality, Q < 1 GeV.
Besides the well-known regimes of photoproduction and leptoproduction (or SIDIS), which a priori require
setting some constraints on Q^2, it appears very valuable for quarkonium studies to consider measurements where Q^2 is fully integrated over. Such yields then contain the contributions from both quasi-real and off-shell photons. This proposal is described in
section <ref>.
As it was mentioned in the introduction, polarisation observables play an important role in quarkonium physics. The polarisation parameters of a spin-1 heavy quarkonium λ_θ, μ_θϕ and ν_θϕ parametrise the angular distribution of decay leptons in the quarkonium rest frame:
dσ/dΩ ∝ 1 + λ_θ cos^2θ + μ_θϕ sin2θ cosϕ + (ν_θϕ/2) sin^2θ cos2ϕ.
These parameters depend on the orientation of the axes of the coordinate system chosen in the quarkonium rest frame with popular frame choices such as the Helicity, Collins-Soper, Gottfried-Jackson and target frames (see Section 2.3 of <cit.>). The same definition of polarisation parameters holds for the case of exclusive production of a vector quarkonium.
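For illustration only, the angular weight defined by this distribution can be evaluated as in the following small Python sketch, which assumes the conventional factor 1/2 in front of the ν_θϕ term and takes the decay-lepton angles (θ, ϕ) in whichever quarkonium rest frame has been chosen.

import math

def decay_angular_weight(cos_theta, phi, lam_theta, mu_theta_phi, nu_theta_phi):
    sin2_theta = 1.0 - cos_theta**2                       # sin^2(theta)
    sin_2theta = 2.0 * cos_theta * math.sqrt(sin2_theta)  # sin(2*theta)
    return (1.0
            + lam_theta * cos_theta**2
            + mu_theta_phi * sin_2theta * math.cos(phi)
            + 0.5 * nu_theta_phi * sin2_theta * math.cos(2.0 * phi))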
§.§.§ On the importance of QED corrections
The possibility to make a distinction between the photoproduction and
electroproduction (or SIDIS)
regimes, together with the rich phenomenology provided by measurements differential in the variables x_B, y, z as well as P_T^* and ϕ^*_T, has
always been considered as an advantage of lepton-hadron reactions over hadron-hadron ones.
However, the emission of photons by initial- and final-state leptons modifies the relation between the momentum ℓ^' of the final-state lepton measured in the detector and the four-momentum q of the photon exchanged between the leptonic and hadronic parts of the process in eq:process:e-pA (see the Fig. <ref>) which in turn modifies the Lorentz-invariant variables
(eq:SIDIS:kin-var) as well as P_T H^* and ϕ^*_T. Beyond the Born approximation of QED, this relation is no longer simply q=ℓ-ℓ^' but includes the recoil from emitted photons. For strictly inclusive DIS measurements, as opposed to SIDIS, the application of QED radiative corrections boils down to an overall radiative correction factor to the cross section differential in x_B and the inelasticity y <cit.>. In the SIDIS case, fully differential Monte-Carlo computations have to be performed, using dedicated tools such as DJANGOH <cit.>.
Recently, it has been shown <cit.> that QED radiative corrections fundamentally limit the accuracy of
SIDIS measurements, in particular for the kinematic regime where the TMD factorisation
is needed. In this context, a new approach to their treatment has been proposed. The QCD factorisation for the SIDIS cross section was historically discussed in the photon-hadron frame. However, as it was mentioned above, the collision-induced photon radiations change both the direction and magnitude of the exchanged virtual photon, making the photon-hadron frame and the quantities related to it only approximately defined.
The ambiguities in the definition of kinematic variables in photon-hadron frame can impact our ability to extract the TMDs and, in particular, to use the angular modulation in ϕ_T^* to separate contributions of different TMD PDFs and fragmentation functions (FFs). Since the QED radiations differently affect the determination of the angles ϕ^*_T and ϕ^*_S (fig:eP_frames), this can affect the determination of various azimuthal (spin) asymmetries <cit.>. In addition to the uncertainty of the “photon-hadron” frame, the collision-induced photon radiations also change
the true values of x_B and Q^2.
Although the effects of the QED radiations could be calculated perturbatively, the main concern is those QED radiative-correction effects
which are logarithmically enhanced due to the collinear and infrared sensitivity coming from the smallness of the electron mass m_e
compared to all the other scales of the process. Omitting these effects may lead to significant uncertainties in some kinematic regimes
where
a wide phase space is available for collision-induced radiations, such as those relevant to the study of small-x physics or for
two-scale observables described by TMD factorisation.
In Ref. <cit.>, it has been argued that a combined QCD+QED factorisation can be performed such that the exchanged photon momentum q is not fixed by the measured ℓ-ℓ', but rather has a range of values
to be integrated over. The range is determined by the observed momentum of the scattered lepton for inclusive DIS and the momenta of both the scattered lepton and the observed final-state hadron for
SIDIS.
The approach consists
of using collinear factorisation to take into account the collision-induced-QED-radiation effects which are enhanced by large logarithms of either Q/m_e, | P_T H|/m_e or |_T'|/m_e, while either collinear or TMD factorisation can be used
to account for QCD contributions depending on the hierarchy between the | P_T H-_T'| and the hard scale Q. For the SIDIS process
of eq:process:e-pA on a proton target, the hybrid factorisation formula is given by <cit.>:
E_ℓ^' E_P_ H dσ_ SIDIS/d^3ℓ^' d^3 P_ H ≈ ∑_a,b ∫_ζ_ min^1 dζ/ζ^2 D_e(ℓ^')/b(k')(ζ,μ_F^2) ∫_ξ_ min^1 dξ f_a(k)/e(ℓ)(ξ,μ_F^2)
× [E_k^' E_p_ H dσ^ap[a(k)+p(P) → b(k^')+ H(p_ H)+X]/d^3 k^' d^3 p_ H]_k=ξℓ, k^'=ℓ^'/ζ,
where a,b=e, e̅, γ
, and where the active lepton/photon momenta entering or leaving the hard collision
are defined as k=ξℓ and k'=ℓ'/ζ with collinear momentum fractions ξ and ζ, and μ_F is the factorisation scale. The process-independent lepton distribution functions (LDFs) f_a/e(ξ) and lepton fragmentation functions (LFFs) D_e/b(ζ)
in Eq. (<ref>) resum logarithmically-enhanced QED contributions in the limit when the hard scale, max (Q,| P_T H|,|_T'|), is much larger than m_e. The non-logarithmically-enhanced part of QED radiative corrections can be included into dσ^ap
order by order in powers of α_ em.
The differential cross section dσ^ap in the second line of eq:SIDIS-factor-QED can be further factorised by TMD or collinear factorisation in QCD, depending on whether or not the observed lepton and hadron are in the back-to-back regime. As demonstrated in Refs. <cit.>, the transverse-momentum broadening from the collision-induced QED radiations is much smaller than the TMD effects from QCD. Factorising out QED radiations using collinear LDFs and LFFs as done in eq:SIDIS-factor-QED is therefore a good approximation. eq:SIDIS-factor-QED is valid at Leading Power (LP), that is, up to power corrections scaling as the inverse of the hard scale. Note that the same kind of equation holds in the case of e-A collisions. Note also that eq:SIDIS-factor-QED does not account for possible hadronic/resolved contributions from the photon.
Due to the smallness of α_ em, σ^ap in eq:SIDIS-factor-QED can be approximated by its QED Born
order, σ^ap,(0)
with a=b=e. This lowest order cross section is
the same as the SIDIS cross section without QED radiation which can be parametrised in terms of the usual SIDIS structure functions <cit.> but with different kinematics: ℓ→ k=ξℓ and ℓ'→ k'=ℓ'/ζ. Consequently, the exchanged-virtual-photon momentum between the scattered lepton and the colliding hadron is modified as q=ℓ-ℓ' → k-k'=ξℓ - ℓ'/ζ.
By neglecting higher order QED
contributions to σ^ap in eq:SIDIS-factor-QED, the SIDIS cross section with the
collision-induced QED radiation can thus be obtained from the same SIDIS cross section without QED radiation plus the knowledge of the universal LDFs and LFFs.
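To make the structure of this hybrid factorisation more concrete, the following schematic Python sketch evaluates the double convolution over ξ and ζ numerically; the LDF, LFF and Born-level cross-section shapes are toy placeholders of our own and are not the actual resummed distributions discussed above.

```python
# Schematic numerical evaluation of the QED-hybrid factorisation structure:
#   sigma ≈ sum_{a,b} ∫ dzeta/zeta^2 D_{e/b}(zeta) ∫ dxi f_{a/e}(xi) sigma_hat^{ab}(xi, zeta).
# All functions below are TOY shapes used only to illustrate the convolution;
# real LDFs/LFFs resum ln(Q/m_e) terms and must be taken from the literature.
import numpy as np
from scipy.integrate import quad

def toy_ldf(xi):       # toy lepton distribution f_{e/e}(xi), peaked at xi -> 1
    return 50.0 * xi**20

def toy_lff(zeta):     # toy lepton fragmentation D_{e/e}(zeta), peaked at zeta -> 1
    return 50.0 * zeta**20

def toy_sigma_hat(xi, zeta, ell, ellp):
    # toy Born-level partonic cross section evaluated at k = xi*ell, k' = ellp/zeta
    shat = xi * ell * ellp / zeta
    return 1.0 / (1.0 + shat)

def sidis_xsec(ell=10.0, ellp=5.0, xi_min=0.2, zeta_min=0.2):
    inner = lambda zeta: quad(lambda xi: toy_ldf(xi) *
                              toy_sigma_hat(xi, zeta, ell, ellp),
                              xi_min, 1.0)[0]
    outer = quad(lambda zeta: toy_lff(zeta) / zeta**2 * inner(zeta),
                 zeta_min, 1.0)[0]
    return outer

print("toy hybrid-factorised cross section:", sidis_xsec())
```

Because both toy distributions are strongly peaked near unity, the convolution mostly returns the Born result evaluated at slightly rescaled momenta, which is precisely the qualitative effect of the collision-induced QED radiation described above.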
§.§ On the importance of b feed down
An important and subtle ingredient needed to understand the quarkonium-production mechanism is the knowledge of feed-down contributions. For instance, as shown in Ref. <cit.>, in the case of J/ψ photoproduction at HERA, not all the J/ψ are produced by the hard scattering. Indeed, a non-negligible fraction of the J/ψ mesons produced at large P_T comes from the decay of b-flavoured hadrons.
fig:H1-data-ratio-b-FD shows the fraction of J/ψ coming from such a b feed down (also referred to as non-prompt yield) as a function of P_T^2 in the H1 kinematics. We guide the reader to Appendix A of Ref. <cit.> for more information about how it was estimated.
One sees that the fraction of non-prompt J/ψ steadily grows to reach over 40% of the J/ψ yield at the highest reachable P_T ≲ 10 GeV.
Although the top energy of the EIC will be at most at √(s_ep) = 140 GeV,
given the much higher luminosity of the EIC compared to HERA, the W_γ p reach[
W_γ p = √(s_γ p) designates the energy in the centre-of-mass of the photon-proton system.] might be such that, at high P_T, similarly large non-prompt fractions could be observed.
In this respect, further dedicated studies are necessary.
§ EIC TOOLS FOR QUARKONIUM STUDIES
§.§ Quarkonium-production mechanisms
As aforementioned, to justify the application of perturbative QCD to the studies of identified hadron production, the observables
should
involve some scale μ≫Λ_ QCD, such that α_s(μ)≪ 1.
In such cases, the
cross section can
be factorised (up to power-suppressed corrections in μ) into a product or convolution of a short-distance part, which is meant to be computed perturbatively as a series in α_s(μ) and long-distance factors. The latter
comprise (TMD-) PDFs of incoming hadrons and non-perturbative quantities which describe the hadronisation of partons produced at the short-distance/perturbative stage of the process into an observed final-state hadron.
The treatment of hadronisation differs for hadrons containing light quarks in the “naive” quark-model picture of these states as opposed to quarkonia, the primary component of which is expected to be a
QQ̅ Fock state with the same quantum numbers as
quarkonium. In the case of hadrons composed of light quarks or heavy-flavoured hadrons like D and B mesons, commonly denoted H_Q in this review, in which relativistic (“light”) degrees of freedom play an important role, the hard-scale μ is ∼ p_T ≫ m_ H_Q and the “final-state” long-distance part of the cross section is usually
encapsulated in a fragmentation function (FF). Due to the importance of light degrees of freedom, the FFs of such hadrons can not be computed perturbatively and they are parametrised at some starting scale
μ_0, on the order of 1 GeV, with parameters fitted to reproduce experimental data, see e.g.
<cit.> and <cit.> for fits of respectively light and heavy-flavoured hadrons.
For hadrons containing two tightly-bound heavy quarks, such as “standard” charmonia (η_c, J/ψ, χ_c, ψ(2S), …) and bottomonia (η_b, Υ(nS), χ_b, …), denoted hereafter by 𝒬, a deeper understanding of hadronisation is believed to be possible. The overall success of non-relativistic potential models in the description of the mass spectrum of these states implies that the contributions of QCD Fock states containing gluons or light quarks are suppressed by powers of the average velocity v of the heavy quarks in the bound state compared to that of the simplest Fock state with only one heavy QQ̅ pair. The typical squared velocity v^2 is estimated in potential models to be ∼ 0.3 for charmonia and ∼ 0.1 for bottomonia, which turns it into a useful small parameter with respect to which the observables can be expanded.
The different existing models of quarkonium production <cit.> follow the above observation more or less closely, which leads to somewhat disparate predictions for some production observables. We review below the main features of three of the most popular ones, which will accompany us throughout this review.
§.§.§ NRQCD & CSM
In the non-relativistic QCD (NRQCD) factorisation formalism <cit.>, the cross sections and decay rates are expanded in powers of α_s(μ) and v^2. At each order of the v^2 expansion, the short-distance part of the observable describes the production or annihilation of the QQ̅-pair in a colour-singlet or colour-octet state with a particular value of spin, orbital and total angular momentum.
The hard scale, μ, for the short-distance part
can be the heavy-quark mass m_Q,
or any other larger scale not comparable to Λ_ QCD, justifying the perturbative calculation of this factor. The corresponding long-distance part
of the cross section is
a number called the Long-Distance Matrix Element (LDME)
which, for the production case, can be written up to conventional colour and spin normalisation factors, which we omit for the sake of clarity, as:
⟨ O^𝒬[i] ⟩ ∝ ∑_X_s ⟨ 0 | ( O_i^† Y_n^†)^ab(0) | 𝒬+X_s ⟩⟨ 𝒬+X_s | ( Y_n O_i)^ba(0) | 0 ⟩,
where it is implied that any final state X_s containing light quarks and gluons can be produced together with the quarkonium .
The factors Y_n in
eq:LDME_def contain Wilson lines along the light-like direction n needed for the gauge invariance of the Colour-Octet (CO) LDMEs. The structure of the colour indices, ab, connecting the amplitude and complex-conjugate amplitude in eq:LDME_def reflects the process-dependent configuration of the Wilson lines in the factors Y_n. The local NRQCD operators O_i contain heavy-quark and antiquark fields[Denoted as χ and ψ in NRQCD.] and are labelled in the same way as the simplest Fock state | QQ̅[i] + X_s ⟩ which this operator can excite from the vacuum. The spectroscopic notation of the label i=^2S+1L_J^[1,8] is used to denote the total spin S, the orbital angular momentum L, the total angular momentum J and the singlet (CS, ^[1]) or octet (CO, ^[8]) colour quantum numbers of the heavy-quark pair. With these conventions, the complete traditional notation for the LDME becomes: ⟨ O^[ ^2S+1L_J^[1,8]] ⟩.
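Purely as a notational aid, the short helper below (a hypothetical utility of our own, not part of any public quarkonium code) unpacks spectroscopic labels such as ^3P_0^[8] into the corresponding S, L, J and colour quantum numbers, mirroring the conventions just introduced.

```python
# Illustrative helper: decode an NRQCD state label "2S+1 L_J ^[1,8]" written as,
# e.g., "3S1[8]" or "1S0[1]", into (S, L, J, colour). Hypothetical utility only.
L_LETTERS = {"S": 0, "P": 1, "D": 2}

def parse_nrqcd_state(label: str):
    # label format assumed here: "<2S+1><L letter><J>[<1 or 8>]", e.g. "3P2[8]"
    mult = int(label[0])            # 2S+1
    L = L_LETTERS[label[1]]         # orbital angular momentum
    J = int(label[2])               # total angular momentum
    colour = "singlet" if label.endswith("[1]") else "octet"
    S = (mult - 1) // 2
    return {"S": S, "L": L, "J": J, "colour": colour}

for state in ["3S1[1]", "1S0[8]", "3S1[8]", "3P0[8]"]:
    print(state, "->", parse_nrqcd_state(state))
```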
NRQCD velocity-scaling rules <cit.> lead to the assignment of the O(v^m) suppression to LDMEs, thus allowing us to truncate the velocity
expansion at some fixed order in v^2. Usually the contributions associated with the LDMEs up to Next-to-Next-to-Leading Order (NNLO)
in v^2 (O(v^4) relative to the LDME of the ^3S_1^[1] state) are taken into account in phenomenological studies.
This means that, besides the colour-singlet QQ̅ states,
the colour-octet states ^1S_0^[8], ^3S_1^[8] and ^3P_J^[8] can contribute to J/ψ production, for example.
For S-wave quarkonia, the expansion limited to the leading order in v^2 corresponds to the colour-singlet QQ̅ state with the same quantum numbers as those of the quarkonium 𝒬.
The colour-singlet model (CSM)
<cit.> for the production of these states is nothing but the truncation of the v^2 expansion at this order. The CS LDMEs can be estimated from potential-model wave functions <cit.>, while their accurate estimation from
ℓ^+ℓ^- decay rates of the quarkonium is rendered complicated by large NNLO QCD corrections <cit.> to the decay width. However, the CSM is not sufficient theoretically <cit.> for the description of the production of the P-wave quarkonia, such as χ_c,b, beyond LO in α_s and cannot describe the inclusive hadroproduction p_T spectra of charmonia and bottomonia at high p_T <cit.>. Nevertheless, the NNLO corrections in α_s to the short-distance part of the CSM cross section, only partially computed so far, may decrease the existing large discrepancy between the CSM and the data from the Tevatron and the LHC <cit.>. This point is still under debate <cit.>.
In contrast to the hadroproduction case described above, in
(prompt) inclusive photo- and electroproduction of heavy quarkonia, which are relevant for the EIC experimental program, the CSM has been expected <cit.> and proven to be able to account
for a large fraction of the observed cross section <cit.> even up to the highest reachable P_T. Estimates varying from 50% <cit.> to almost 100% <cit.> can be found in the literature.
For the exclusive photo- and electroproduction of single J/ψ or Υ(nS), the CS contribution is
also expected to be strongly dominating. In
such exclusive reactions, no final-state radiation (X_s) is allowed and the NRQCD operators containing a CO QQ̅ pair can only couple
to the higher Fock-state contributions in the expansion of the physical quarkonium eigenstate, which are velocity-suppressed, e.g. |J/ψ⟩= O(1)|cc̅[^3S^[1]_1]⟩ + O(v)|cc̅[^3P^[8]_J]+g⟩+… The matrix elements of the gauge-invariant CO operators which in principle can contribute to exclusive photoproduction, e.g. ψ^† (g_s E· D) χ where E is the chromoelectric field and D is the QCD covariant derivative, can be estimated[The scaling for D is O(v) and the scaling for g_s E is O(v^3) so together with the O(v) suppression of the |cc̅[^3P^[8]_J]+g⟩ component of |J/ψ⟩, one obtains O(v^5).] to scale at least as O(v^5) at the level of the amplitude using the velocity scaling rules <cit.>. In Ref. <cit.>, the same conclusion
has been made about the CO contributions to the matrix elements of the operator ψ^† D^2 χ = ψ^†∇^2 χ + ψ^† (g_s A·∇) χ + …, which are more suppressed
than the CS relativistic corrections ⟨ J/ψ | ψ^†∇^2 χ |0⟩∼∇^2 Ψ(0)∼ O(v^2). Therefore, taking into account CS relativistic corrections to exclusive vector-quarkonium photoproduction is currently considered to be more important <cit.> than taking into account the CO corrections.
Another success <cit.> of the CSM at NLO in α_s is the description of the prompt η_c hadroproduction, measured by LHCb <cit.>.
However, such a success of the CSM in describing this data set, both at moderate P_T ∼ m_η_c and for P_T ≫ m_η_c, is problematic for NRQCD. Indeed, from heavy-quark-spin-symmetry (HQSS) arguments, one expects the CO contributions to the η_c cross section at P_T ≫ m_η_c to be of the same order of magnitude as those previously found to describe the J/ψ data at similar P_T.
As
aforementioned, at higher orders in the v^2 expansion, the CO LDMEs contribute, but at present they are treated as free parameters and are adjusted to describe experimental data. Besides order-of-magnitude constraints from O(v^n) scaling and HQSS constraints, the progress on their theoretical calculation has been limited so far. Recently new expressions for LDMEs in terms of potential-model
quarkonium wave functions and certain chromoelectric-field correlators have been proposed in the potential-NRQCD (pNRQCD) formalism in the strongly coupled regime <cit.>. These relations can be used to reduce the number of free parameters in the fit under the assumption m_Q v^2 ≪ Λ_ QCD. Currently, the advantage of using pNRQCD compared to conventional NRQCD fits is still under debate, as is its applicability, since m_Q v^2 is naively not much smaller than Λ_ QCD.
In
Section <ref>, we describe existing phenomenological fits of LDMEs within collinear factorisation, commenting on their successes and shortcomings in more detail. Unfortunately, at present there is no single set of LDMEs which can
satisfactorily describe the charmonium e^+e^- annihilation, γγ fusion, hadro- and photoproduction data together with polarisation observables in the framework of NRQCD factorisation at NLO in α_s, which is a serious problem for the NRQCD factorisation approach. For the case of bottomonia, we lack
photoproduction, e^+e^- annihilation and γγ fusion data, which prevents us from checking the process-independence of LDMEs for the bb̅ family. Another important task for the EIC,
in connection with the clarification of the quarkonium-production mechanism, is to perform the first measurement of χ_c0,1,2 and η_c inclusive photoproduction cross sections.
In this context, we discuss corresponding phenomenological predictions in Section <ref>. Such measurements will be complementary to
those of χ_c and η_c hadroproduction to check the process-independence of the corresponding LDMEs.
Data at high P_T ≫ m_𝒬, where CS and CO contributions behave differently, are potentially very discriminant for LDME fits. This calls for an improvement of the perturbative accuracy of the short-distance part since, at large P_T, terms proportional to α_s^n+k ln^n(P_T/m_ H) appearing in the perturbative series for the short-distance part of the cross section, both at LP in P_T and in power-suppressed corrections at P_T ≫ m_𝒬, need to be tackled.
These potentially large terms can be resummed using the formalism of FFs, perturbatively evolving with the scale μ ∼ P_T. At LP, this formalism is analogous to the FFs for light hadrons mentioned at the beginning of this section, with a sole but important difference, namely that at the starting scale μ_0 ∼ m_𝒬 the FF is assumed to be factorised into a short-distance part and an LDME. We refer to e.g. <cit.> as an example of an NLO study of this type, as well as to Refs. <cit.> at LO. At Next-to-Leading Power (NLP), new contributions with the QQ̅ pair as a whole participating in the fragmentation process appear <cit.>. These corrections seem to influence not only the cross section but also the evolution of the leading-power FFs <cit.>. However, the effect of these corrections on cross sections and on the polarisation is still under investigation, in particular for the EIC phenomenology where the P_T reach, limited to roughly 15-20 GeV, might not be large enough for these to be relevant.
§.§.§ CEM & ICEM
Given the above-mentioned phenomenological problems, along with others which we review later, NRQCD factorisation at fixed order in v^2 and α_s is not completely satisfactory. Due to its simplicity, the Colour Evaporation Model (CEM), introduced in Refs. <cit.>, remains an attractive alternative mechanism to explain the formation of quarkonium. As the CEM is inspired from quark-hadron duality, one postulates that any QQ̅ pair produced at short distance with invariant mass M_QQ̅ less than the invariant mass of a pair of lightest mesons (H_Q) with open-heavy flavour Q (e.g. D^0 mesons in the case of charmonia) has to hadronise into one of the quarkonia below this heavy-flavour-production threshold with some universal probability. In the CEM, this probability, commonly denoted as F_𝒬 for the quarkonium state 𝒬, is taken to be independent of the spin, orbital-momentum and colour quantum numbers of the pair, and is fitted as a free parameter.
In the improved CEM (ICEM) <cit.>, the kinematic effects arising from the mass difference between the QQ̅ pair produced at short distance and the final-state quarkonium are taken into account, which roughly models the effects of soft-gluon emissions at the hadronisation stage. This is done through the rescaling of the three-momentum of the pair by the mass ratio, so that the direct quarkonium-production cross section in pp collisions in the ICEM is given by <cit.>:
σ_𝒬 = F_𝒬∑_i,j∫_M_𝒬^2m( H_Q) dM_QQ̅ dx_i dx_j f_i(x_i,μ_F)f_j(x_j,μ_F)·σ̂_ij→ QQ̅(x_i,x_j, p_QQ̅,μ_R,μ_F) |_ p_QQ̅ = (M_QQ̅/M_𝒬) P_𝒬 ,
where i and j are q, q̅ and g such that ij = qq̅, qg, q̅g or gg,
x_i,j is the momentum fraction of the parton, f(x_i,j,μ_F) is the parton distribution function (PDF) in the proton as a function of x_i,j at the factorisation scale μ_F. Finally, σ̂_ij→ QQ̅ are the parton-level cross sections for the initial states ij to produce a QQ̅ pair of momentum p_QQ̅ at the renormalisation scale μ_R.
In the ICEM, the invariant mass of the QQ̅ pair, M_QQ̅, is integrated from the physical mass of the quarkonium, M_𝒬, to twice the mass of the lightest open heavy-Q-flavour meson, m( H_Q). In the traditional CEM, see e.g. <cit.>, the value of 2m_Q is used as the lower limit of the mass integration instead of M_𝒬, and the momentum shift due to the mass difference between the QQ̅ pair and the quarkonium is neglected.
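The ICEM master formula is essentially a three-dimensional integral with a kinematic rescaling of the pair momentum; the sketch below illustrates that structure with toy PDFs, a toy partonic cross section and an arbitrary F_𝒬, and is not meant to reproduce any ICEM fit.

```python
# Schematic ICEM cross section with TOY ingredients only:
#   sigma_Q = F_Q * sum_{ij} int_{M_Q}^{2 m(H_Q)} dM_QQbar int dx_i dx_j f_i f_j sigma_hat_ij ,
# with the pair momentum rescaled by M_QQbar / M_Q when it is mapped onto the quarkonium.
import math
from scipy import integrate

M_Q, M_D = 3.097, 1.865      # GeV: charmonium mass and m(D0); threshold = 2 m(D0)
F_Q = 0.02                   # toy universal hadronisation probability (fit parameter)
S_HH = 1.0e4                 # toy hadronic centre-of-mass energy squared, (100 GeV)^2

def toy_pdf(x):              # toy gluon PDF shape, arbitrary normalisation
    return x**-1.5 * (1.0 - x)**5

def toy_sigma_hat(x1, x2, M, pT_Q):
    p_pair = (M / M_Q) * pT_Q          # ICEM rescaling p_QQbar = (M_QQbar/M_Q) * P_Q
    shat = x1 * x2 * S_HH
    # smooth toy threshold suppression instead of a sharp cut at shat = M^2
    return math.exp(-M**2 / shat) / (shat * (1.0 + p_pair**2 / M**2))

def icem_xsec(pT_Q):
    integrand = lambda M, x1, x2: toy_pdf(x1) * toy_pdf(x2) * toy_sigma_hat(x1, x2, M, pT_Q)
    val, _ = integrate.tplquad(integrand,
                               1e-3, 1.0,        # x2 range
                               1e-3, 1.0,        # x1 range
                               M_Q, 2.0 * M_D)   # M_QQbar range
    return F_Q * val

print("toy ICEM cross section at P_T = 5 GeV:", icem_xsec(5.0))
```

The only ICEM-specific ingredients here are the integration range M_𝒬 < M_QQ̅ < 2m(H_Q), the universal multiplicative F_𝒬 and the momentum rescaling; everything else would be replaced by real PDFs and partonic matrix elements in an actual calculation.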
We emphasise that the physical picture of the (I)CEM is opposite to NRQCD in the sense that the CS contributions play no special role at all. This assumption makes CEM incapable of describing observables where CS states are clearly dominating, e.g. the prompt hadroproduction of J/ψ pairs <cit.> and the e^+e^-→ J/ψ + cc̅ cross section <cit.>.
However, the (I)CEM still provides a reasonable description of single-inclusive prompt-quarkonium hadroproduction <cit.>, although the model is not capable of describing the P_T ∼ m_𝒬 and P_T ≫ m_𝒬 regions simultaneously, even at NLO <cit.>.
Recent ICEM calculations <cit.> have considered the quarkonium polarisation in hadroproduction. Polarised production of quarkonium in these calculations restricts the final-state quark-antiquark pair to be in the desired spin state, thus implicitly assuming that soft gluons are decoupled from the heavy-quark spin. The polarisation parameters are then calculated in terms of the spin matrix elements σ_i_z,j_z. In these matrix elements, the quarkonium is assumed to have J_z = i_z when calculating the scattering matrix element, ℳ, and J_z = j_z when calculating its conjugate, ℳ^*. The polar anisotropy (λ_θ), defined in Eq. (<ref>), is given in this model by <cit.>
λ_θ = (σ_+1,+1-σ_0,0)/(σ_+1,+1+σ_0,0) .
As the ICEM is an alternative to NRQCD in hadroproduction, developments to extend it into other collision systems are still in progress.
The authors of <cit.> anticipate that the value of λ_θ for 𝒬 production in ep collisions will also be very similar to the pp case, which they found to be compatible with the existing Tevatron and LHC data. In addition, they also find the free parameter F_𝒬 in photoproduction to be consistent with that in hadroproduction. The description of the HERA H1 data <cit.> on J/ψ photoproduction in the ICEM <cit.> is illustrated in Fig. <ref>. However, the (I)CEM prediction introduces a parameter to keep the propagator at some minimum distance of M_ψ^2 from the pole. Thus, its prediction of the z-differential spectrum in photoproduction is likely to be complicated by large radiative corrections at 1-z ≪ 1 if the parameter is removed, as was already seen in the LO analysis of Ref. <cit.>, where the agreement with data at z → 1 was reached only after the introduction of an ad-hoc cut |t̂| > 4m_c^2 on the partonic t̂ variable.
The observation that the CEM leads to unpolarised heavy-quarkonium hadroproduction at high P_T <cit.>, a result which is non-trivial to achieve with NRQCD fits, perhaps means that, in cases where CO LDMEs dominate, the dynamics of soft-gluon emissions should be taken into account more accurately than is done in fixed-order NRQCD factorisation. The recently proposed soft-gluon factorisation approach represents progress in this direction <cit.>; its phenomenological implications, however, remain to be investigated.
§.§ Legacy from HERA, the Tevatron and the LHC, and predictions for the EIC for cross-section and polarisation observables
§.§.§ Status of NRQCD LDME fits
A side note on the positivity of the LDMEs beyond LO.
Before discussing the NLO LDME fits, let us make a comment about the positivity of LDMEs. At LO in α_s, the LDMEs have a simple interpretation as “probabilities” of the transition of the QQ̅-pair in a certain colour, spin and angular-momentum state into an observed quarkonium. This physical interpretation follows from the operator definition of LDMEs (<ref>) in terms of “bare” fields <cit.> if QCD loop corrections are not taken into account and Wilson-line factors are ignored. Consequently, in LO calculations, LDMEs are typically assumed to be positive-definite. This is similar to the situation with LO (TMD)PDFs.
Already at NLO in α_s, both ultraviolet (UV) and infrared (IR)
divergences appear in the operator definitions of LDMEs, see e.g. Appendix B of Ref. <cit.> as well as Section 6 of Ref. <cit.> and references therein. If NRQCD factorisation holds – which is yet to be proven beyond NNLO in α_s <cit.> – the IR divergences of the hard-scattering coefficients should cancel against the corresponding IR divergences of the LDMEs at all orders of the v^2 and α_s expansions, while the UV divergences appearing in LDMEs are removed by the operator renormalisation. The renormalised LDMEs then become non-perturbative fit parameters. Therefore, these parameters do not necessarily have to be positive. Their definition involves the subtraction of the divergent part. In addition, the finite renormalised LDMEs are scheme- and scale-dependent, and mix with each
other due to the NRQCD-scale evolution. The relation between the short-distance cross section and LDMEs, described above, is similar to the relation between NLO short-distance cross sections and QCD PDFs and/or fragmentation functions, which are also not necessarily positive-definite, at least if the calculation is truncated to a fixed order in α_s. This is the reason why there are usually no positivity constraints imposed in NLO LDME fits. One of the consequences of this is that the numerical values of LDMEs obtained in fits at NLO in α_s have limited physical significance outside the NLO context and should only cautiously be used in LO calculations, because this could create unjustifiable cancellation between some contributions.
In general though, it is not clear that negative NLO LDMEs would yield positive NLO cross sections for all possible measurable processes one could think of. Let us for instance mention the case of quarkonium-photon associated production for which it was shown <cit.> that some of the NLO LDME fits which we discuss below would yield negative NLO cross sections. Such a physical constraint on LDMEs at NLO has however not been systematically investigated as it requires the complete NLO computation of the hard scatterings for all the processes one wishes to consider.
Survey of existing NLO LDME fits.
Several groups have performed fits of CO LDMEs for charmonia <cit.> and bottomonia <cit.> at NLO in α_s for the short-distance parts. We emphasise that the computation at NLO in α_s of short-distance cross sections for the production of NRQCD states (QQ̅[i]) is done in exactly the same framework of collinear factorisation by most of the groups with the exception of the fit of Bodwin et al. <cit.>. The latter computation includes, beside corrections at NLO in α_s, the resummation of logarithms of P_T/m_ which become important at P_T≫ m_. Therefore the difference of the fits boils down mostly to the choice of different experimental data to fit and approximate (up to higher-orders in v^2) relations between different LDMEs which are assumed or not to hold exactly in the fitting procedure. For a detailed discussion, we refer to the recent review <cit.>. tab:LDME-fits_comp briefly compares phenomenological results of
each fit for the case of charmonia using benchmark observables such as the
cross sections and polarisation of inclusive prompt J/ψ produced in pp collisions as a function of P_T, as well as photoproduction in ep collisions and the total cross section of charmonium production in e^+e^- annihilation.
We also
indicate in tab:LDME-fits_comp whether the corresponding set of LDMEs for J/ψ allows one to describe the prompt η_c hadroproduction
P_T-spectrum measured by LHCb <cit.> using
heavy-quark-spin-symmetry relations between η_c and J/ψ LDMEs which hold up to v^2 corrections.
The P_T spectra of the prompt inclusive quarkonia produced in pp and pp̅ collisions at mid and large P_T at the Tevatron and the LHC are well described by all the fits mentioned in tab:LDME-fits_comp; this is the major phenomenological success of NRQCD factorisation at NLO. Note, however, that hadroproduction data with P_T ≲ m_𝒬 (or integrated in P_T <cit.>) cannot be simultaneously described by NLO NRQCD fits of large-P_T data.
In fact, most of the fits have been performed with even stronger cuts, as indicated in tab:LDME-fits_comp.
The only existing global NLO LDME fit <cit.>, BK11, beyond hadroproduction, also provides a reasonable description of unpolarised charmonium production cross sections in e^+e^-, pp, pp̅ and ep collisions. The description of the HERA H1 data <cit.> on J/ψ photoproduction by the BK11 fit is illustrated in fig:LDME-fits_vs_H1-data_pT(a) and fig:LDME-fits_vs_H1-data_z(a). However, this fit is not able <cit.> to describe the charmonium-polarisation observables measured in hadroproduction at high P_T, see e.g. Ref. <cit.> for a global survey of heavy-quarkonium-polarisation data.
This situation is often referred to as the “heavy-quarkonium-polarisation puzzle” in the literature. Polarisation observables relevant for J/ψ production at the EIC will be discussed in Section <ref>.
Two of the fits in tab:LDME-fits_comp, H14 and Z14,
turned out to be able to simultaneously describe J/ψ and η_c hadroproduction data using heavy-quark-spin-symmetry
relations between LDMEs. Remarkably, the J/ψ-polarisation observables in hadroproduction are also reasonably well reproduced by these fits, but they significantly overestimate the HERA photoproduction cross section, as can be seen in fig:LDME-fits_vs_H1-data_pT(b,c). The same holds for all the other LDME fits (with the exception of BK11 discussed above), see fig:LDME-fits_vs_H1-data_pT(d-g). The discrepancies between the NLO NRQCD predictions with these fits and the data range from a factor of 2 at P_T ≃ 10 GeV up to a factor of 10 at P_T ≃ 1-2 GeV in the case of pNRQCD and B14. This means that the yield predictions at the EIC using these LDMEs can be overestimated by up to one order of magnitude. Since the discrepancies remain at P_T = 10 GeV, which roughly corresponds to the maximum values which would be reached at the EIC, this should be kept in mind when considering predictions with CO contributions (except for the BK11 LDMEs) for the EIC case at any P_T.
As one can see from fig:LDME-fits_vs_H1-data_z, all LDME fits except BK11 also strongly overestimate the z-differential cross section for z > 0.6. The BK11 fit is consistent with the photoproduction data due to the cancellation between the ^1S_0^[8] and ^3P_J^[8] channels. Other fits use this degree of freedom to accommodate the polarisation and/or η_c production data and therefore lose the flexibility needed to achieve a global fit across different collision systems.
In a recent study <cit.>, the NRQCD cross sections of J/ψ+Z and J/ψ+W hadroproduction have been completely calculated at NLO. Interestingly, the only set of LDMEs found to be marginally capable of reproducing the J/ψ+Z hadroproduction data from the LHC is the set of
Refs. <cit.>, referred to in tab:LDME-fits_comp as “pNRQCD”. This fit uses potential-NRQCD relations between LDMEs to reduce the number of free parameters in the fit of the J/ψ P_T spectrum in hadroproduction and also describes the polarisation observables. However, this set of LDMEs is not able to describe the J/ψ photoproduction and e^+e^- annihilation data, and is consistent with the η_c hadroproduction data only within large uncertainties and with a P_T threshold for the J/ψ data larger than that for the η_c data. As just discussed, the pNRQCD fit, like all the hadroproduction-only fits, badly fails to account for the J/ψ-photoproduction data from the H1 collaboration at HERA, as shown in fig:LDME-fits_vs_H1-data_pT(g) and fig:LDME-fits_vs_H1-data_z(g), which casts doubt on its relevance for EIC predictions.
Several fits of CO LDMEs for bottomonia have also been performed at
NLO <cit.>. Only the most recent one <cit.> considered the Υ(1,2,3S) and χ_bJ(1,2P) LDMEs independently and systematically included the feed-down contributions from Υ(nS) and χ_bJ(nP) states with larger masses. These feed-down contributions constitute ∼ 40% of the Υ(1S, 2S) cross section, which is significant. In the case of the Υ(3S), the feed down from the χ_b(3P) states, which were discovered by ATLAS <cit.> and which lie just below the BB̅ threshold, also turns out to be significant (see <cit.> for a more detailed discussion of the feed-down impact). This was, however, not taken into account in <cit.>, which may explain the difficulties of the corresponding fit in accounting for the Υ(3S) polarisation. The polarisation observables for the Υ(1S,2S) states turned out to be roughly consistent with the data in this fit. We guide the reader to the recent Ref. <cit.> for a detailed discussion of the agreement with various polarisation observables. Note that there are no bottomonium data from inelastic photoproduction nor from e^+e^- annihilation. Hence, future measurements of Υ(nS) inclusive electro- and photoproduction at the EIC will serve as an excellent test of the LDME process-independence in the b-quark case, where it has more chances to hold due to smaller O(v^2) corrections.
§.§.§ Recent developments regarding inclusive J/ψ photoproduction within the CSM
New P_T-enhanced contributions.
The recent study of Ref. <cit.>, performed within the CSM, is interesting regarding corrections which
were not included in the NLO NRQCD analyses presented above, although they could become important at P_T ≫ M_J/ψ. The study focused on the leading-P_T, leading-v, next-to-leading-α_s corrections, within the NLO^⋆ approximation <cit.>. The latest HERA data from the H1 Collaboration <cit.> were first revisited by including new contributions such as the pure QED one (γ + q → γ^⋆ + q → J/ψ + q at 𝒪(α^3), where the off-shell photon γ^⋆ fluctuates into a J/ψ) and the associated J/ψ + charm production (γ + g → J/ψ + c + c̅ and γ + {c,c̅} → J/ψ + {c,c̅}). The former involves quark PDFs in the initial state, while the latter is described within a LO Variable Flavour Number Scheme (LO-VFNS) <cit.>. It was shown that the CSM at 𝒪(αα_s^3) and 𝒪(α^3) is able to describe the latest HERA data at large P_T. The NLO corrections to γ + g → J/ψ + c + c̅ were recently computed <cit.> and were found to increase the cross section by a factor close to 2 in the HERA kinematics.
The corresponding predictions for the P_T(J/ψ) spectrum in photoproduction at the EIC are shown in fig:EIC-CT14NLO-NLOstar with kinematical cuts on Q^2, the elasticity z, and W_γ p ≡ √(s_γ p) inspired from the latest H1 measurements. The CT14NLO proton PDF set <cit.> was used. The factorisation and renormalisation scales were taken to be μ_F = μ_R = m_T = √(M_J/ψ^2 + P_T^2), the transverse mass of the J/ψ (called m_T,J/ψ below), and the corresponding uncertainties were evaluated by varying them in the interval μ_F, μ_R ∈ [1/2, 2] × m_T. The charm mass m_c was set to 1.5 GeV and the corresponding mass uncertainty was evaluated by varying it by ± 0.1 GeV. Moreover, the CS LDME ⟨𝒪^J/ψ[^3S_1^[1]] ⟩ was taken to be 1.45 GeV^3. Finally, a 20% ψ^' → J/ψ feed-down was taken into account.
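The scale- and mass-variation prescription just described can be mimicked in a few lines; in the sketch below, only the envelope-building logic (μ_F, μ_R ∈ [1/2, 2] × m_T and m_c varied by ± 0.1 GeV) reflects the procedure quoted above, while the toy cross-section shape is a placeholder of ours and not a CSM result.

```python
# Schematic construction of a scale/mass uncertainty band: evaluate a toy cross
# section for mu_F, mu_R in [0.5, 2] x m_T and m_c in {1.4, 1.5, 1.6} GeV, then
# take the envelope. toy_xsec is an arbitrary smooth function, not a CSM formula.
import itertools
import math

def toy_xsec(pT, muF, muR, mc=1.5):
    mT = math.hypot(2.0 * mc, pT)             # crude stand-in for the J/psi transverse mass
    alpha_s = 1.0 / math.log(muR**2 / 0.04)   # toy running coupling, Lambda^2 = 0.04 GeV^2
    return alpha_s**3 / (pT**2 + mT**2) * (1.0 + 0.2 * math.log(muF / mT))

def envelope(pT):
    mT = math.hypot(3.0, pT)
    values = [toy_xsec(pT, kF * mT, kR * mT, mc)
              for kF, kR, mc in itertools.product([0.5, 1.0, 2.0],
                                                  [0.5, 1.0, 2.0],
                                                  [1.4, 1.5, 1.6])]
    central = toy_xsec(pT, mT, mT, 1.5)
    return central, min(values), max(values)

for pT in [2.0, 5.0, 10.0]:
    c, lo, hi = envelope(pT)
    print(f"P_T = {pT:4.1f} GeV: central = {c:.3e}, band = [{lo:.3e}, {hi:.3e}]")
```

Note that actual studies often exclude the extreme combinations with μ_F/μ_R = 4 or 1/4; the full 3 × 3 grid is used here only for simplicity.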
In fig:EIC-CT14NLO-NLOstar, predictions for two energy configurations are presented. At √(s_ep) = 45 GeV (fig:EIC-CT14NLO-NLOstar, left), as P_T increases, one enters the valence region. This makes the QED contribution become the dominant one at the largest measurable P_T ≃ 11 GeV, with an integrated luminosity of ℒ = 100 fb^-1. Furthermore, γ + q fusion contributes more than 30% for P_T > 8 GeV and the J/ψ + unidentified-charm contribution is comparable to the γ + g(q) fusion subprocesses. Hence, these so-far-overlooked contributions are going to be relevant at the EIC. At √(s_ep) = 140 GeV (right panel of fig:EIC-CT14NLO-NLOstar), the yield is measurable up to P_T ∼ 18 GeV. The QED contribution is the leading one at the largest reachable P_T, while γ + g fusion is the dominant contribution up to P_T ∼ 15 GeV. More generally, it turns out that the production of a J/ψ with 2 hard partons (J/ψ + {gg, qg, cc̅}) is dominant for P_T ∼ 8 - 15 GeV. This could lead to the observation of J/ψ + 2 jets with moderate P_T, with the leading jet_1 recoiling against the J/ψ + jet_2 pair.
High-energy-enhanced contributions.
In order to study the possible effects of higher-order QCD corrections enhanced by logarithms of the partonic centre-of-mass energy (ŝ), the Leading-Twist (LT) High-Energy Factorisation <cit.> (HEF) can be used. In many phenomenological studies, it is generalised to include not only the resummation of ln(ŝ/M^2_𝒬)-enhanced effects in the leading-logarithmic approximation, but also the resummation of the “Sudakov” ln(M_𝒬/P_T) large logarithms at P_T ≪ M_𝒬 in the next-to-leading-logarithmic approximation, assuming CS state production, through the use of the Kimber-Martin-Ryskin-Watt (KMRW) formula <cit.>. However, the systematic study of the overlap between LT HEF and the TMD factorisation usually employed to resum such transverse-momentum logarithms has been initiated only very recently <cit.>. The KMRW formula converts the set of usual collinear PDFs into the so-called unintegrated PDFs (uPDFs) of the LT HEF formalism. uPDFs depend not only on the longitudinal momentum fraction, x, but also on the transverse momentum of the parton. These objects can transfer transverse momenta comparable to, or even larger than, M_𝒬 to the final state. This is indeed possible in the Regge limit ŝ ≫ M_T^2.
Section 4.3 of
Ref. <cit.>.
It has been shown earlier <cit.> that the phenomenological framework based on HEF with KMRW uPDF is capable of reproducing the J/ψ photoproduction data from HERA. This is already the case with the HEF coefficient function computed at LO in α_s and in the CS approximation of NRQCD, as illustrated by the left panel of
fig:HEF_H1-2010_EIC obtained with the version of the KMRW uPDF introduced in Ref. <cit.>. We note that the transverse-momentum integral of the uPDF exactly reproduces the input gluon PDF. The
precise
fulfilment of this normalisation condition both at x≪ 1 and x∼ 1 is important to avoid contradictions between LT HEF and NLO
Collinear Factorisation (CF)
predictions for the J/ψ prompt hadroproduction spectrum in pp collisions at low energies, in particular at √(s_pp)=24 GeV for the planned Spin-Physics-Detector experiment at the NICA facility <cit.>.
From fig:HEF_H1-2010_EIC (left), one can see that there is still some room for additional contributions on top of the LO CS contribution from the fusion of a photon and a Reggeon, γ(q)+R(x_1, q_T1) → cc̅[^3S_1^[1]]+g. These could be from the cc̅[^3S_1^[1]]+c process considered above (see fig:EIC-CT14NLO-NLOstar) and from CO contributions. The large scale uncertainty of the LO HEF prediction, shown in fig:HEF_H1-2010_EIC, comes from the variation of μ_R and μ_F around their default value of M_T,J/ψ. Clearly, this uncertainty has to be reduced via the inclusion of the NLO corrections to make such predictions more precise.
The comparison between LT HEF predictions for the EIC energy √(s_ep)=140 GeV
and the full NLO CF
CSM predictions (computed using FDC <cit.>) is shown in fig:HEF_H1-2010_EIC (right). The latter prediction is evaluated at a special value of the factorisation scale, μ_F=1.7 m_c, chosen <cit.> to minimise the NLO correction coming from the region ŝ ≫ M_J/ψ^2 (cf. Section <ref>). There is good agreement between these NLO predictions at the optimal scale and the LO HEF predictions at the default scale μ_R=μ_F=M_T,J/ψ: this indicates that the effects of the ln(ŝ/M_J/ψ^2) resummation can be reproduced by the optimal factorisation-scale choice at EIC energies and that the NLO CF prediction with the optimal scale is robust. At higher photon-nucleon collision energies, a matched calculation between LL HEF and NLO CF predictions, similar to that done in Ref. <cit.>, is necessary <cit.> to correctly capture the high-energy resummation effects at ŝ ≫ M_J/ψ^2 while staying at NLO accuracy for ŝ ∼ M_J/ψ^2.
§.§.§ Testing NRQCD factorisation at the EIC
Prompt J/ψ yields in photoproduction.
We plot in Figs. <ref> and <ref> the NLO NRQCD factorisation predictions for the
P_T- and z-differential photoproduction cross section of prompt J/ψ mesons in the EIC kinematic conditions. These predictions have been calculated using the short-distance cross sections of Refs. <cit.> and the LDME sets listed in tab:LDME-fits_comp. All LDME sets fitted only
to
the hadroproduction data predict a significantly (factor 3 to 6) higher J/ψ photoproduction cross section than the LDME set of
Table 1 of
Ref. <cit.>, denoted as “LDMEs Kniehl, Butenschön, fit # 1”
in Figs. <ref> and <ref>, which includes the photoproduction data from HERA. We also plot the predictions performed with another set of LDMEs from the same paper, denoted as “LDMEs Kniehl, Butenschön, fit # 2”. The latter set of LDMEs had been fitted
to
the prompt J/ψ hadro-
and photoproduction data corrected approximately for feed-down contributions from heavier charmonium states using constant feed-down fractions. For this fit, we calculate the feed-down contributions from χ_c0,1,2 and ψ(2S) decays to J/ψ using the χ_c LDMEs from Ref. <cit.>
and the fit for ψ(2S) LDMEs performed in
Ref. <cit.>. Calculating the feed-down contribution in this way is consistent with the treatment of feed-down in
Ref. <cit.>.
As expected, the predictions from both Kniehl-Butenschön LDMEs are reasonably close to each other. Yet, they differ from those obtained with the other LDME sets fit to hadroproduction data.
This is mostly because
the latter sets predict a more pronounced z→ 1 growth of the cross section (see fig:Jpsi-photoprod-z_LDMEs) than the global fit LDME sets of Ref. <cit.> which, when integrated over z, translates into larger differential cross sections. This increase is due to both the ^1S_0^[8] and ^3P_J^[8] CO states.
It is important to note that such a rapid increase of the spectrum towards z → 1 is not a feature of the HERA data. Including these data in LDME fits calls for a compensation between the contributions of the ⟨ O^J/ψ[ ^1S_0^[8]] ⟩ and ⟨ O^J/ψ[ ^3P_J^[8]] ⟩ LDMEs, resulting in different signs for these, as in the LDME sets of Ref. <cit.>. Therefore, the photoproduction data essentially fix the latter LDMEs and no longer allow one to adjust them to describe the polarisation observables in hadroproduction, which leads to the polarisation puzzle discussed above. The EIC measurements will allow us to check the robustness of this feature of NRQCD predictions against variation of the collision energy, since larger radiative corrections at z → 1 could be expected at the higher energies of the HERA collider.
The resolved-photon contribution manifests itself in the opposite region, z ≪ 1 (fig:Jpsi-photoprod-z_LDMEs), and EIC data are less sensitive to it than HERA data, again due to the lower collision energies. Therefore, a cleaner test of the process-independence of the LDMEs can be performed with EIC photoproduction data rather than with HERA data.
Prompt J/ψ yields in Q^2-integrated lepton-nucleon interactions.
Another possibility to study the contributions of various LDMEs is to consider single-inclusive production of J/ψ in ep collisions, without detecting the final-state electron, as was pioneered recently in Ref. <cit.>: e(ℓ)+h(p) → J/ψ(P)+X. The rapidity (y) and transverse-momentum (P_T) distributions of inclusive J/ψ production at the EIC are promising observables for both studying the production mechanism of heavy quarkonia and extracting
PDFs, in particular, the gluon PDF, complementary to other observables described in Section <ref>.
When the transverse momentum of the J/ψ, defined relative to the lepton-hadron collision axis, is much larger than the charm mass, P_T ≫ m_c, the perturbative hard coefficient functions for producing the cc̅ pair receive large higher-order QCD corrections that are enhanced by powers of ln(P_T^2/m_c^2). Such logarithmically-enhanced higher-order corrections can be systematically resummed and factorised into FFs <cit.>. On the other hand, when P_T ≳ m_c, the perturbative hard coefficients at a fixed order in α_s should be sufficient. In addition, the occurrence of a hard partonic collision producing the J/ψ with large transverse momentum P_T ≫ m_e necessarily induces multiple photon emissions from the incoming lepton, leading to large higher-order QED corrections enhanced by powers of ln(P_T^2/m_e^2). As we discussed in Section <ref>, these QED corrections can also be systematically factorised and resummed into universal LDFs <cit.>.
In order to predict the production rate of the J/ψ at the EIC, a new factorisation formalism, which takes into account both collision-induced QCD and QED radiation and provides a systematic transition from P_T ≳ m_c to P_T ≫ m_c, was introduced <cit.>.
The factorisation formula for the inclusive J/ψ production cross section is given by:
E_J/ψ dσ_eh→ J/ψ(P_J/ψ)X/d^3 P_J/ψ = ∑_a,b ∫ dx_a f_a/e(x_a,μ_F^2) ∫ dx_b f_b/h(x_b,μ_F^2)
× [ E_J/ψ dσ̃^ Resum_ab→ J/ψ(P_J/ψ)X/d^3 P_J/ψ + E_J/ψ dσ̃^ NRQCD_ab→ J/ψ(P_J/ψ)X/d^3 P_J/ψ - E_J/ψ dσ̃^ Asym_ab→ J/ψ(P_J/ψ)X/d^3 P_J/ψ ] ,
where indices a,b, in principle, run, respectively, over all lepton and parton flavors, but in practice, as an approximation, a takes into account only (e,γ,e̅). The functions f_a/e(x_a,μ_F^2) and f_b/h(x_b,μ_F^2) are the LDFs of an electron and the usual parton PDFs
respectively, depending on partonic momentum fractions, x_a and x_b. The LDFs satisfy the DGLAP-like μ_F-evolution equations mixing the QED and QCD splittings <cit.>. In Eq. (<ref>), the partonic cross sections σ̃_ab→ J/ψ(P_)X are computed with all the perturbative collinear singularities along the direction of colliding lepton (a) and parton (b) removed. These singularities are absorbed into f_a/e and f_b/h, respectively.
The cross section dσ̃^ Resum in eq:jpsi-lp-fac represents the partonic cross section with the ln(P_T^2/m_c^2) contributions being resummed, to describe the production rate for P_T ≫ m_c, as mentioned above. In σ̃^ NRQCD, the production of a cc̅[^2S+1L^[1,8]_J] state at the perturbative stage is computed at fixed order in α_s, and the corresponding non-perturbative formation of a J/ψ from the produced cc̅ pair is taken care of using the NRQCD velocity expansion and universal NRQCD LDMEs. This part of the cross section should provide a good description of the production rate when P_T ∼ m_c. Finally, σ̃^ Asym is equal to the fixed-order expansion of σ̃^ Resum to the same order in α_s as σ̃^ NRQCD. This last term is needed to remove the double counting between σ̃^ Resum and σ̃^ NRQCD. By including all three terms, this factorisation formalism can be applied to lepton-hadron and hadron-hadron collisions, as well as to e^+e^- collisions <cit.>, providing a smooth transition as the observed P_T increases from P_T ∼ m_c to P_T ≫ m_c.
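The matching logic of eq:jpsi-lp-fac (resummed plus fixed-order minus their overlap) can be illustrated schematically; the three P_T shapes below are toy parametrisations chosen by us solely to show how the subtraction removes the double counting and interpolates between the two regimes, and are not the actual σ̃ terms.

```python
# Schematic matched cross section: sigma = resummed + fixed-order NRQCD - asymptotic,
# where the asymptotic piece is the fixed-order expansion of the resummed one.
# All three P_T shapes are TOY functions for illustration only.
import numpy as np

mc = 1.5  # GeV

def sigma_resum(pT):   # toy resummed piece: full tower of logs, reliable at pT >> mc
    return 1.0 / pT**4 * (pT**2 / mc**2) ** 0.1

def sigma_asym(pT):    # its fixed-order truncation (the double-counted overlap)
    return 1.0 / pT**4 * (1.0 + 0.1 * np.log(pT**2 / mc**2))

def sigma_nrqcd(pT):   # toy fixed-order NRQCD piece, reliable at pT ~ mc
    return 1.0 / (pT**2 + 4.0 * mc**2) ** 2 * (1.0 + 0.1 * np.log(1.0 + pT**2 / mc**2))

def sigma_matched(pT):
    return sigma_resum(pT) + sigma_nrqcd(pT) - sigma_asym(pT)

for pT in [2.0, 5.0, 10.0, 20.0]:
    print(f"P_T = {pT:5.1f} GeV:  matched = {sigma_matched(pT):.3e}")
```

With these toy shapes, the matched result reduces to the fixed-order piece at P_T ∼ m_c (where the resummed and asymptotic pieces nearly cancel) and to the resummed piece at P_T ≫ m_c (where the fixed-order and asymptotic pieces nearly cancel), which is the intended behaviour of the subtraction.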
The predictive power of eq:jpsi-lp-fac relies on the factorisation of each term and our ability to calculate them. Up to corrections beyond next-to-leading power in m_c/P_T, σ̃^ Resum can be factorised as <cit.>,
E_J/ψ dσ̃^ Resum_ab→ J/ψ(P_J/ψ)X/d^3 P_J/ψ ≈ ∑_k∫dz/z^2 D_k→ J/ψ(z,μ_F^2) E_k dσ̂_ab→ k(p_k)X/d^3 p_k(z,p_k=P_J/ψ/z,μ_F^2)
+ ∑_κ∫dz/z^2 D_[cc̅(κ)]→ J/ψ(z,μ_F^2) E_c dσ̂_ab→ [cc̅(κ)](p_c)X/d^3 p_c(z,p_c=P_J/ψ/z,μ_F^2) ,
where k=q,g,q̅ and κ=v,a,t for cc̅ pairs respectively in a vector, axial-vector or tensor spin state <cit.>. The first and second terms are the factorised leading-power (LP) and next-to-leading-power (NLP) contributions to the cross section in its 1/P_T expansion. The corrections to eq:jpsi-lp-resum are suppressed by 1/P_T^4 and cannot be further factorised <cit.>. The universal single-parton and double-parton (cc̅) FFs, D_c→ J/ψ(z,μ_F^2) and D_[cc̅(κ)]→ J/ψ(z,μ_F^2), respectively, satisfy a closed set of evolution equations with respect to
changes of the factorisation scale μ_F <cit.>. Solving these evolution equations one resums the logarithmic contributions scaling like ln(P_T^2/m_c^2) to these FFs. The universal FFs at an input scale μ_F = μ_0≃ 2m_c can be calculated assuming NRQCD factorisation <cit.> in terms of universal NRQCD LDMEs,
D_c→ J/ψ(z,μ_0^2)
≈ ∑_[^2S+1L_J]d̂_c→[^2S+1L_J](z,μ_0^2) ⟨ O_[^2S+1L_J]^(0)⟩ ,
D_[cc̅(κ)]→ J/ψ(z,μ_0^2)
≈ ∫_-1^1 du ∫_-1^1 dv D_[cc̅(κ)]→ J/ψ(z,u,v,μ_0^2)
≈ ∑_[^2S+1L_J]d̂_[cc̅(κ)]→[^2S+1L_J](z,μ_0^2) ⟨ O_[^2S+1L_J]^(0)⟩ .
eq:jpsi-ffs-cc involves further approximations, neglecting possible differences between the momentum fractions carried by the cc̅ pair in the amplitude, u, and in its complex conjugate, v, which can be taken into account through the more general FF D_[cc̅(κ)]→ J/ψ(z,u,v,μ_0^2) defined in <cit.>. The approximation in the second line of eq:jpsi-ffs-cc reflects the fact that the integral of this function is dominated by the vicinity of u=v=1/2 <cit.>.
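The additive structure of these input-scale relations can be mirrored in a few lines; the coefficient shapes and LDME values in the sketch below are hypothetical placeholders of ours and serve only to show how the different NRQCD channels are summed into an input FF before evolution.

```python
# Schematic input-scale fragmentation function D_{c -> J/psi}(z, mu_0) as a sum of
# short-distance coefficients times LDMEs. The d_hat shapes and LDME values are
# hypothetical; real coefficients come from perturbative calculations and fits.
import numpy as np

LDME = {"3S1[1]": 1.2, "3S1[8]": 1.0e-2, "1S0[8]": 3.0e-2, "3PJ[8]": 1.0e-2}  # GeV^3 (toy)

def d_hat(channel, z):
    # toy z-shapes for the coefficients at mu_0 ~ 2 m_c (illustrative only)
    shapes = {
        "3S1[1]": 0.1 * z * (1.0 - z) ** 2,
        "3S1[8]": 5.0 * np.exp(-60.0 * (1.0 - z)),   # strongly peaked at z -> 1
        "1S0[8]": 0.5 * z**2 * (1.0 - z),
        "3PJ[8]": 0.3 * z**3,
    }
    return shapes[channel]

def D_input(z):
    # input-scale FF = sum over channels of coefficient(z) * LDME
    return sum(d_hat(ch, z) * ldme for ch, ldme in LDME.items())

z = np.linspace(0.05, 0.95, 10)
print(np.round(D_input(z), 5))
```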
The formalism described above has already been partially tested in the case of pp collisions, where, instead of the LDFs in eq:jpsi-lp-fac, one substitutes the proton PDFs. With perturbatively calculated short-distance matching coefficients for both single-parton and cc̅-pair FFs at the input scale <cit.>, and solving the coupled evolution equations for these FFs, the factorised and resummed cross section in eq:jpsi-lp-resum describes the P_T distribution of J/ψ production at the LHC and the Tevatron <cit.> for P_T > 10 GeV, as we note in tab:LDME-fits_comp. At LHC energies, the LP contributions, namely the first term in eq:jpsi-lp-resum, dominate when P_T ≫ 20 GeV, while the NLP contributions, namely the second term in eq:jpsi-lp-resum, are comparable at P_T ∼ 20 GeV and become dominant when P_T further decreases, which is critically important to describe the shape of the observed P_T distribution.
Making predictions for the P_T distribution of inclusive J/ψ production at the EIC requires the knowledge of the universal LDFs. In fig:LDFs of <ref>, the scale dependence of the LDFs with and without the mixing of QED and QCD evolution is shown.
Like in any factorisation approach, the perturbatively calculated short-distance partonic cross section, such as σ̂_ab→ k(p_k)X in eq:jpsi-lp-resum, does not depend on the details of the hadronic state produced. It has been calculated for single hadron production at LO <cit.>, at NLO <cit.>, and at NNLO <cit.>. The fixed-order calculation for σ̃^ NRQCD has been carried out in NRQCD up to NLO <cit.>.
In fig:jpsi-pt-eic, we present the predictions for the P_T distribution of inclusive J/ψ production in ep collisions at the EIC for √(s_ep)=140 GeV. For these predictions, only the σ̃^ Resum term in eq:jpsi-lp-fac is used, and the same LDMEs that we used for describing J/ψ production at the LHC and Tevatron energies <cit.> are taken here. These LDMEs are close to those from the Chao et al. <cit.> fit (H14) mentioned in tab:LDME-fits_comp. The CT18ANLO central set <cit.> was used for the proton PDFs. Unlike for production at the LHC and the Tevatron, the reach in P_T, defined with respect to the lepton-hadron axis, is much smaller due to the smaller collision energy. The solid line in fig:jpsi-pt-eic refers to the total contribution, which is dominated by the subprocesses γ + g → [cc̅] + g (NLP Photon) and e + g → [cc̅] + e (NLP Lepton), with the cc̅ pair fragmenting into the J/ψ. The lepton- or photon-initiated LP contribution to the production cross section, namely the first term in eq:jpsi-lp-resum, is dominated by the lowest-order subprocesses, such as e+q → e+q or γ+q → g+q, respectively, with a produced parton fragmenting into the observed J/ψ, and is strongly suppressed by the single-parton FFs at the EIC energy. In summary, the LP contributions are essentially irrelevant in the EIC kinematics. Therefore, a matching to the fixed-order calculations (described above in this section), including the second and third terms in eq:jpsi-lp-fac, is eagerly awaited.
Polarisation of J/ψ in photoproduction.
Since the prediction <cit.> in 1994 of a transversely-polarised quarkonium hadroproduction yield at high P_T, much hope has been put in polarisation measurements to advance our understanding of quarkonium production, with very limited success though <cit.>. NLO CSM computations of polarisation observables in photoproduction were performed in 2009 <cit.> and subsequently completed with the COM NRQCD contributions in 2011 <cit.>, without clear conclusions owing to the large uncertainties in the H1 <cit.> and ZEUS <cit.> data and in the theory.
In Figs. <ref> and <ref>, we show the NLO NRQCD predictions for the P_T and z dependence of the polarisation parameter λ_θ of promptly photoproduced J/ψ mesons in the EIC kinematic conditions. These predictions include CS and CO contributions using the LDME sets discussed in Section <ref>, as well as direct- and resolved-photon interaction contributions (see Figs. <ref> and <ref> for the corresponding differential cross-section plots). As one can see from fig:Jpsi_photoprod_lambda-pT, the P_T-dependent NRQCD predictions for all LDME sets are roughly consistent with unpolarised production (λ_θ=0 in all frames), unlike the predictions of the CSM, which lead to significant polarisation of photoproduced J/ψ mesons. In the z-dependent case, the region of z → 1 has the most
discriminating power between different LDME sets. We however have reasons to doubt the relevance of these predictions given that all but the BK11 LDMEs are unable to describe the corresponding HERA data. From Figs. <ref> and <ref>, one also observes that the detailed behaviour of λ_θ for different LDME sets is significantly different for different polarisation frames, which could be an important tool for additionally constraining the theory.
Polarisation of J/ψ in electroproduction.
The HERA collider experiments provided some results on the J/ψ polarisation, mostly for photoproduction <cit.>,
but unfortunately these data do not allow one to favour or disfavour different models and/or approaches.
The reasons behind this are twofold: data were not precise enough and they were collected in regions where theoretical predictions are very close to each other <cit.>. Furthermore in Ref. <cit.>, Yuan and Chao showed that the estimates for the λ_θ parameter in SIDIS, within both the CSM and NRQCD approaches, are overlapping for most of the values of the variable z.
In this respect, the EIC could play a crucial role: highly precise data are expected and other/extended kinematical regions could be explored.
In the following, we present some predictions at LO, both in the CSM and NRQCD frameworks, adopting different NLO LDME sets.
Some comments are therefore in order: (i) as previously discussed, the combined usage of LO hard scattering with NLO LDMEs is subject to great caution. As of now, only the CSM part of the electroproduction cross section has been computed at NLO <cit.>. The only full NRQCD analysis has been performed at LO <cit.> and shows mixed agreement between the different NRQCD predictions and the HERA data; (ii) a number of quarkonium-production processes exhibit very large QCD corrections to polarisation observables <cit.>. The following LO results should therefore only be considered as a simple guidance for future measurements and certainly not as quantitative predictions to which future measurements should be confronted. In this context, an NLO NRQCD analysis of J/ψ electroproduction is eagerly awaited.
fig: Jpsi lambda pol vs PT SIDIS shows some estimates for the λ_θ parameter at the centre-of-mass energy √(s_ep) = 45 GeV, together with their uncertainty bands, visible mostly for the CSM and obtained by varying the factorisation scale in the range μ_0/2 < μ_F < 2 μ_0, with μ_0 = √(M_J/ψ^2 + Q^2). The integration regions are detailed in the legend box.
No uncertainty bands from LDMEs are included; instead, predictions for different sets are presented: C12 <cit.>, BK11 <cit.> and G13 <cit.>. This illustrates their impact on the results. From fig: Jpsi lambda pol vs PT SIDIS, it is clear that the λ_θ value can be significantly different if we consider different frames. In particular, the Gottfried-Jackson frame provides the best overall separation between the CSM and NRQCD curves.
Another possibility offered by the EIC experiment is the collection of data at different energies. In fig: Jpsi lambda pol vs energy SIDIS, the impact coming from the energy variation on CSM and NRQCD predictions is shown. In this case, only the central values are presented (μ_F = μ_0); for the lower energy, √(s_ep) = 45 GeV, the integration region is the same as in fig: Jpsi lambda pol vs PT SIDIS, while for √(s_ep) = 140 GeV a wider W integration is considered (see legend).
Even focusing on one specific frame, like the helicity frame in fig: Jpsi lambda pol vs energy SIDIS, one clearly sees that the CSM is more affected by the energy shift. Note that moving to higher energies allows one to access contributions with higher virtuality, with an interesting effect: in the CSM these contributions are opposite to the lower virtuality ones (reducing the size of the estimates), while in NRQCD this phenomenon is less important. It however remains to be shown that such discriminant effects remain at NLO.
Prompt η_c and χ_c yields.
As mentioned in
Section <ref>, the dominance of
the CS mechanism in prompt-η_c hadroproduction at P_T ≳ M_η_c was not expected by NRQCD factorisation. Therefore, from the point of view of studies of the heavy-quarkonium production mechanism, it is important to understand if this feature of η_c production persists also in ep collisions. If it is indeed the case, then η_c hadro-, photo- and leptoproduction can be used as
a tool for hadron-structure studies with
a reduced uncertainty stemming from the CO mechanism compared to production of other charmonium states.
In recent works <cit.>, η_c photo- and electroproduction cross sections were computed including all the CO and CS contributions at LO in α_s. In the case of photoproduction <cit.>, both direct-photon and resolved-photon interactions were taken into account. The CS contribution had been assumed to be negligible in earlier studies <cit.>, because the corresponding direct-photon interaction subprocess appears at O(αα_s^3) due to the necessity of
two-gluon radiation in the final state to produce a cc̅[^1S_0^[1]] pair and because resolved-photon contributions were assumed to be small. However, it was found <cit.> that the resolved-photon subprocesses make the CS contribution to the photoproduction cross section non-negligible. These predictions, updated with the use of CT14LO PDFs, are shown in Figs. <ref> and <ref>. The CO contributions were computed by converting the J/ψ CO LDME sets listed in tab:LDME-fits_comp to the
η_c LDMEs through HQSS relations valid up to v^2 corrections.
As one can see from these figures, the CO contributions are still important and the cross section at z>0.5 strongly depends on the LDME choice.
For
electroproduction
<cit.>, the CS contribution is also sizeable, but for a different reason, namely additional Q^2-dependent terms appearing in the short-distance cross section. Of course, the main problem of the predictions for η_c production in
ep collisions is that they so far have been done only at
LO in α_s. The NLO corrections could be particularly important for the CS ^1S_0^[1] state, whose LO contribution is highly suppressed at P_T ≳ M_η_c
in photo- and leptoproduction in comparison to CO states, especially ^3S_1^[8]. As known from J/ψ production, this suppression will be lifted by large NLO corrections <cit.>. An NLO calculation, at least in the CS channel, should be performed before drawing conclusions about the importance of the CS mechanism in η_c production at the EIC.
Besides J/ψ and η_c production, it is also essential to study χ_c0,1,2 states at the EIC.
Photo- or electroproduction of these mesons has not been observed experimentally yet. NLO NRQCD predictions for the photoproduction cross sections of χ_c0,1,2 radiatively decaying to J/ψ are shown in fig:chic-photoprod. They are based on known calculations of short-distance cross sections for photoproduction <cit.> and the χ_c0 LDME values obtained in hadroproduction fits by Ma et al. <cit.>, respectively Bodwin et al. <cit.>. We remark that the former χ_c0 LDME values are also those used by Gong et al. in Ref. <cit.> and in the LDME set denoted “Kniehl, Butenschoen, fit #2” in Figs. <ref> and <ref>. We remind the reader that for P-wave production at NLO in NRQCD, one cannot make a clear distinction between CO and CS contributions as they directly depend on the NRQCD factorisation scale, μ_Λ.
It is an expected feature that resolved photon contributions dominate photoproduction at low z. Interestingly, however, the predictions of fig:chic-photoprod are dominated by the resolved-photon contribution already for z below 0.5. Moreover, it is only due to the resolved photons that the χ_c cross sections are positive at low z after all. This feature of the theoretical predictions may indicate our poor understanding of χ_c
photoproduction, but if confirmed, the photoproduction of these mesons could serve as a useful source of information about the poorly known gluon component of photon PDFs.
§.§ Learning about quarkonia from TMD observables
§.§.§ LDME constraints from TMD observables
One important reason to investigate quarkonium production at the EIC is the possibility to probe TMDs that have not been extracted from experiments yet. The semi-inclusive heavy vector quarkonium production process, e p → e' J/ψ (Υ) X at small transverse momentum, P_T, is expected to offer a promising probe of gluon TMDs[Due to the presence of the large scale given by the quarkonium mass M_≈ 2m_Q, one can consider not only electroproduction, but in principle also the photoproduction case (Q^2≈ 0). A large photon virtuality is expected to suppress background from diffraction and higher-twist effects <cit.>. To our knowledge, at present there are no studies of the numerical impact of such background on the photoproduction process γ p → J/ψ (Υ) X in the TMD regime.
], as will be discussed extensively in section <ref>. Besides gluon TMD extractions, this process may also allow for improved determinations of certain LDMEs. In this way EIC can also improve our knowledge on NRQCD.
At small P_T,
the differential cross section is expected to be described in terms of
TMDs.
As will be discussed in detail in the next subsection, for quarkonium production, this involves
TMD shape functions <cit.>, rather than TMD FFs like for light hadron production. At the lowest order, α^2 α_s, the process e p → e' J/ψ (Υ) X at small transverse momentum is described by photon-gluon scattering producing a heavy quark-antiquark pair in the
CO state. The transition from the heavy-quark pair into the bound state is then described by a shape function. If one assumes the shape function to be a delta function in transverse momentum, one can connect to the standard NRQCD expressions for this transition.
To lowest order in the strong coupling, but including the ^1S_0 and ^3P_J (J=0,1,2) CO intermediate states, which enter at NNLO in v^2 <cit.>, the resulting expression for the cross section involves two of the CO LDMEs discussed above,
⟨ O[^1S^[8]_0]⟩ and ⟨ O[^3P^[8]_0]⟩, for which constraints from new types of observables are clearly welcome. In this way, measurements of the transverse-momentum spectrum of e p → e' J/ψ (Υ) X in the TMD regime can lead to improved determinations of these CO LDMEs. However, inclusion of higher-order corrections, in particular from the leading v CS NRQCD contributions at α^2 α_s, and the proper shape functions will be required for a robust extraction of these LDMEs.
§.§.§ TMD effects from quarkonia: shape functions
The NRQCD factorisation approach can only be applied for transverse-momentum spectra when the quarkonium state is produced with a relatively large transverse momentum compared to its mass,
P_T ≳ 2 m_Q.
This is because the emissions of soft gluons from the heavy-quark pair cannot modify the large transverse momentum of the bound state. The large P_T is generated in the hard process through recoil off unobserved particles, while the infrared divergences are parametrised in terms of the well-known
LDMEs,
collinear PDFs and
FFs, depending on the particular process under consideration.
On the contrary, when the quarkonium is produced with a small transverse momentum, the soft-gluon effects can no longer all be factorised in terms of standard TMD PDFs. In order to properly deal with soft-gluon radiation at small P_T in a transverse-momentum spectrum of quarkonium, it has recently been found that one needs to promote the LDMEs to so-called TMD shape functions (TMD ShFs) <cit.>.
Earlier, similar shape functions had been introduced in quarkonium photo-/leptoproduction in the endpoint region <cit.>, which however are functions of z, but a more general form was discussed in <cit.>.
The newly introduced non-perturbative TMD ShFs encode the two soft mechanisms present in the process at low P_T: the formation of the bound state and the radiation of soft gluons. As a consequence, they parametrise the transverse-momentum smearing of the bound state, and carry a dependence on the factorisation and rapidity scales.
Schematically, for the production of a single quarkonium state at the EIC, with mass M_, we have:
dσ ∼ F_g/P(b_T;μ,ζ) ∑_i∈{^1S_0^[1],…} H^[i](M_, Q;μ) Δ^[i](b_T,μ,ζ) ,
where F_g/P stands for any of the eight leading-twist
gluon TMDs <cit.>, H^[i]
are the process-dependent hard scattering coefficients and Δ^[i] are the
quarkonium TMD ShFs <cit.>.
The above formula is written down in coordinate space where
b_T is Fourier-conjugate to the quarkonium transverse momentum P_T^* (to be specific, in the virtual photon-proton centre of mass frame).
Moreover, μ and ζ are the factorisation/resummation and rapidity scales, respectively.
The summation is performed over the various colour and angular-momentum configurations (i) of the QQ̅ pair.
Similarly to LDMEs, the TMD ShFs are of a specific order in the relative velocity v of the heavy quark-antiquark pair in the quarkonium rest frame.
Therefore, the factorisation formula is a simultaneous expansion in v and λ = P_T^*/M_.
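For orientation, the physical P_T^*-spectrum follows from transforming eq:TMD-shape back to momentum space; for the azimuthally symmetric part this is a zeroth-order Hankel transform in b_T. The following Python sketch illustrates only this numerical step with toy Gaussian b_T-profiles; the functional forms and widths are illustrative assumptions, not an actual gluon TMD or shape function.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

# Minimal sketch (not an actual EIC analysis): transform an azimuthally
# symmetric b_T-space integrand W(b_T) -- schematically
# F_{g/P}(b_T) * sum_i H^[i] * Delta^[i](b_T) -- to P_T space via a
# zeroth-order Hankel transform.  The Gaussian widths below are
# illustrative assumptions, not fitted values.

def W_bspace(bT, width_tmd=0.3, width_shf=1.0):
    """Toy b_T-space integrand: Gaussian gluon TMD times a Gaussian shape function."""
    return np.exp(-0.5 * (width_tmd * bT) ** 2) * np.exp(-0.5 * (width_shf * bT) ** 2)

def dsigma_dPT2(PT, bmax=20.0):
    """dsigma/d^2P_T up to an overall normalisation (Hankel transform of W)."""
    integrand = lambda bT: bT / (2.0 * np.pi) * j0(bT * PT) * W_bspace(bT)
    val, _ = quad(integrand, 0.0, bmax, limit=200)
    return val

if __name__ == "__main__":
    for PT in (0.1, 0.5, 1.0, 2.0):
        print(f"P_T = {PT:4.1f} GeV  ->  dsigma/d2P_T ~ {dsigma_dPT2(PT):.4e} (arb. units)")
```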
The operator definition of a bare[It is understood that the TMD ShF in the factorised cross section in eq:TMD-shape is free from rapidity divergences: it has been divided by the relevant soft factor, which has also been used to properly subtract rapidity divergences in the gluon TMD F_g/P.]
TMD ShF with NRQCD quantum numbers i is:
Δ^[i](b_T,μ,ζ) ∝ ∑_X_s ⟨ 0 | ( O_i^† Y_n^†)^ab(b_T) | X_s ⟩⟨ X_s | ( Y_n O_i)^ba(0) | 0 ⟩ ,
which is just the TMD generalisation of the LDME operator definition in eq:LDME_def. On the r.h.s., the usual LDME operators 𝒪 are evaluated at positions b_T and 0 and sandwiched between the vacuum |0⟩ and the state | X_s⟩ of the produced quarkonium together with possible soft radiation carrying away color.
Moreover, these operators are multiplied by
Wilson lines 𝒴_n parametrising
the resummation of gluons exchanged between the hard part and
the
state | X_s⟩.
The operator definition in eq:ShFoperator can be related to the NRQCD LDMEs by the first term in an operator product expansion (OPE) for b_T→ 0:
Δ^[i](b_T,μ,ζ) = ∑_n C^[i]_n(b_T;μ,ζ) ×⟨ O^ [n] ⟩(μ) + 𝒪(b_T) .
In order to extend this expression to larger b_T, one can introduce a prescription like b_T → b_T^*≡ b_T/√(1+(b_T/b_T,max)^2)≤ b_T,max to ensure validity of this perturbative expression
and include a nonperturbative overall factor Δ^[i]NP:
Δ^[i](b_T,μ,ζ) ≡Δ^[i]NP(b_T) ∑_n C^[i]_n(b_T^*;μ,ζ) ×⟨ O^ [n] ⟩(μ) .
This expression involves the usual "collinear" LDMEs, multiplied by perturbatively calculable Wilson coefficients C_n^[i](b_T;μ,ζ) to match onto the pQCD expansion, and a non-perturbative part Δ^[i]NP
that needs to be modelled or extracted from experimental data. Note that, in principle, at higher orders in α_s, there might be operator mixing: the ^1S_0^[8] TMD ShF could become dependent on the ^3P_0^[8] LDME, hence the sum over NRQCD states n in eq:ShFOPE.
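To make the b_T^* prescription concrete, the following Python sketch implements eq:ShFOPE with a single OPE term, a constant stand-in for the Wilson coefficient, and an assumed Gaussian non-perturbative factor; the values of b_T,max, the Gaussian width and the LDME are placeholders chosen only for illustration.

```python
import numpy as np

# Minimal sketch of the b_T^* prescription quoted in the text, with an
# assumed Gaussian non-perturbative factor Delta^NP(b_T).  All numerical
# values (b_{T,max}, the Gaussian width, the LDME) are placeholders.

B_T_MAX = 1.5      # GeV^-1, typical choice; assumption
G_NP    = 0.25     # GeV^2, width of the non-perturbative Gaussian; assumption
LDME    = 0.03     # GeV^3, stand-in for <O^[n]>; assumption

def b_star(bT):
    """b_T^* = b_T / sqrt(1 + (b_T/b_{T,max})^2), frozen below b_{T,max}."""
    return bT / np.sqrt(1.0 + (bT / B_T_MAX) ** 2)

def wilson_coefficient(b, mu, zeta):
    """Stand-in for C_n^[i](b_T^*; mu, zeta); taken constant at lowest order."""
    return 1.0

def shape_function(bT, mu=3.0, zeta=9.0):
    """Delta^[i](b_T) ~ Delta^NP(b_T) * C(b_T^*) * <O^[n]> (single OPE term)."""
    delta_np = np.exp(-G_NP * bT ** 2)
    return delta_np * wilson_coefficient(b_star(bT), mu, zeta) * LDME

if __name__ == "__main__":
    for bT in (0.1, 1.0, 3.0, 6.0):
        print(f"b_T = {bT:3.1f} GeV^-1 : b_T^* = {b_star(bT):.3f}, Delta = {shape_function(bT):.4e}")
```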
In Ref. <cit.>, the OPE of eq:ShFOPE is implemented in a practical way by studying single-inclusive J/ψ electroproduction. In the regime
P_T^*2∼ Q^2∼ M_^2,
with
P_T^* being the transverse momentum of the quarkonium in the virtual photon-proton centre-of-mass frame
and μ either given by Q or by the
quarkonium mass M_, the cross section is computed as usual in collinear factorisation. On the other hand, when
P_T^*2≪μ^2, TMD factorisation eq:TMD-shape applies. By comparing both cross sections in the kinematical regime Λ^2_QCD≪ P_T^*2≪μ^2, one can
match the relevant TMD ShF onto the collinear LDMEs,
confirming the need for introducing shape functions.
The analysis of Ref. <cit.> was revised in Ref. <cit.>, modifying the obtained expression for the shape function, but not its necessity.
To summarise, the factorisation theorem in eq:TMD-shape contains a convolution of two non-perturbative hadronic quantities at low transverse momenta: the gluon TMD PDFs and the TMD ShFs. It is therefore possible to perform a phenomenological extraction of gluon TMDs from quarkonium production processes.
However, to do so, one also needs to model or extract the involved TMD ShFs. This is analogous to SIDIS where one observes a light hadron, where one needs information on the light-hadron TMD FFs in order to extract quark TMD PDFs.
§.§.§ Azimuthal cos 2 ϕ_T^* modulation in electroproduction
In (semi-inclusive) quarkonium electroproduction on an unpolarised proton target,
an azimuthal cos 2 ϕ_T^* modulation (see Section <ref> for our kinematic definitions) of the differential cross section will arise from
linearly
polarised gluons inside
the unpolarised proton. These are described by the TMD h_1^⊥ g <cit.>[Note that, in the photoproduction regime, one cannot determine the angle ϕ_T^* because the lepton plane is not defined, hence, also not the cos 2 ϕ_T^* modulation. In photoproduction, azimuthal modulations can only be seen for two-particle observables.].
In many studies, the quarkonium shape functions are not considered, and the differential cross section can then be written as:
dσ= 1/2sd^3l'/(2π)^32E_l'd^3P_/(2π)^32E_P_∫ dx d^2𝐤_⊥(2π)^4δ(q+k-P_)
×1/Q^4ℒ^μμ'(l,q)Φ^νν'
(x,𝐤_⊥) ℳ_μν(ℳ_μ'ν')^∗,
where ℳ_μν is the amplitude of
production of the quarkonium in the subprocess
γ^∗+g→
, ℒ^μμ' is the leptonic tensor, and the
gluon correlator is given by <cit.>:
Φ^νν'(x,𝐤_⊥)=-1/2x{g_⊥^νν'f_1^g(x,𝐤_⊥^2)-(k_⊥^νk_⊥^ν'/M_p^2+g_⊥^νν'𝐤_⊥^2/2M_p^2)h_1^⊥ g(x,𝐤_⊥^2)}.
Here, g_⊥^νν'=g^νν'-P^νn^ν'/P· n-P^ν'n^ν/P· n, x and 𝐤_⊥ are the light-cone momentum fraction and transverse momentum of the gluon.
The asymmetry is defined as:
⟨cos(2ϕ_T^*)⟩ = ∫ dϕ_T^* cos(2ϕ_T^*)dσ/∫ dϕ_T^* dσ,
where ϕ_T^* is the azimuthal angle of the production plane of J/ψ with respect to the lepton scattering plane.
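Experimentally, the asymmetry defined above is simply the event-averaged value of cos 2ϕ_T^*. As a minimal illustration of the corresponding estimator (a toy exercise, not an EIC simulation), the following Python sketch injects an arbitrary modulation 1 + a cos 2ϕ into a generated sample and recovers ⟨cos 2ϕ_T^*⟩ = a/2.

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal sketch: extract <cos 2 phi_T^*> from a toy event sample whose
# azimuthal distribution is dsigma/dphi ~ 1 + a * cos(2 phi).  The input
# asymmetry 'a' is an arbitrary illustration value, not a prediction.

def generate_phi(n_events, a=0.10):
    """Accept-reject sampling of phi in [0, 2pi) with weight 1 + a*cos(2 phi)."""
    phis = []
    while len(phis) < n_events:
        phi = rng.uniform(0.0, 2.0 * np.pi, size=n_events)
        u = rng.uniform(0.0, 1.0 + abs(a), size=n_events)
        keep = u < 1.0 + a * np.cos(2.0 * phi)
        phis.extend(phi[keep].tolist())
    return np.array(phis[:n_events])

if __name__ == "__main__":
    a_true = 0.10
    phi = generate_phi(200_000, a=a_true)
    mean = np.mean(np.cos(2.0 * phi))
    err = np.std(np.cos(2.0 * phi)) / np.sqrt(len(phi))
    # For dsigma ~ 1 + a cos(2 phi) one expects <cos 2 phi> = a/2.
    print(f"<cos 2 phi> = {mean:.4f} +- {err:.4f}  (expected {a_true / 2:.4f})")
```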
In a more complete picture of the P_T^*2≪ M_J/ψ^2 ∼ Q^2
region, as explained in <ref>, the TMD factorisation applies and LDMEs are promoted to TMD ShFs. Hence, the differential cross section for this process can be recast in the following form:
dσ^UP/d y dx_B d^2 P^*_T = N [∑_n A_UP^[n] C[f_1^g Δ^[n]] + ∑_n B_UP^[n] C[w h_1^⊥ g Δ_h^[n]] cos 2 ϕ_T^* ] ,
where the subscript UP on the amplitudes A_UP^[n] and B_UP^[n] denotes the polarisation state of the proton (U, since it is unpolarised) and of the quarkonium (P=U,L,T), respectively, and N denotes an overall normalisation factor. Here, the quarkonium polarisation is defined with respect to the direction of the quarkonium three-momentum in the virtual photon - proton
centre-of-mass frame.
Measurements of the transverse-momentum dependence of the above cross section at the EIC would
allow one to gather information on the so-far unknown quarkonium shape functions. In particular, the cos2ϕ_T^*-weighted cross section would give access to a linear combination of the convolutions C[w h_1^⊥ g Δ^[n]_h], with n=^1S_0^[8] or n=^3P_0^[8].
Here the weight in the convolution expression
C[w h_1^⊥ g Δ^[n]_h] (q_T) ≡∫ d^2p_T∫ d^2k_T δ^2(p_T+k_T-q_T) w(p_T,q_T) h_1^⊥ g(x,p_T^2) Δ^[n]_h(k_T^2) ,
is given by (in standard TMD notation, note however that q_T will correspond to P^*_T used here)
w(p_T,q_T) = 1/M_p^2q_T^2[2(p_T·q_T)^2 - p_T^2q_T^2] .
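As a purely numerical illustration of the weighted convolution defined above, the following Python sketch evaluates C[w h_1^⊥ g Δ_h](q_T) for toy Gaussian models of both the linearly polarised gluon TMD and the shape function; the widths and normalisations are assumptions with no phenomenological significance.

```python
import numpy as np
from scipy import integrate

M_P = 0.938  # proton mass in GeV

# Toy Gaussian inputs; x dependence is dropped and the widths are assumptions.

def h1perp_g(pT2, width=1.0):
    """Toy linearly polarised gluon TMD."""
    return np.exp(-pT2 / width) / (np.pi * width)

def delta_h(kT2, width=0.5):
    """Toy polarised shape function Delta_h(k_T^2)."""
    return np.exp(-kT2 / width) / (np.pi * width)

def weighted_convolution(qT):
    """C[w h Delta](q_T): 2D integral with k_T = q_T - p_T and
    w = [2 (p_T.q_T)^2 - p_T^2 q_T^2] / (M_p^2 q_T^2) = p_T^2 cos(2 theta) / M_p^2."""
    def integrand(theta, p):
        w = p ** 2 * np.cos(2.0 * theta) / M_P ** 2
        kT2 = p ** 2 + qT ** 2 - 2.0 * p * qT * np.cos(theta)
        return p * w * h1perp_g(p ** 2) * delta_h(kT2)   # extra p is the Jacobian of d^2p_T
    val, _ = integrate.dblquad(integrand, 0.0, 8.0, 0.0, 2.0 * np.pi)
    return val

if __name__ == "__main__":
    for qT in (0.2, 0.5, 1.0, 1.5):
        print(f"q_T = {qT:3.1f} GeV : C[w h Delta] ~ {weighted_convolution(qT):+.4e}")
```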
On the other hand, integrating over ϕ_T^* would single out a combination of the convolutions C[f_1^g Δ^[n]](P_T^*) for the same octet S- and P-waves, which could be in principle disentangled by looking at different values of the inelasticity y. Measurements of these observables should help to establish the relevance of smearing effects and, in case they turn out to be sizeable, to even perform a first extraction of the shape functions. In this way, it would be possible to compare Δ^[n] and Δ^[n]_h as well as determine some other properties, like their relations with the LDMEs and their dependence on n.
For unpolarised quarkonium production (P=U),
applying the above expressions gives the following normalised asymmetry ratio:
⟨cos 2ϕ_T^* ⟩≡∫ d ϕ_T^* cos 2ϕ_T^* dσ^UU/d y d x_B d^2 P^*_T/∫ d ϕ_T^* dσ^UU/d y dx_B d^2 P^*_T = 1/2∑_n B_UU^[n] C[w h_1^⊥ g Δ_h^[n]]/∑_n A_UU^[n] C[f_1^g Δ^[n]].
As the matching analysis mentioned in Section <ref> suggests, it is expected that the shape functions are proportional to the LDMEs belonging to the [n] state,
at least at LO: Δ^[n] ( k_^2;μ^2) ≃⟨ O^ [n] ⟩ Δ( k_^2;μ^2) and Δ_h^[n] ( k_^2;μ^2) ≃⟨ O^ [n] ⟩ Δ_h( k_^2;μ^2), for some
Δ( k_^2;μ^2) and Δ_h( k_^2;μ^2).
In this case the above asymmetry expression reduces to:
⟨cos 2ϕ_T^* ⟩ = 1/2B_UU/A_UU C[w h_1^⊥ g Δ_h ]/ C[f_1^g Δ],
where A_UU = ∑_n A_UU^[n] ⟨ O^ [n] ⟩ and
B_UU= ∑_n B_UU^[n] ⟨ O^ [n] ⟩.
At LO, the coefficients appearing in this expression are <cit.> :
A_UU^[^1S_0^[8]] = 1+y̅^2,
A_UU^[^3P_0^[8]] = [2 y̅7+3Q̂^2/1+Q̂^2 + y^2 7+2Q̂^2+3Q̂^4/(1+Q̂^2)^2] 1/m_Q^2,
B_UU^[^1S_0^[8]] = - y̅,
B_UU^[^3P_0^[8]] = 3-Q̂^2/1+Q̂^2y̅/m_Q^2.
Here, we defined y̅=1-y, with y being the inelasticity variable (see eq:SIDIS:kin-var), and Q̂^2≡ Q^2/(4m_Q^2) and we approximated m_≃ 2 m_Q, where m_Q denotes the heavy-quark mass.
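For orientation, the LO expressions above can be evaluated directly. The following Python sketch computes the ratio B_UU/A_UU as a function of Q̂^2 and y; the heavy-quark mass and the two CO LDME values used there are placeholders chosen purely for illustration, not any of the quoted fits.

```python
import numpy as np

# Sketch of the LO coefficients quoted in the text and of the LDME-dependent
# ratio B_UU/A_UU.  The numerical inputs below are placeholders.

M_C = 1.5  # charm-quark mass in GeV (assumption)

def coefficients(y, Qhat2, mQ=M_C):
    ybar = 1.0 - y
    A_1S08 = 1.0 + ybar ** 2
    A_3P08 = (2.0 * ybar * (7.0 + 3.0 * Qhat2) / (1.0 + Qhat2)
              + y ** 2 * (7.0 + 2.0 * Qhat2 + 3.0 * Qhat2 ** 2) / (1.0 + Qhat2) ** 2) / mQ ** 2
    B_1S08 = -ybar
    B_3P08 = (3.0 - Qhat2) / (1.0 + Qhat2) * ybar / mQ ** 2
    return A_1S08, A_3P08, B_1S08, B_3P08

def ratio_BUU_AUU(y, Qhat2, O_1S08=0.03, O_3P08=0.02):
    """B_UU/A_UU for placeholder LDMEs <O[1S0[8]]> (GeV^3) and <O[3P0[8]]> (GeV^5)."""
    A1, A3, B1, B3 = coefficients(y, Qhat2)
    A_UU = A1 * O_1S08 + A3 * O_3P08
    B_UU = B1 * O_1S08 + B3 * O_3P08
    return B_UU / A_UU

if __name__ == "__main__":
    for Qhat2 in (0.01, 0.1, 1.0):
        print(f"Qhat^2 = {Qhat2:5.2f}, y = 0.3 :  B_UU/A_UU = {ratio_BUU_AUU(0.3, Qhat2):+.3f}")
```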
At the EIC, one
could try to determine the LDMEs together with the gluon TMDs. The Q^2 and y dependence of the P_T^*-independent pre-factor B_UU/A_UU can then be exploited, as it makes the observable dependent on different linear combinations of the LDMEs. This can be paralleled to the slight rapidity dependence of the LDME linear combination appearing in the polarisation of the hadroproduction yield <cit.>.
Another option is to consider ratios in which the gluon TMDs cancel out <cit.>, although that may only hold
at LO in certain cases. An example of this will be discussed in Section <ref> where the quarkonium polarisation is used to cancel out the gluon TMDs.
A further constraint on the LDMEs comes from the bound on the above asymmetry. At leading order, the bound q_T^2 | h_1^⊥ g(x, q_T^2) | / (2M_p^2) ≤ f_1^g(x, q_T^2) <cit.> and the fact that |⟨cos 2ϕ_T^* ⟩ | ≤ 1 lead to the condition |B_UU/A_UU| ≤ 1. The LDMEs that determine the ratio B_UU/A_UU will have to respect this bound. In this way one finds, for instance, that the CO LDMEs from Ref. <cit.> do not respect this bound at LO (and A_UU, which should be positive, becomes negative below the central value within the 1σ uncertainty range), but it has to be noted that these LDMEs were obtained at NLO from hadro- and photoproduction data.
The ratio B_UU/A_UU at LO is shown in fig:ratioBUUoverAUUandratioAULoverAUU (left plot) for two different CO LDME sets: one obtained at LO <cit.> (SV), which is very similar
to the NLO fit <cit.>, denoted C12 in tab:LDME-fits_comp, and another obtained at NLO with FF <cit.> (BCKL), denoted B14 in tab:LDME-fits_comp. The uncertainty bands are obtained assuming uncorrelated uncertainties on the ⟨ O^[^1S_0^[8]]⟩ and ⟨ O_8^[^3P^[8]_0]⟩ determinations. The ratio is shown as a function of Q̂^2. Here Q̂^2=0.01 is considered to be the minimum achievable value.
Indeed, in order for ϕ_T^* to be determined, one needs to
be in the electroproduction regime where Q^2 ≥ 1 GeV^2. In the bottomonium case, Q̂^2 should thus be larger than 1 GeV^2/(4 m_b^2) ≈ 0.01.
The figure indicates that there is much uncertainty in the LO result. It also indicates the precision needed at the EIC in order to differentiate among the various fits and to improve on them. A determination of B_UU/A_UU at the 10% level would be an improvement of the current situation. Assuming h_1^⊥ g is 10% of its maximal value at EIC energies (for a more detailed analysis, see section <ref>), this translates into a percent level accuracy requirement on the measurement of ⟨cos 2ϕ_T^* ⟩. For other y values, similar conclusions hold. Needless to say, an NLO analysis of the asymmetry will be needed in order to arrive at more accurate predictions for the EIC and for a fully coherent NLO computation with NLO LDMEs.
For these measurements, a good P_T^*-resolution at small P_T^* is an important requirement. Small P_T^* applies to the range up to a few GeV for the EIC energies. Therefore, the transverse-momentum resolution in the small transverse-momentum region should be on the order of a few hundred MeV, such that sufficient bins can be selected to map out this region. For the determination of ⟨cos 2ϕ_T^*⟩, a sufficient angular resolution is needed.
§.§.§ Quarkonium polarisation in electroproduction within TMD factorisation
If the polarisation state P (L or T) of the produced quarkonium can be determined in the semi-inclusive quarkonium production process, e p → e' J/ψ (Υ) X at small transverse momentum, P_T^*, then that may offer a further possibility to improve our knowledge on LDMEs. As an illustration, here we consider the example of the ratio of the ϕ_T^*-integrated cross sections:
∫ d ϕ_T^* dσ^UP/d y dx_B d^2 P^*_T/∫ d ϕ_T^* dσ^UU/d y dx_B d^2 P^*_T = ∑_n A_UP^[n] C[f_1^g Δ^[n]]/∑_n A_UU^[n] C[f_1^g Δ^[n]] = A_UP/A_UU.
Let us stress that eq:cos2phibis relies on the assumption that the shape functions are equal to the corresponding LDMEs times a universal shape function
that is also polarisation independent.
If so, the ratios A_UL/A_UU and A_UT/A_UU are independent of the value of
P^*_T≡ |P^*_T| to all orders and hence not affected by TMD evolution. The ratio will only receive contributions from higher orders through modification of the amplitudes. Thus far only the
LO expressions are known <cit.>: A_UU was already given in Section <ref>, and
A_UL = 1/3 [1+(1-y)^2] ⟨ O^[^1S^[8]_0]⟩ + [2(1-y) 1+10Q̂^2+Q̂^4/(1+Q̂^2)^2 + y^2 1+2Q̂^2+Q̂^4/(1+Q̂^2)^2] ⟨ O^[^3P_0^[8]]⟩/m_Q^2 ,
where A_UT= A_UU - A_UL. Compared to A_UU, the ⟨ O^[^3P_0^[8]]⟩ term in A_UL has different inelasticity y and Q̂^2 dependences. This implies that there can be a significant deviation of A_UL from A_UU/3 (and of A_UT from 2A_UU/3), signalling the production of polarised quarkonia. Likewise, one could consider the ratios B_UL/B_UU or B_UT/B_UU which are similar, but different linear combinations of LDMEs.
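The same exercise can be repeated for the polarisation ratio. The short Python sketch below, with the same placeholder inputs as in the previous snippet (not a fitted LDME set), evaluates the LO expressions for A_UU and A_UL and prints A_UL/A_UU; deviations from 1/3 would indicate polarised production, with A_UT = A_UU - A_UL.

```python
import numpy as np

# Sketch of the LO ratio A_UL/A_UU with placeholder LDMEs (illustration only).

M_C = 1.5  # GeV, assumption

def A_UU(y, Qhat2, O1, O3, mQ=M_C):
    ybar = 1.0 - y
    return ((1.0 + ybar ** 2) * O1
            + (2.0 * ybar * (7.0 + 3.0 * Qhat2) / (1.0 + Qhat2)
               + y ** 2 * (7.0 + 2.0 * Qhat2 + 3.0 * Qhat2 ** 2) / (1.0 + Qhat2) ** 2) * O3 / mQ ** 2)

def A_UL(y, Qhat2, O1, O3, mQ=M_C):
    ybar = 1.0 - y
    return ((1.0 + ybar ** 2) * O1 / 3.0
            + (2.0 * ybar * (1.0 + 10.0 * Qhat2 + Qhat2 ** 2) / (1.0 + Qhat2) ** 2
               + y ** 2 * (1.0 + 2.0 * Qhat2 + Qhat2 ** 2) / (1.0 + Qhat2) ** 2) * O3 / mQ ** 2)

if __name__ == "__main__":
    O1, O3 = 0.03, 0.02  # GeV^3, GeV^5; placeholders
    for Qhat2 in (0.01, 0.1, 1.0):
        r = A_UL(0.3, Qhat2, O1, O3) / A_UU(0.3, Qhat2, O1, O3)
        print(f"Qhat^2 = {Qhat2:5.2f} :  A_UL/A_UU = {r:.3f}  (unpolarised expectation: 1/3)")
```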
In fig:ratioBUUoverAUUandratioAULoverAUU (right plot) the ratio A_UL/A_UU at LO is shown for the LDME fits <cit.> at LO (SV) and <cit.> (here denoted BCKL, B14 in tab:LDME-fits_comp) at NLO, including uncertainty bands, assuming again uncorrelated uncertainties on the ⟨ O^[^1S_0^[8]]⟩ and ⟨ O^[^3P^[8]_0]⟩ determinations.
In reality the uncertainties are correlated, which means that the bands are expected to be overestimations. The difference between the central values of the two different LDME sets could be viewed as another measure for the size of the involved uncertainties.
Although both fits are compatible with unpolarised production,
both fits also allow, within their uncertainties, for values considerably different from 1/3. It is important to recall that quarkonium-polarisation observables are very sensitive to radiative corrections <cit.>. Computations of the NLO corrections to the hard parts entering the ratio A_UL/A_UU are therefore necessary to perform a reliable extraction of the LDMEs from these ratios.
In Ref. <cit.>, the fit C12 <cit.> is used to demonstrate the dominance of the ^1S_0^[8] cc̅ state in the inclusive process e h → J/ψ X (which is dominated by Q^2 ≈ 0) described in collinear factorisation. As a result, it is concluded that the J/ψ will be approximately produced in an unpolarised state. However, the above results show that due to the large uncertainties in the CO LDMEs, one cannot draw the same conclusion for semi-inclusive J/ψ electroproduction in the TMD regime (where P_T^*2 is much smaller than the two hard scales M_J/ψ^2 and Q^2).
Observation of
a non-zero polarisation of the J/ψ yield would signal the relevance of the P-wave LDME ⟨ O^[^3P^[8]_0]⟩ or of higher-order contributions.
Again, a determination of the ratio A_UL/A_UU at the 10% level at the EIC would be sufficient to improve on the present situation. For this, the polarisation state (L or T) of the quarkonium of course needs to be determined with sufficient precision.
§.§ On the importance of final-state effects on quarkonium formation in electron-nucleus collisions
Interest in quarkonium formation
in reactions with nuclei goes back more than 30 years in the context of heavy-ion reactions. The colour interaction between heavy quarks immersed in a high temperature quark-gluon plasma (QGP), such as produced in these reactions, was predicted to be screened, preventing quarkonium states from forming as well as dissociating them <cit.>.
Excited, weakly-bound-state solutions to the Schrödinger equation, such as Υ(2S), ψ', χ_c,
were expected to melt first in the QGP and provide a "thermometer" for determining the plasma temperature. Since their introduction, these ideas have evolved significantly. It was realised that dissociation and formation suppression of hadrons in QCD matter is not limited to quarkonia. Open heavy-flavour mesons have short formation times and can also be destroyed by collisional interactions in the nuclear medium <cit.>, reducing the experimentally measured cross sections. Importantly, the breakup of J/ψs and Υs is not exclusive to the QGP and can take place in different forms of strongly-interacting matter, for example a hadron gas or a large nucleus.
Measurements of the modification of charmonium and bottomonium production in dAu, pAu, pAl and pPb collisions at RHIC <cit.> and at the LHC <cit.>, respectively, showed that production suppression increases with the multiplicity of hadrons recorded in a reaction <cit.>.
Moreover, recent measurements of bottomonium yields in pPb collisions showed <cit.> that excited Υ states are more suppressed than the ground state, and the hierarchical pattern becomes more manifest in the negative rapidity region, which is the direction of the lead nucleus <cit.>.
Studies indicate that final-state interactions can play a significant role in reducing the rates of quarkonium production at the EIC. This quenching effect has been demonstrated for light and heavy mesons (containing a single heavy quark) <cit.> and inclusive and heavy flavor-tagged jets <cit.>. At forward rapidities and, especially at lower
centre-of-mass energies, suppression in cold nuclear matter can be as large as a factor of two and serves as strong motivation to investigate these effects for quarkonium final states.
These observations and predictions indicate that final-state effects (interaction of quarkonium with co-moving hadrons as well as the remnant of the nucleus) require careful treatment in order to extract information about nuclear PDFs and the transport properties of nuclear matter. This
can be addressed from both experimental and theoretical perspectives.
Experimentally, one can approach this concern by studying femtoscopic correlations (two-particle correlations at low relative momentum) between quarkonium and hadron in ep and eA reactions. Such observables are sensitive to interactions in the final state and strong interaction parameters can be measured directly (the scattering length and effective range) <cit.>. Quarkonium-hadron elastic and inelastic scattering cross sections can be evaluated as a function of event multiplicity. Such information can be used to calculate the modification of the quarkonium yield in the hadronic environment.
From the theory point of view, in order to extract nuclear PDFs and constrain the transport properties of large nuclei using quarkonium production in eA collisions, we need to develop a theoretically well-controlled framework
capable of describing final-state interactions.
Below, we briefly present an example of such an attempt.
Since the remnant of the nucleus is a cold nuclear environment, we expect the energy transferred between the nucleus remnant and the heavy-quark pair traversing the nucleus to be small. With this assumption, one can use the open quantum system framework and the Boltzmann equation developed in Refs. <cit.> to study final-state interactions. In this approach the physical quantity that encodes the essential information of the nuclear remnant relevant for a final-state interaction is the chromoelectric field correlator, which is defined in a gauge invariant way:
g_E^>(q) = ∑_i=1^3 ∫ d^4 (y-x) e^iq · (y-x) Tr_N( E_i(y) W E_i(x) ρ_N ) ,
where ρ_N is the density matrix of the remnant nucleus and W denotes a staple-shape Wilson line in the adjoint representation that connects the spacetime points y and x such that the correlator is defined gauge invariantly. For quarkonium dissociation and formation, the two time-like Wilson lines are connected at positive and negative infinite times separately, as shown in fig:eA_wilson. The Wilson lines involved here are similar to those involved in the definition of proton TMDs, with a
difference in the orientation of the Wilson lines.
In a nutshell, quarkonium production in eA collisions involves both initial-state and final-state effects. It will be important for the community to develop strategies for how to best separate these distinct contributions. The combination of both eA and pA experimental data will be useful to determine quarkonium-hadron interaction parameters, nuclear PDFs, and properties of the remnant nucleus such as the chromoelectric correlator strength.
§ QUARKONIA AS TOOLS TO STUDY THE PARTON CONTENT OF THE NUCLEONS
The goal of the present section is to show that quarkonium production in lepton-hadron collisions can be an excellent observable to probe the partonic content of the nucleon.
First, we discuss
how quarkonium production measurements at the EIC can contribute to our knowledge of collinear PDFs of the nucleon. Section <ref> is dedicated to accessing the gluon PDF from inclusive -quarkonium photoproduction processes. In Section <ref>, we emphasise how measurements of exclusive and electroproduction at the EIC, by extension of those from the HERA collider, can be used as an indirect probe of the gluon PDF at moderate values of the momentum fraction over a wide range of scales.
Section <ref> is devoted to the sensitivity to light quark PDFs of inclusive photoproduction, while Section <ref> focuses on the charm PDF and the potential detection of intrinsic charm at the EIC.
We then move to the multidimensional imaging of the partonic structure of nucleons through quarkonium-related measurements at the EIC.
Sections <ref> and
<ref> are devoted to the possibility to extract information on TMD PDFs of unpolarised and polarised nucleons, respectively, from quarkonium electroproduction data at the EIC.
The systematic description of exclusive production processes is done in terms of GPDs (Section <ref>) and GTMDs (Section <ref>). We stress that the relation between GPDs and PDFs used in Section <ref> is only an approximation, albeit a good one at the moderate and low values of the momentum fraction that we consider here. Furthermore, in Section <ref>, we touch on the
possibility to access the QCD trace anomaly through the measurement of exclusive electroproduction at the threshold.
Finally, in Section <ref>, we concentrate on double-parton scattering (DPS), which is another interesting probe
of nucleon structure. First estimates for -pair electroproduction at the EIC, which include DPS contributions, are presented.
§.§ Unpolarised-nucleon PDFs
§.§.§ Gluon PDF from inclusive quarkonium photoproduction
Inclusive photoproduction, when an almost real photon hits and breaks the proton producing a , is a useful tool to study the quarkonium-production mechanism and to
probe the gluon PDF.
This process has been the object of several studies at HERA <cit.>, and, in the future, it could be
studied at the EIC.
In Ref. <cit.>
the inclusive photoproduction up to NLO in QCD for J/ψ and Υ(1S) at lepton-proton colliders was revisited,
focusing on the P_T- and z-integrated yields.
Like for other charmonium-production processes <cit.>,
the appearance of negative hadronic cross sections was observed at increasing energies, due to large negative partonic cross sections. There can only be two sources of negative partonic cross sections: the interference of the loop amplitude with the Born amplitude or the subtraction of the IR poles from the initial-emission collinear singularities to the real-emission amplitude.
Here, the latter subtraction is the source of the negative cross sections.
Conventionally, such divergences are removed by subtraction and included in the PDFs via Altarelli-Parisi counterterms (AP-CT). In principle, the negative term from the AP-CT should be compensated by the evolution of the PDFs according to the DGLAP equation. Yet, for μ_F values on the order of the natural scale of these processes, the PDFs are not evolved much and can sometimes be so flat for some PDF parametrisations that the large ŝ region still significantly contributes. This results in negative values of the hadronic cross section.
To solve the negative cross-section issue,
the μ̂_F prescription proposed in <cit.> was used, which, up to NLO, corresponds to a resummation of such collinear divergences in
HEF <cit.>. According to this prescription, one needs to choose μ_F such that, for the partonic cross section σ̂_γ i (i = q, q̅, g), lim_ŝ→∞σ̂_γ i^ NLO=0
.
It was found that, for z<0.9, the optimal factorisation scale is μ̂_F=0.86 M_ <cit.> which falls well within the usual ranges of used values. Like for η_c hadroproduction, such a factorisation-scale prescription indeed allows one to avoid negative NLO cross sections, but it of course in turn prevents one from studying the corresponding factorisation-scale uncertainties.
The NLO μ_R uncertainties become reduced compared to the LO ones but slightly increase around √(s_γ p) = 50 GeV
, because of rather large (negative[Let us stress that unless μ_R is taken very small with a large α_s(μ_R), these negative contributions are not problematic, unlike the oversubtraction by the AP-CT.]) interferences between the one-loop and Born amplitudes.
At NNLO,
a further reduction of the μ_R uncertainties is expected. This is particularly relevant around √(s_γ p) = 50-100 GeV, which corresponds to the EIC region. This would likely allow us to better probe gluon PDFs using photoproduction data. Going further, differential measurements in the elasticity or the rapidity could provide complementary leverage in x to fit the gluon PDF, even in the presence of the v^4
CO contributions. Indeed, these would likely exhibit a very similar dependence on x.
The possibility to constrain PDFs using future J/ψ and Υ(1S) photoproduction data <cit.>
was investigated by comparing the PDF and μ_R uncertainties.
Unsurprisingly, the PDF uncertainties get larger than the (NLO) μ_R uncertainties with the growth of the γ p centre-of-mass energy, in practice from around 300 GeV, for x below 0.01. Although this is above the reach of the EIC, with NNLO predictions at our disposal in the future, with yet smaller μ_R uncertainties, one could set novel constraints on PDFs with such EIC measurements. Following the estimated counting rates for 100 fb^-1 of ep collisions given in <cit.>,
a number of differential measurements (in , z and/or y) will be possible to reduce the impact of highly- or even partially-correlated theoretical uncertainties, including the contamination of higher-v^2 corrections, such as the
CO contributions.
tab:numb_of_particles gathers estimates of the expected number of J/ψ and Υ(1S) possibly detected at the different ep
centre-of-mass energies at the EIC.
For Υ(1S), the yields should be sufficient to extract cross sections even below
the nominal EIC luminosities.
One can also estimate the expected number of detected ψ', Υ(2S)
and Υ(3S) using the following relations
N_ψ' ≃0.08 × N_J/ψ,
N_Υ(2S) ≃0.4 × N_Υ(1S),
N_Υ(3S) ≃0.35 × N_Υ(1S),
derived from the values of[These contributions were estimated using |R_ψ'(0)|^2 = 0.8 GeV^3, |R_Υ(2S)(0)|^2 = 5.0 GeV^3 and |R_Υ(3S)(0)|^2 = 3.4 GeV^3 and the corresponding measured branching fractions to J/ψ and Υ (1S) <cit.>.] |R_(0)|^2 (the quarkonium radial wave function at the origin, that is related to the ^3S_1^[1] LDME) and of the branching fractions to leptons. Using the values in tab:numb_of_particles and eq:yield_ratio <cit.>, one can see that the yield of ψ' should be measurable and that the yields of Υ(2S) and Υ(3S), roughly half of that of Υ(1S), should be measurable as well at the EIC.
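As a simple illustration of eq:yield_ratio, the snippet below converts assumed J/ψ and Υ(1S) event counts into the expected excited-state yields; the input numbers are placeholders rather than the entries of tab:numb_of_particles.

```python
# Minimal sketch applying the yield relations of eq:yield_ratio to
# hypothetical J/psi and Upsilon(1S) event counts (placeholder numbers).

def excited_state_yields(n_jpsi, n_upsilon1s):
    return {
        "psi(2S)":     0.08 * n_jpsi,
        "Upsilon(2S)": 0.40 * n_upsilon1s,
        "Upsilon(3S)": 0.35 * n_upsilon1s,
    }

if __name__ == "__main__":
    yields = excited_state_yields(n_jpsi=1.0e6, n_upsilon1s=1.0e3)
    for state, n in yields.items():
        print(f"expected N_{state:<12s} ~ {n:,.0f}")
```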
§.§.§ Gluon PDFs from exclusive quarkonium photo- and electroproduction
The exclusive production of heavy vector mesons has long been a fascinating observable to study, functioning as an enticing avenue to unravelling the small-x behaviour of the gluon PDF from low to moderate scales. Measured in the first instance in the fixed-target mode <cit.> and in DIS events at HERA, see e.g. <cit.>, and then more recently in ultra-peripheral collisions at the LHC <cit.>, they provide a means to explore the quarkonium production mechanism and act as sensitive probes at the frontier of small-x saturation physics.
The exclusive J/ψ electroproduction, γ^* p → J/ψ p, has been measured via dilepton decays at HERA in a narrow range of photon virtualities, extending up to ⟨ Q^2 ⟩ = 22.4 GeV^2. The corresponding photoproduction has also been determined in ultraperipheral events at the LHC.
There are, however, no data as of yet from HERA and the LHC for exclusive Υ electroproduction, γ^* p → Υ p, away from the photoproduction limit. Going forward, the EIC will extend the kinematic reach in Q^2, providing a lever arm up to larger virtualities and, moreover, allow for a measurement of the Υ electroproduction with off-shell photon kinematics for the first time, albeit with a projected lower Q^2 + M_^2 bin coverage and event count rates due to its heavier mass <cit.>.
Recently, in Ref. <cit.>, the coefficient functions for exclusive heavy vector meson electroproduction were derived at NLO within the framework of collinear factorisation, with the transition from an open heavy quark-antiquark pair to a bound heavy vector meson made within LO NRQCD.
Based on the above derivation of the coefficient functions, predictions for the exclusive electroproduction cross section have been made. They are shown in fig:chris_1 in bins of Q^2 at a fixed centre-of-mass energy, W = 90 GeV, of the γ^* p pair.
We use the Shuvaev transform <cit.> as a reliable means to obtain the GPD from input PDFs in the kinematic regions shown. We construct GPDs in such a way using MSHT20 <cit.>, NNPDF3.1 <cit.> and CT18 <cit.> input NLO PDFs and the predictions based on the former are shown in the figure. The choice of input PDF has the largest effect at the lowest Q^2, where the choice of the initial condition of the DGLAP evolution is felt, while for larger Q^2, this effect washes out and the predictions based on each PDF set agree at or below the percent level.
The central values of the prediction for low to moderate Q^2 are in good agreement with the experimental data from H1 and ZEUS, but for larger Q^2 there appears to be a downward shift of the prediction from the data. The prediction in the highest Q^2 bin exhibits a small factorisation-scale dependency and is essentially independent of the choice of the input PDF but, as shown, the deviation from the data is sizeable.
Interestingly, in the large Q^2 limit, the gluon amplitude ∝ln(Q^2/m_Q^2)^2 while the quark amplitude ∝ln(Q^2/m_Q^2). This observation seems to necessitate a program of resummation for the exclusive electroproduction of heavy vector mesons for virtualities Q^2 ≫ m_Q^2, those relevant for EIC kinematics, and may provide for the reconciliation of the theory prediction and experimental data at large scales.
The data statistics are currently limited for larger Q^2 and, in particular, there is a wide range
where the EIC can provide a first coverage. This will help to ascertain on which front the difference between this prediction and the data at large Q^2 lies and if resummation effects are already needed. Other numerical effects in this framework such as the so-called `Q_0 subtraction' <cit.>, crucial for a fruitful description of the photoproduction data <cit.>, are not surmised to be important for electroproduction kinematics because the corresponding power correction 𝒪(Q_0^2/μ_F^2) is no longer of 𝒪(1). See also <cit.> for a recent baseline study of exclusive photoproduction in heavy-ion collisions in the collinear factorisation framework to NLO.
Simulated event count projections were given for the exclusive electroproduction of the and in bins of Q^2 + M_^2 as a function of x in <cit.>. In fig:chris_2 (left panel), we show predictions for the exclusive electroproduction cross section as a function of W at a fixed scale ⟨ Q^2 ⟩ = 16 GeV^2 using Shuvaev-transformed MSHT20 input NLO PDFs, as well as the exclusive electroproduction HERA data that lie in this bin for comparison purposes. The prediction agrees most favourably with the more up-to-date dataset, however the EIC will be able to provide more statistics and resolve the slight tension between (and discrepancies within) the datasets. In particular, the data point at W = 189 GeV is around a factor of two larger than other data lying in this bin. We also show predictions for the exclusive electroproduction cross section as a function of W (right panel) for ⟨ Q^2 ⟩ = 0.001, 16, 22.4 GeV^2 and 47.3 GeV^2, which may ultimately be compared with data from the EIC.[Admittedly, the expected event count rate is a lot lower than that of the corresponding production, even by three orders of magnitude in the photoproduction bin containing the most counts <cit.>. Any data will therefore likely be sparse and exhibit large uncertainties, but nonetheless complement those already existing from HERA and LHC, shown in the right panel of Fig. <ref>.] In each case the quark contribution to the total amplitude is negligible and so the forthcoming enhanced statistics and increased data coverage from the EIC will allow for refined and improved constraints on the gluon PDF at low to moderate scales.
§.§.§ Light quarks
At EIC energies, we also expect to be sensitive to quark-initiated partonic subprocesses. As shown in Ref. <cit.>, in inclusive quarkonium photoproduction, the quark-induced subprocesses γ + q → + q (+ g) will be a relevant contribution to the cross section. Therefore, through quarkonium photoproduction, the EIC will also
be
partially sensitive to the light-quark PDFs.
To highlight the quark-induced contribution, we show in fig:EIC-CT14NLO-NLOstar-ratio the ratio to the
CSM cross section for every partonic subprocess (up to 𝒪(αα_s^3)) depicted in fig:EIC-CT14NLO-NLOstar, at two different
centre-of-mass energies, √(s_ep) = 45 GeV (left panel) and √(s_ep) = 140 GeV (right panel), as a function of transverse momentum. It is clear that
the pure QED quark-initiated process at 𝒪(α^3) becomes dominant at high P_T, accounting for
over half of the
cross section at P_T ∼ 12 (16) GeV at √(s_ep) = 45 (140) GeV. The effect is larger at √(s_ep) = 45 GeV, where the valence region of the PDF is probed. The 𝒪(αα_s^3) contribution (γ + q → + q + g) is roughly
5-15 % and 10-15 % of the
cross section at √(s_ep) = 45 GeV and √(s_ep) = 140 GeV, respectively. We then expect that, in photoproduction processes at the EIC, the produced at large P_T will be recoiling off of
at least one quark jet. The significant contribution of quark-induced subprocesses at high
of the J/ψ is also observed in the NLO NRQCD calculation, as shown in
fig:quark_fractions. Moreover, this conclusion depends only mildly on the
NRQCD LDMEs that were used.
§.§.§ Charm quark and intrinsic charm
The existence of a nonperturbative charm-quark content in the proton, referred to as intrinsic charm (IC), has long been postulated <cit.>. Intrinsic charm states are a fundamental property of hadronic bound-state wave functions <cit.>. They differ from the extrinsic charm of perturbative QCD, which arises from gluon splitting and contributes to the (radiatively generated) heavy-quark PDFs. The “intrinsic” label is due to the fact that a cc̅ pair formed by gluons from more than one quark line forces the cc̅ parameters to be dependent upon (i.e., reflective of) the hadron that creates it. Therefore the c and c̅ distributions are “intrinsic” to the identity of the proton, or the meson, or whichever hadron contains the bound quarks that emit gluons. “Extrinsic” means that the sea-quark pairs come from a gluon emitted by a single quark line and therefore do not reflect the bound-state structure they exist in, at least not in the clear way that IC of the proton does, peaking at ∼ x_B=0.4 and imparting a difference in c and c̅ distributions, according to recent lattice calculations <cit.>.
Since extrinsic charm contributions are due to a gluon emitted by a single quark line which then splits into a cc̅ pair, these charm distributions are soft, appear at low x and depend logarithmically on the mass of the heavy quark m_Q. On the other hand, IC contributions dominate at higher x and have a 1/m_Q^2 dependence. They come from five-quark (and higher) Fock-state configurations of the proton, |uud c c̅⟩, and are kinematically dominated by the regime where the state is minimally off-shell, leading to equal-rapidity constituent quarks. Thus, the charm quarks are manifested at large x. When the proton in this state interacts with its collision partner, whether a hadron or a lepton, the coherence of the Fock components is broken and the fluctuations can hadronise <cit.>. In hadroproduction, the state can be broken up by a soft gluon from the target interacting with the proton. In ep interactions, instead of a soft gluon, a low-energy photon can play the same role and bring the state on mass shell.
Several formulations of intrinsic charm in the proton wave function have been proposed. The first was proposed by Brodsky and collaborators in <cit.>:
dP_ ic 5/dx_1 dx_2 dx_3 dx_c dx_c̅ = P_ ic 5^0 N_5 ∫ dk_x 1⋯ dk_x 5∫ dk_y 1⋯ dk_y 5δ(1-∑_i=1^5 x_i)δ(∑_i=1^5 k_x i) δ(∑_i=1^5 k_y i)/(m_p^2 - ∑_i=1^5 (m_i^2/x_i) )^2 ,
where i = 1, 2, 3 are the light quarks (u, u, d) and i = 4 and 5 are the c and c̅ quarks.
Here, N_5 normalises the |uud c c̅⟩ probability to unity and P_ ic 5^0 scales the unit-normalised probability to the assumed intrinsic-charm content of the proton. The delta functions conserve longitudinal and transverse momentum. The denominator of eq:icdenom is minimised when the heaviest constituents carry the dominant fraction of the longitudinal momentum, ⟨ x_Q ⟩ > ⟨ x_q ⟩.
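A quick way to visualise eq:icdenom is a Monte-Carlo sampling of the light-cone momentum fractions. The Python sketch below neglects the transverse-momentum integrations, assigns assumed effective quark masses, and histograms the weighted charm momentum fraction; it is only meant to reproduce the qualitative large-x_c behaviour of the BHPS picture, not its normalisation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte-Carlo sketch of the five-quark intrinsic-charm probability of
# eq:icdenom: sample momentum fractions on the simplex sum(x_i)=1 and
# weight each configuration by 1/(m_p^2 - sum_i m_i^2/x_i)^2.  The
# transverse-momentum integrations are dropped and the effective quark
# masses are assumptions; everything is illustrative.

M_P = 0.938                                     # GeV
MASSES = np.array([0.3, 0.3, 0.3, 1.5, 1.5])    # u, u, d, c, cbar effective masses (GeV)

def sample_xc_distribution(n_samples=500_000, n_bins=25):
    # Uniform sampling on the 5-simplex via a flat Dirichlet distribution.
    x = rng.dirichlet(np.ones(5), size=n_samples)
    weight = 1.0 / (M_P ** 2 - np.sum(MASSES ** 2 / x, axis=1)) ** 2
    # Weighted histogram of the charm momentum fraction x_c = x[:, 3].
    hist, edges = np.histogram(x[:, 3], bins=n_bins, range=(0.0, 1.0),
                               weights=weight, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), hist

if __name__ == "__main__":
    centres, dens = sample_xc_distribution()
    peak = centres[np.argmax(dens)]
    print(f"charm momentum-fraction distribution peaks near x_c ~ {peak:.2f}")
```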
In the
first papers, the c and c̅ distributions were treated equally, but
later studies showed
an asymmetry in c and c̅ distributions <cit.>. The asymmetry is caused by QCD diagrams where, for example, two gluons from two different valence quarks in the nucleon couple to a heavy-quark pair gg → QQ̅ with charge conjugation value C=+1 <cit.>.
This amplitude interferes with QCD diagrams where an odd number of gluons attach to the heavy-quark pair, g→ QQ̅ and ggg→ QQ̅ with C=-1. The interference of amplitudes with the same final state but different charge-conjugation symmetry for the QQ̅ produces the asymmetric distribution functions. The analogous interference term is seen in the electron and positron distributions in e^+ e^- pair production <cit.>.
At leading order, the charm-quark structure function from this state can be written as
F_2c^ ic(x_c) = 8/9 x_c c(x_c) = 8/9∫ dx_1 dx_2 dx_3 dx_c̅ dP_ ic 5/dx_1 dx_2 dx_3 dx_c dx_c̅ .
Intrinsic-charm models Intrinsic-charm distributions in the proton have also been calculated using meson-cloud models where the proton fluctuates into a D(u c̅) Λ_c (udc) state <cit.>. A further development of this model examined all possible charm meson-baryon combinations in the |uud c c̅⟩ state <cit.>, finding that charm mesons would predominantly be produced through D^* mesons. In these models the charm sea contribution would be asymmetric, x c(x) ≠ x c̅(x).
In both the Brodsky et al. and the meson-cloud formulations, the intrinsic-charm contributions appear as an enhancement at large x. On the other hand, a sea-like distribution <cit.> has also been considered. In this case, the intrinsic-charm distribution is represented simply as an overall enhancement to the light-quark-mass sea. These distributions are
symmetric, x c(x) = x c̅(x).
Intrinsic-charm distributions from these models have been included in global analyses of the parton densities <cit.>. Earlier analyses <cit.> focused specifically on the European Muon Collaboration (EMC) high-x and high-Q^2 data <cit.>. A range of values of P_ ic 5^0 was extracted, from 0.1% to 1%. For more details of these analyses, see <cit.>. See also the recent review in <cit.> for more applications of intrinsic-heavy-quark states. New evidence for a finite charm-quark asymmetry in the nucleon wave function from lattice gauge theory, consistent with intrinsic charm, was published in <cit.>. Further evidence for unequal c and c̅ distributions in the proton has recently been presented along with proposed experimental tests with the EIC using flavour-tagged structure functions <cit.>.
Note that only the 5-particle intrinsic-charm state of the proton has been discussed. However, one can also consider higher Fock components such as |uud c c̅ q q̅⟩. These will reduce the average momentum fraction of the charm quark and also have lower probability. See e.g. <cit.> for examples of charm hadron distributions from higher Fock states. Finally, the possibility for an enhanced IC component in the deuteron was studied in <cit.>.
Recent hints from the LHC
A number of experimental measurements <cit.> over the last several decades have provided tantalising hints of intrinsic charm.
Recently, LHCb announced that their measurement of Z + charm jets relative to all Z + jets is consistent with an intrinsic-charm component of the proton as large as 1% at large Z rapidity <cit.>. These results were recently confirmed by a phenomenological analysis made by the NNPDF Collaboration <cit.>. Measurements at lower scales than the Z-boson mass are therefore eagerly awaited to advance our understanding of this higher-Fock-state phenomenon.
Intrinsic charm at the EIC The
EIC will offer the possibility to probe the nonperturbative charm-quark content in the proton. Recent studies show that the EIC will be capable of precision studies of intrinsic-charm as well as gluon distribution functions in the nucleus and in the nucleon <cit.>.
The associated production of a and a charmed particle is
an additional potential probe of intrinsic-charm related effects. A leading order VFNS study, first made in <cit.> for quarkonium hadroproduction, has been extended in <cit.> to the case of photoproduction. Such a scheme allows a proper merging of different partonic contributions, namely γ + g → + c + c̅ and γ + {c,c̅}→ + {c,c̅}, respectively calculated with 3 and 4 flavours in the proton, using a counter term, dσ^ CT, that avoids double counting. When the charm-tagging efficiency ε_c is taken into account, the corresponding VFNS cross section is given by:
dσ^ VFNS = dσ^ 3FS[1-(1-ε_c)^2] + (dσ^ 4FS - dσ^ CT)ε_c.
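The combination above is straightforward to apply once the three cross-section pieces are known in a given bin. The Python sketch below encodes it together with a charm-tagging efficiency of 10%; the cross-section values and luminosities are placeholders used only to show how expected event counts would be obtained, not actual predictions.

```python
# Minimal sketch of the VFNS combination quoted above, with a
# charm-tagging efficiency eps_c.  The cross-section inputs stand in for
# a differential cross section in one P_T bin (in fb); placeholders only.

def sigma_vfns(sigma_3fs, sigma_4fs, sigma_ct, eps_c=0.10):
    """dsigma^VFNS = dsigma^3FS [1-(1-eps_c)^2] + (dsigma^4FS - dsigma^CT) eps_c."""
    return sigma_3fs * (1.0 - (1.0 - eps_c) ** 2) + (sigma_4fs - sigma_ct) * eps_c

if __name__ == "__main__":
    # Illustrative numbers for one bin; luminosities in fb^-1.
    sigma = sigma_vfns(sigma_3fs=50.0, sigma_4fs=80.0, sigma_ct=30.0, eps_c=0.10)
    for lumi in (1.0, 10.0, 100.0):
        print(f"L = {lumi:6.1f} fb^-1 : expected tagged J/psi+charm events ~ {sigma * lumi:,.0f}")
```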
Based on such computations, the +charm yield has been calculated for two different EIC configurations: √(s_ep) = 45 (140) GeV, taking into account a 10% charm-tagging efficiency <cit.>. The calculation has been done with the CT14NNLO PDF set <cit.>, which includes different eigensets with some IC effects: a “sea-like” (in green in the following),
a “valence-like” (in red) also called “BHPS,” and a central eigenset with no IC effects which we refer to as “no IC” (in blue).
fig:EIC-VFNS-IC shows the result for the +charm yield at the
EIC. First, we note that, at √(s_ep) = 45 GeV (left panel in fig:EIC-VFNS-IC), the yield is limited to low P_T values even with the largest estimated integrated luminosity. Nonetheless, it is clearly observable if ε_c = 0.1 with 𝒪 (5000, 500, 50) events for ℒ = (100,10,1) fb^-1. On the other hand, at √(s_ep) = 140 GeV (fig:EIC-VFNS-IC, right panel), the P_T range extends to ∼ 14 GeV and we expect 𝒪(10000) events at ℒ = 100 fb^-1. Such events could be observed by measuring a charmed jet. Finally, we note that, at √(s_ep) = 140 GeV, where the valence region
is not probed, no clear IC effect is visible, while at √(s_ep) = 45 GeV we expect a measurable effect, where the BHPS valence-like peak is visible with a yield enhancement as large as 5-6 times the “no IC” yield. The EIC at √(s_ep) = 45 GeV will thus be the place to probe the nonperturbative charm content of the proton via associated +charm production.
§.§ Unpolarised-nucleon TMDs
§.§.§ Unpolarised gluons
Quark TMDs have now been extracted from data with reasonable precision <cit.>. On the contrary, phenomenological studies of gluon TMDs are still very much at the beginning stage. In Ref. <cit.>, a gluon TMD description of the Higgs-production transverse-momentum spectrum was compared to data,
which, however, suffers from very large uncertainties. In Refs. <cit.>, a gluon TMD description of the LHCb -pair-production data <cit.> was obtained. Like for Higgs-boson production, the experimental errors are
large and require the subtraction of double parton scattering contributions (see <cit.> and Section <ref>), which adds an additional uncertainty. In Ref. <cit.>, it was discussed that back-to-back production of a heavy quarkonium, in particular of an Υ, and an isolated photon in proton-proton collisions at the LHC is a promising way to access the distribution of both the transverse momentum and the polarisation of gluons inside unpolarised protons. In a wide range of invariant masses of the quarkonium and photon system, gluon-gluon scattering into a photon plus a quarkonium in the CS state dominates.
In the aforementioned processes, one
however
probes a convolution of two gluon TMDs. At the EIC, one can probe gluon TMDs more directly through the P_T^* distribution, although, once the ShFs are included (see Section <ref>), this also involves convolutions.
At the LHC, with the consideration of such
ShFs, one even folds three transverse-momentum-dependent distributions. Another possibility to study gluon TMDs at the EIC using quarkonia is to consider the transverse-momentum imbalance between the scattered lepton and the observed J/ψ in the electron-hadron centre-of-mass frame p_T = |ℓ'_T + P_T|. If large |ℓ'_T|≃ |P_T| ≫ | p_T| determines the hard scale of the process, then
in quarkonium production at the EIC the leading subprocess is
e+g→ (cc̅)^[8]+e with the octet pair
hadronising into an observed J/ψ. Within the hybrid factorisation formalism for SIDIS discussed in Section <ref>, the p_T
should be determined by the transverse momentum k_T of the colliding gluon (or its TMD distribution) and the quarkonium TMD
ShF.
Since gluon radiation from a heavy quark should be strongly suppressed compared to that from a light quark or a gluon, the observed momentum imbalance p_T is expected to be dominated by the k_T of the colliding gluon <cit.>. Therefore, the p_T-distribution of J/ψ production in SIDIS could be a more direct observable for the gluon TMD <cit.>.
It would be very interesting to compare the gluon TMD obtained at EIC to that from the J/ψ+J/ψ or Υ+γ process at LHC in the future.
In principle, gluon TMDs are process dependent, even in the unpolarised case (see e.g. <cit.>).
However, provided that the CS final state dominates in
J/ψ+J/ψ and Υ+γ production at the LHC,
these processes involve the same gluon TMD. This then would provide a nice test of TMD factorisation in combination with NRQCD and of TMD evolution, if the processes are probed at different scales. Another comparison that seems worthwhile is the extraction of gluon TMDs from open heavy-quark pair production at the EIC <cit.> or from inclusive η_c or η_b production in proton-proton collisions <cit.>. Note that inclusive CS or Υ production from two gluons is forbidden by the Landau-Yang theorem, while inclusive CO or Υ production does not involve the same gluon TMD and may not even
factorise to begin with.
§.§.§ Linearly polarised gluons
As discussed in Section <ref>, linearly polarised gluons lead to a cos 2 ϕ_T^* asymmetry in semi-inclusive electroproduction of J/ψ in unpolarised ep collisions <cit.>. In this section, we present some predictions for this asymmetry
at low transverse momenta P_T^*.
Within NRQCD, contributions to the asymmetry come through the fusion of a virtual photon and a gluon <cit.> already at Born order, i.e. α_s α, but at NNLO in v^2 since they proceed via CO contributions. Such α_s α contributions however only sit at z=1.
As soon as z≠ 1, a recoiling particle against the quarkonium is needed and Born-order contributions are at α^2_s α both from CS and CO states. From a simple counting in v^2, the CS contributions <cit.> should be dominant at z≠ 1. However, the current LDME fits seem not to obey such a simple v^2 counting and, as a matter of fact, sometimes lead to an excess[It should be clear to the reader that such computations are as of now only carried out at LO whereas the LDMEs are extracted at NLO. We refer to our introductory discussion at the beginning of Section <ref> regarding potential issues in doing so.] in describing the scarce data available from HERA <cit.>.
In principle, the asymmetry thus receives contributions from both CS and CO states.
The first estimate we present here is based on
a model expression for the cross section <cit.>:
dσ = 1/(2s) · d^3l'/[(2π)^3 2E_{l'}] · d^3P_{J/ψ}/[(2π)^3 2E_{P_{J/ψ}}] ∫ d^3p_g/[(2π)^3 2E_g] ∫ dx d^2𝐤_⊥ (2π)^4 δ(q+k-P_{J/ψ}-p_g)
× (1/Q^4) ℒ^{μμ'}(l,q) Φ^{νν'}(x,𝐤_⊥) ℳ_{μν}(ℳ_{μ'ν'})^∗ .
This expression is akin to the Generalised Parton Model employed to describe single-spin asymmetries in polarised proton collisions (to be discussed in Section <ref>). It is not of TMD-factorisation form and differs from Eq. (<ref>) by considering the subprocess γ^∗+g→ J/ψ+g, where the additional hard gluon in the final state generates larger transverse momenta and elasticity z values below 1,
while the dependence on the initial gluon transverse momentum is kept everywhere. In other words, no collinear expansion is performed and the obtained expression is thus not a CF expression either.
In fig:lin, we show the cos 2 ϕ_T^* asymmetry as a function of P_T for √s = 140 GeV, for fixed values of z and Q^2. Both
CS and CO contributions are included.
We show
the results for two different models for the TMDs,
the Gaussian <cit.>
and the McLerran-Venugopalan model <cit.>, and for two different sets of LDMEs, CMSWZ <cit.> and BK <cit.>. The asymmetry is small and
depends on the chosen LDME set.
Details of the calculation may be found in <cit.>.
A second estimate – only relevant for z≃ 1 – is based on the
TMD formalism involving shape functions. Although semi-inclusive quarkonium electroproduction is naturally described in TMD factorisation at small quarkonium transverse momentum (P_T^* ≪ M_{J/ψ} ∼ Q), there is a large uncertainty due to the non-perturbative part of the TMD description and due to the lack of knowledge on the TMD shape functions. However, using the leading-order shape functions in terms of LDMEs and including leading-order TMD evolution, it is nevertheless possible to obtain rough predictions for the EIC (details on the shape function can be found in Ref. <cit.>). Using this approach, estimates for the cos 2ϕ_T^* asymmetry in J/ψ production as a function of P_T^* can be obtained. The results are shown in fig:EICpredictioninclTMDevolution for several LDME sets (for more predictions see Ref. <cit.>) and for kinematics similar to that of fig:lin (to be precise, for the same √s and Q^2, and comparable x_B, but different values of z). Despite the large uncertainties in these TMD results (the uncertainty bands reflect the uncertainty in the non-perturbative Sudakov factor), this approach clearly allows for significantly larger asymmetries, by more than an order of magnitude, than in fig:lin.
Its measurement may thus be feasible at the EIC, providing further constraints on the LDMEs and, more generally, on the TMD shape functions.
Observing a nonzero asymmetry would be a signal of linear polarisation of the gluons inside an unpolarised proton, which is expected theoretically but has not been established experimentally thus far. The range of predictions is currently too large to draw a definite conclusion about its observability at the EIC, but that makes it all the more important to obtain first data on the cos 2 ϕ_T^* asymmetry. It would provide information on the distribution of linearly polarised gluons as well as on LDMEs.
§.§ Polarised-nucleon TMDs
Among the observables that can be measured at the EIC to access polarised-nucleon TMDs (such as the Sivers function),
the most common are probably the Single Transverse Spin Asymmetries (STSA), denoted A_N or A_UT.
Two theory approaches have been pushed forward to explain STSAs observed on polarised protons <cit.>. Both of them can in principle be extended to quarkonium production.
The first approach is referred to as the collinear twist-3 (CT3) formalism <cit.> and, like CF, applies to single-scale processes.
The STSA then arises from quark-gluon-quark or triple-gluon correlators, which are the sub-leading (in the scale) twist-3 extensions of the usual collinear PDFs (putting aside for this discussion FF contributions).
Some CT3 analyses for A_N in ep collisions have been performed in the past, see e.g. <cit.>, and only very recently this approach has been extended to STSAs in quarkonium production in polarised ep collisions <cit.>.
The second approach is TMD factorisation, thus applicable when two very different momenta are measured, or when a small (yet perturbative) momentum is measured in a process involving a large mass (𝒪(Λ_QCD) ≲ P_T^* ≪ Q in SIDIS, where P_T^* is the transverse momentum of the hadron in the final state and Q^2 is the photon virtuality). The STSA arises from the Sivers TMD PDF f_1T^⊥ <cit.>, i.e. the distribution of unpolarised partons inside the transversely-polarised hadron. In the case of quarkonium production in ep collisions, TMD factorisation has been assumed and used to compute the Sivers asymmetry in several cases <cit.>.
In addition, a phenomenological approach, called the Generalised Parton Model (GPM) <cit.>, encapsulates the Sivers mechanism via the aforementioned TMD Sivers function, assumed to be universal, but also applied in single-scale processes. This is done by keeping track of the transverse-momentum exchanges in the partonic scattering. As such, it can be considered as a hybrid approach between strict CT3 and TMD factorisation. Its extension, called Colour Gauge Invariant GPM (CGI-GPM) <cit.>, allows one to recover the modified universality of the quark Sivers function between SIDIS and Drell-Yan <cit.>. Moreover, for the gluon Sivers effect, similarly to the CT3 approach case, two independent gluon Sivers functions (GSFs) appear <cit.>, dubbed as f- and d-type.
This approach has proven to be quite successful in phenomenological analyses <cit.>. One should however be careful if one wishes to draw any conclusion about the properties of the used TMDs and the underlying phenomena. In any case, it is useful to get estimates of STSAs in single-scale processes where a CT3 analysis becomes challenging, like for quarkonium production, due to still unconstrained twist-3 functions appearing in its computation. It has been applied to the quarkonium cases in several studies <cit.>.
Below, STSAs in different quarkonium-production processes are discussed in the context of the EIC, which could perform these measurements by polarising the target nucleon. In general, it is believed that quarkonium-related STSAs would be a key player in underpinning the Sivers mechanism for gluons.
Experimentally, one defines the so-called transverse
STSA as
A_N = (1/P) (σ^↑ - σ^↓)/(σ^↑ + σ^↓) ,
where σ^↑ (↓) is the cross section of particles produced with the target nucleon spin orientation upwards (downwards), and P is the average nucleon polarisation. In what follows, we present predictions and projections for
the J/ψ STSA in inclusive photoproduction and for azimuthally weighted Sivers asymmetries in J/ψ leptoproduction in SIDIS processes.
§.§.§ EIC reach for A^{J/ψ}_N for inclusive J/ψ photoproduction
In this section, we study how to probe the GSF via the GPM approach by measuring the STSA in inclusive J/ψ photoproduction (γ + p^↑→ J/ψ+X) <cit.>. In such a process, only the f-type GSF contributes to the Sivers asymmetry.
In photoproduction, there are contributions from direct and resolved photons. Resolved photons mainly contribute in the region of low elasticity z.
At z close to unity, diffractive contributions become significant. In inclusive photoproduction, the variable z can be measured using the Jacquet-Blondel method. The differential cross section of inclusive J/ψ production in unpolarised ep collisions can be written as
E_{J/ψ} dσ/d^3P_{J/ψ} = 1/[2(2π)^2] ∫ dx_γ dx_g d^2k_{⊥ g} f_{γ/e}(x_γ) f_{g/p}(x_g, k_{⊥ g}) δ(ŝ+t̂+û-M_{J/ψ}^2)
× 1/(2ŝ) |ℳ_{γ+g→ J/ψ+g}|^2 .
Here, x_γ and x_g are the light-cone momentum fractions of the photon and gluon, respectively; ŝ, t̂, û are the partonic Mandelstam variables; ℳ_{γ+g→ J/ψ+g} is the matrix element for the partonic subprocess γ+g→ J/ψ+g; f_{g/p}(x_g, k_{⊥ g}) is the unpolarised gluon TMD, while f_{γ/e}(x_γ) is the Weizsäcker-Williams distribution, giving the density of photons inside the electron <cit.>.
For theory predictions of measurements on a transversely polarised nucleon, the STSA, as introduced in eq:STSA, is generally used.
Some GPM predictions for the J/ψ STSA in inclusive photoproduction at the EIC for √s = 45 (140) GeV are shown in fig:Photo_Sivers, as a function of the transverse momentum P_T, as well as a function of the elasticity z. The amplitude for the J/ψ production is calculated in NRQCD. Details of the calculation can be found in Ref. <cit.>.
The dominating channel of J/ψ production is γ g fusion. The contribution to the numerator of the
STSA comes mainly from the GSF <cit.>, while the linearly polarised gluons do not contribute to the denominator for this specific process. Moreover, the numerator of the asymmetry only receives contributions from CO states <cit.>, whereas in the denominator, both
CO and CS contributions are included.
We have used the GSF parametrisations
(SIDIS1, SIDIS2) from Ref. <cit.>. BV-a and BV-b are parametrisations of the GSF in terms of up- and down-quark Sivers functions <cit.>, where parameters from Ref. <cit.> are used. The effect of TMD evolution is not incorporated in the plot. The PDF set MSTW2008 <cit.> is used; the uncertainty bands have been obtained by varying the factorisation scale μ_F ∈ [μ_0/2, 2μ_0], with μ_0 = m_T = √(M^2_{J/ψ} + P_T^2) being the transverse mass. The value of α_s is calculated at the scale μ_0 and is taken from the MSTW set. The used cuts are the following: Q^2<1 GeV^2 and 0.3 < z < 0.9.
Note that, in the photoproduction case, y coincides with x_γ. The corresponding cut is 0.01 < x_γ < 0.95.
As shown in fig:Photo_Sivers, we expect A_N to be small and positive in the SIDIS1 and SIDIS2 cases, while it is larger (in size) but negative when the GSF is parametrised in terms of the up- and down-quark Sivers functions.
Another estimate is shown in jpsi:AN:ep. Here, projections for the statistical uncertainties of the A_N measurement as a function of transverse momentum for ep collisions at √s = 45 GeV and √s = 140 GeV, for an integrated luminosity ℒ = 100 fb^{-1}, are presented.
We consider the J/ψ reconstruction via its electron decay channel (J/ψ → e^+e^-, B = 5.94 ± 0.06 %), and we assume the single-electron measurement efficiency to be 80% and constant
with respect to its transverse momentum and in the pseudorapidity interval
|η|<2. The measurement efficiency is calculated using decay kinematics simulated with PYTHIA8 <cit.> (see <ref> for details). Based on these results, we assume the measurement efficiency to be 64%. Furthermore, we assume the signal-to-background ratio S/B = 1, and use the same method as in Ref. <cit.> to estimate statistical uncertainties on A_N.
For the expected cross section for prompt J/ψ production in ep collisions at the EIC, we consider the CSM predictions from Ref. <cit.>, which were shown to approximately reproduce HERA data.
For illustration, the projections are compared to results from polarised pp collisions reported by the PHENIX experiment <cit.>. At low P_T, the statistical precision is at the per-cent level, exceeding the quality of the corresponding pp data. In this range, the final uncertainty will be dominated by systematic effects. The uncertainties increase fast with increasing P_T because the P_T spectrum is predicted to be rather steep. Nonetheless, such a measurement would be valuable for constraining gluon TMDs at low transverse momentum.
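To make the scaling of such statistical projections transparent, the following minimal counting-statistics sketch may be useful; it assumes the familiar approximation δA_N ≈ √(1+B/S)/(P√N) for a small raw asymmetry, and the function name, yield, S/B and polarisation values used below are hypothetical placeholders rather than the actual inputs of the projections shown in jpsi:AN:ep.

```python
import math

def a_n_stat_uncertainty(n_signal, s_over_b=1.0, polarisation=0.7):
    """Counting-statistics estimate of the absolute uncertainty on A_N.

    n_signal     -- number of reconstructed J/psi in the kinematic bin
    s_over_b     -- signal-to-background ratio (background dilutes the raw asymmetry)
    polarisation -- average effective nucleon polarisation P
    """
    dilution = math.sqrt(1.0 + 1.0 / s_over_b)   # sqrt(1 + B/S) dilution factor
    return dilution / (polarisation * math.sqrt(n_signal))

# Hypothetical bin with 10^4 reconstructed J/psi, S/B = 1 and P = 0.7:
print(f"delta A_N ~ {a_n_stat_uncertainty(1.0e4):.3f}")   # ~0.020, i.e. per-cent level
```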
Finally, we suggest that the associated photoproduction of a J/ψ and a jet, produced back-to-back, can also probe the GSF <cit.>. In this case the produced J/ψ can have large transverse momentum, and need not be in the forward region. A wide kinematical region can be covered by varying the invariant mass of the J/ψ-jet pair.
§.§.§ Azimuthal asymmetries for J/ψ production in SIDIS at the EIC
In this section, we consider the Sivers effect in the SIDIS process e(l)+p^↑(P_N)→ e(l^')+ J/ψ (P_{J/ψ})+X, which represents a promising tool to probe the GSF. The weighted Sivers asymmetry for such a process is defined as
A^{sin(ϕ_T^*-ϕ_S^*)}_N ≡ [ 2∫ dϕ^*_S dϕ^*_T sin(ϕ^*_T -ϕ^*_S) (dσ^↑ - dσ^↓) ] / [ ∫ dϕ^*_S dϕ^*_T (dσ^↑ + dσ^↓) ] ≡ [ ∫ dϕ^*_S dϕ^*_T sin(ϕ^*_T -ϕ^*_S) dΔσ(ϕ^*_S, ϕ^*_T) ] / [ ∫ dϕ^*_S dϕ^*_T dσ ] ,
where dσ^{↑(↓)} ≡ dσ^{↑(↓)}/(dQ^2 dy d^2P_T^* dz) is the differential cross section with the initial proton polarised along the transverse direction ↑(↓) with respect to the lepton plane in the γ^*p centre-of-mass frame (at an angle ϕ^*_S).
We start by presenting the predictions in the CT3 formalism. In Ref. <cit.>, the twist-3 contributions to the unpolarised and polarised cross sections (respectively the denominator and numerator of Eq. (<ref>)) were computed in the CSM. Among the different contributions, one gives access to the gluon Sivers effect via the CT3 gluon Qiu-Sterman function, which at LO is related via an integral relation to the first k_⊥ moment of the GSF. Predictions for the gluon Sivers asymmetry at the EIC at √s = 45 GeV are presented in Fig. <ref>. They are computed at Q^2 = 10 GeV^2, x_B = 0.005 and P_T = 2 GeV, and are presented as a function of z for two different models of the gluon Qiu-Sterman function. Both models are proportional to f_g/p(x), the unpolarised collinear gluon PDF, and read
Model 1: 0.002 x f_g/p(x) ,
Model 2: 0.0005 √(x) f_g/p(x) .
Notice that, as these CSM predictions are ratios of cross sections, they do not depend on the value of the CS LDME. Both models predict a sizeable Sivers asymmetry, with a steady increase as a function of z, reaching up to ∼ 13-14% at z = 0.8.
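For orientation, the LO integral relation alluded to above is commonly written, up to convention-dependent signs and normalisation factors (and denoting the gluon Qiu-Sterman function by T_g), as
T_g(x,x) ∝ M_p f_{1T}^{⊥(1)g}(x) , with f_{1T}^{⊥(1)g}(x) ≡ ∫ d^2k_⊥ (k_⊥^2/2M_p^2) f_{1T}^{⊥ g}(x,k_⊥^2) ,
so that a constraint on the gluon Qiu-Sterman function translates directly into a constraint on the first k_⊥ moment of the GSF.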
Another prediction for the Sivers asymmetry is obtained within the GPM at order αα_s^2.
In order to study the effects of initial- and final-state interactions (ISIs and FSIs) on the Sivers asymmetry, the
CGI-GPM approach <cit.> is employed.
In Ref. <cit.>, the same observable was studied at 𝒪(αα_s) within the GPM, which implies z=1. Here the analysis is extended to the region z<1.
Assuming TMD factorisation within the GPM framework, the unpolarised differential cross section, entering the denominator of eq:AN, can be written as
dσ/(dQ^2 dy d^2P_T^* dz) = 1/[(4π)^4 z s] ∑_a ∫ (dx_a/x_a) d^2k_{⊥ a} δ(ŝ+t̂+û-M_{J/ψ}^2+Q^2) ∑_n (1/Q^4) f_{a/p}(x_a,k_{⊥ a}) L^{μν} H^{a,U}_{μν}[n] ⟨ O^{J/ψ}[n] ⟩ ,
where a=g,q, q̅ and H^a,U_μν[n] is calculated at the perturbative order αα_s^2 using NRQCD. More precisely, it is the squared amplitude of the partonic process γ^∗+a→ cc̅[n] +a, averaged/summed over the spins and colours of the initial/final parton, with
n=^3S_1^[1,8], ^1S_0^[8], ^3P_J^[8], J=0,1,2.
L^μν is the standard leptonic tensor and ⟨ O^J/ψ[n] ⟩ represents the LDME of the state indicated by n.
The numerator in eq:AN is directly sensitive to the Sivers function and within the GPM reads
dΔσ^{GPM} = 1/[(4π)^4 z s] ∑_a ∫ (dx_a/x_a) d^2k_{⊥ a} δ(ŝ+t̂+û-M_{J/ψ}^2+Q^2) sin(ϕ^*_S-ϕ^*_a)
× ∑_n (1/Q^4) (-2 k_{⊥ a}/M_p) f_{1T}^{⊥ a}(x_a, k_{⊥ a}) L^{μν} H^{a,U}_{μν}[n] ⟨ O^{J/ψ}[n] ⟩ ,
where f_1T^⊥ a(x_a, k_⊥ a ) is the Sivers function.
The numerator of the asymmetry in the CGI-GPM is
given by
dΔσ^{CGI} = (1/2s) · 2/[(4π)^4 z] ∫ (dx_a/x_a) d^2k_{⊥ a} δ(ŝ+t̂+û-M_{J/ψ}^2+Q^2) sin(ϕ^*_S-ϕ^*_a) (-2 k_{⊥ a}/M_p)
× ∑_n (1/Q^4) L^{μν} { ∑_q f_{1T}^{⊥ q}(x_a, k_{⊥ a}) H^{q,Inc}_{μν}[n] + f_{1T}^{⊥ g(f)}(x_a, k_{⊥ a}) H^{g,Inc (f)}_{μν}[n] } ⟨ O^{J/ψ}[n] ⟩ ,
where H^{a,Inc}_{μν}[n] is the perturbative squared amplitude calculated by incorporating the FSIs within the CGI-GPM approach.
Note that, in num:CGI-GPM, there is no contribution from the d-type GSF. In fact, in ep collisions, ISIs are absent due to the colourless nature of the virtual photon and only the f-type GSF contributes to the Sivers asymmetry <cit.>. This means that quarkonium production in ep collisions is a powerful tool to directly access the process-dependent f-type GSF. Moreover, the modified colour factor associated with the ^3S_1^[1] state is zero in the CGI-GPM approach, which leads to a vanishing Sivers asymmetry in the CSM.
By adopting a Gaussian factorised form for the unpolarised TMD distribution and a Gaussian-like Sivers distribution, and by maximising the latter, we can estimate upper bounds for the Sivers asymmetry (eq:AN) A^{sin(ϕ^*_h-ϕ^*_S)}_N at the EIC.
Results are presented in fig:EIC45ptz_BK, and are computed using the following kinematical cuts: 2.5 GeV^2 < Q^2 < 100 GeV^2, 10 GeV < W_γ p < 40 GeV, 0.3 < z < 0.9 and P_T < 5 GeV. The BK11 LDME set <cit.> is adopted.
The asymmetry is largely dominated by the GSF, while the quark contribution is negligible. This indicates that such an observable is a powerful tool to probe the unknown GSF. The GPM predicts negative values of around 20% in magnitude. The asymmetry is drastically reduced in size in the CGI-GPM, due to relative colour-factor cancellations and the absence of the ^3S_1^[1]-state contribution, and is essentially driven by the f-type GSF.
§.§ Generalised Parton Distributions
Information on the three-dimensional structure of the nucleon, correlating the transverse position of partons with their longitudinal momentum,
is provided by
GPDs. Processes to access GPDs include Deeply Virtual Compton Scattering (DVCS) and Deeply Virtual Meson Production (DVMP).
A factorisation theorem has been proven for DVCS in the Bjorken limit <cit.>. It allows one to compute the DVCS amplitude as a convolution of GPDs with corresponding coefficient functions that can be calculated perturbatively. GPDs are on very solid theoretical footing: at leading-twist level, all-order QCD-factorisation theorems directly relate the GPDs to particular hard exclusive scattering processes. GPDs are thus process-independent, universal quantities.
In the case of DVMP, factorisation applies for longitudinally polarised photons. The hard-scattering process includes the exchange of hard quarks and gluons, involving the strong coupling constant α_s and a meson distribution amplitude, which is not completely understood to date.
The GPDs do not admit a probabilistic interpretation like PDFs do, but are well-defined in quantum field theory as matrix elements of bilocal quark and gluon operators at a light-like separation. In the light-cone gauge at leading twist, the quark GPD is
F^q(x,ξ,t) = (1/2) ∫ (dz^-/2π) e^{i x P^+ z^-} ⟨ p' | ψ̅^q(-z/2) γ^+ ψ^q(z/2) | p ⟩ |_{z^+ = z_⊥ = 0}
= 1/(2P^+) [ H^q(x,ξ,t) u̅(p') γ^+ u(p) + E^q(x,ξ,t) u̅(p') (iσ^{+μ}Δ_μ/2m_N) u(p) ]
and the gluon GPD,
F^g(x,ξ,t) = (1/P^+) ∫ (dz^-/2π) e^{i x P^+ z^-} ⟨ p' | F^{+μ}(-z/2) F^+_{ μ}(z/2) | p ⟩ |_{z^+ = z_⊥ = 0}
= 1/(2P^+) [ H^g(x,ξ,t) u̅(p') γ^+ u(p) + E^g(x,ξ,t) u̅(p') (iσ^{+μ}Δ_μ/2m_N) u(p) ],
where z = (z^+, z_⊥, z^-) are the light-cone coordinates, P^+ is the light-cone plus-component of the average of the incoming- and outgoing-nucleon momenta, x is the fractional parton plus-component momentum of the nucleon, ξ the skewness variable and t the Mandelstam variable, which represents the four-momentum transfer squared to the nucleon. The symbols γ and σ are the Dirac matrices,
u and u̅ are nucleon spinors and m_N is the mass of the nucleon. Here, F^q and F^g are both expressed as a Fourier transform of a matrix element of a chiral-even operator formed from either quark fields ψ^q or the gluon-field strength tensor F^μ ν. The result is a decomposition into twist-2 parton-helicity conserving GPDs H and E.
GPDs cannot be directly extracted from experimental data. Indeed, in the expression of the cross section of exclusive electroproduction processes, GPDs appear in convolution integrals known as Compton Form Factors (CFFs). These CFFs are complex quantities, the real and imaginary parts of which provide complementary constraints on GPDs. The DVCS
CFF ℋ, at leading-twist and leading-order (and at fixed momentum transfer t and skewness ξ), for example, is given by
ℋ = ∫_{-1}^{1} dx F^q(x,ξ,t)/(x-ξ+iϵ) = 𝒫∫_{-1}^{1} dx F^q(x,ξ,t)/(x-ξ) - iπ F^q(±ξ, ξ, t),
and with
σ(γ^* p →γ p) ∝ |ℋ|^2.
In addition, there are also spin-dependent GPDs, which are probed in measurements in which the spin or polarisation state is fully defined. If the spin states are averaged over, as in the description of an unpolarised measurement, then there is no way to have a direct dependence on, or be sensitive to, these objects. Moreover, there are also parton-helicity-flip GPDs (chiral odd), in which the initial- and final-state hadrons have different polarisations.
GPDs are also connected to the distribution of pressure and shear forces inside the nucleon <cit.> and, furthermore, the second moment of a particular combination of GPDs is related to the
angular momenta of quarks and gluons via Ji's relation <cit.>. A comprehensive review on the phenomenology of GPDs in DVCS can be found in <cit.>.
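For reference, and restricting to quarks for definiteness, Ji's relation reads
J^q = (1/2) ∫_{-1}^{1} dx x [ H^q(x,ξ,t=0) + E^q(x,ξ,t=0) ] ,
independently of the value of ξ; the gluon case is analogous, with the precise x-weighting depending on the convention adopted for H^g and E^g.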
§.§.§ Gluons
DVCS is sensitive to quarks and, at higher order and/or higher twist, also to gluons. On the other hand, the production of light mesons in DVMP probes quarks and gluons, depending on the energy scale at which the process is measured. However, J/ψ production in exclusive photoproduction (or electroproduction) reactions is a golden channel for gluon GPDs. Indeed, in this case the quark exchange plays only a minor role and, due to the large scale provided by the heavy-quark mass, perturbative calculations are expected to be applicable even for photoproduction <cit.>.
At the EIC, precise measurements of exclusive cross sections will be possible in order to map out the dependence on the squared momentum transfer to the nucleon t=(P_N-P_N')^2 for J/ψ, ϕ and K, among others. The EIC will cover the region of 0 <|t| < 1.5 GeV^2, down to an impact parameter of ∼ 0.1 fm.
Figure <ref> shows the projected precision obtainable at the EIC in the exclusive J/ψ electroproduction cross section as a function of the momentum transfer t to the proton, for different bins in x_{J/ψ}=(Q^2+M_{J/ψ}^2)/(2 p· q),
the x-Bjorken equivalent scale variable for heavy mesons. The projections are produced using the LAGER <cit.> event generator and are based on the calculations presented in <cit.>. LAGER is described as a modular accept-reject generator, capable of simulating both fixed-target and collider kinematics, and has previously been used for vector-meson studies at EIC kinematics, with significant recent developmental effort in support of DVMP studies.
The transverse spatial distribution of partons can be obtained by a Fourier transform of the cross section as a function of t.
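A commonly used leading-order prescription for this step, given here only as a sketch (in schematic notation, and assuming a purely imaginary, sign-definite amplitude), is
F(b_T) ∝ ∫_0^∞ (dΔ/2π) Δ J_0(b_T Δ) √(dσ_coh/dt) , with Δ ≡ √(|t|) ,
where F(b_T) is interpreted as the transverse profile of the gluon source and J_0 is a Bessel function; in a quantitative extraction, corrections, e.g. for the real part of the amplitude, have to be applied.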
The key experimental feature of hard exclusive channels such as J/ψ electroproduction is the detection of the recoil protons in the far-forward detectors, in particular in the B0 spectrometer and the Roman Pots. This allows
for accurate computation of the momentum transfer t, which is the Fourier conjugate variable to the impact parameter. A wide and continuous acceptance that extends to low-t is essential for a precision extraction of transverse-position distributions of partons.
On the other hand, far-forward detectors can also help in detecting
the process where the proton does not stay intact but breaks up. The dominance of this process over exclusive production increases with increasing t. In <cit.>,
it has been shown that the cross-section measurement of dissociative diffractive J/ψ photoproduction at large t as a function of the rapidity gap between the produced J/ψ and the dissociated proton is possible at the EIC. The interest of this process lies in the presence of two comparable hard scales, the charm mass and the large t, and hence the possibility to probe the presence of Balitsky-Fadin-Kuraev-Lipatov (BFKL) dynamics.
§.§.§ Light quarks
In <cit.>, it was shown that the rapidity-differential cross section for exclusive J/ψ photoproduction in heavy-ion ultra-peripheral collisions at NLO decomposes into a complicated interplay of contributions from both the quark and gluon sectors as well as their interference, over the whole region of rapidities accessible at the LHC. In particular, at mid-rapidities the quark contribution was shown to be the dominant player. While such a picture remains in place under a conservative factorisation and renormalisation scale variation, and is reflected in the original work of Ivanov et al. <cit.> in the context of the underlying hard scattering process, γ p → J/ψ p, which drives the ultra-peripheral collisions, and indeed the eA collisions at the EIC, care must be taken to interpret such results. Indeed, it was shown that such a hierarchy arises from a coincidental cancellation of LO and NLO gluon contributions together with the positive-definite quark contribution at NLO. At NNLO, when there are also interference contributions wholly within the quark sector, one may anticipate a different final picture. Υ photoproduction, on the other hand, sitting at a higher scale, does not exhibit such a complicated interplay of contributions at NLO, see <cit.>, with the gluon contribution dominating over all rapidities. The results are therefore indicative of the long-standing problem of the scale dependence and perturbative instability exhibited by low-scale processes. Indeed, after the so-called `Q_0 subtraction' <cit.> discussed in Sect. <ref>, the quark contribution to the amplitude becomes negligible. A new study <cit.>, which includes the high-energy resummation effects in the coefficient function of exclusive J/ψ photoproduction in the HEF formalism, similar to the one applied in the inclusive case <cit.>, supports this conclusion.
§.§ Generalised TMDs
The non-perturbative structure of hadrons can be described in terms of parton correlation functions such as form factors, 1D PDFs and their 3D generalisations in terms of TMDs and GPDs. All these functions can be derived from more general objects called GTMDs <cit.>. Hence, GTMDs are also known as the “mother distributions”. There are several compelling reasons to study GTMDs. Firstly, GTMDs contain physics that goes beyond the content encoded in the TMDs and GPDs. Secondly, via Fourier transformation, GTMDs can be related to Wigner functions, a concept that spans across other branches of physics as well. Partonic Wigner functions may allow for a hadron tomography in 5D phase-space <cit.>. Thirdly, certain GTMDs can unravel unique correlations between parton orbital motion and spin inside hadrons <cit.>. In particular, the Wigner distribution can be used for a gauge-invariant definition of the canonical orbital angular momentum <cit.>, which makes this quantity also accessible for calculations in lattice QCD <cit.>. Fourthly, there is a particular GTMD that is related to the Sivers TMD. By establishing a relation between GTMDs and the QCD odderon at small x, the authors in Ref. <cit.> have shown that one can access the gluon Sivers TMD through exclusive π^0 production in unpolarised ep scattering. This finding goes against the traditional belief that the Sivers function can only be measured with a transversely polarised target.
For a long time, it was questionable whether GTMDs could be measured at all. The authors in Ref. <cit.> were the first to propose addressing gluon GTMDs through exclusive diffractive dijet production in lepton-nucleon/nucleus collisions at small x (see left panel of fig:exclusive-dijet). The GTMDs depend on the average transverse parton momentum k⃗_⊥ and the transverse momentum transfer to the target Δ⃗_⊥, and it is possible to decompose the angular correlation between these two vectors into a Fourier series. The leading angular dependent term, known as the elliptic distribution, has a characteristic cos(2ϕ) angular modulation similar to the observed elliptic flow phenomenon in relativistic heavy-ion collisions <cit.>. It was shown that the cross section of this diffractive dijet process also exhibits such a cos(2ϕ) behavior where ϕ is now the angle between the dijet total and relative momenta. The pioneering work in Ref. <cit.> gave impetus to the field of GTMDs and subsequently many other interesting ideas were put forward; see, for instance, Refs. <cit.>.
An alternative idea <cit.> is to exclusively produce a single particle (instead of two jets) such as a J/ψ. The role of the second jet is now played by the scattered electron which must be detected. It has been shown that in this process the elliptic cos 2ϕ correlation of the gluon GTMD manifests itself in the angular correlation between the scattered electron and the J/ψ <cit.> (or the recoiling proton/nucleus <cit.>). For a proton target, a sizable v_2 of a few percent or larger has been predicted <cit.>. The same effect can also be seen in DVCS, but J/ψ production is more promising since there is no contamination from the Bethe-Heitler
process. In the GPD-based approach to DVCS, the same angular correlation is known to be generated by the so-called gluon transversity GPD. The elliptic gluon GTMD is the mother distribution of the gluon transversity GPD <cit.>.
Quarkonium production processes are also useful to study other aspects of GTMDs.
In Ref. <cit.>, it was shown that exclusive double production of pseudo-scalar quarkonia (η_c/b) in hadronic collisions could serve as a direct probe of GTMDs for gluons at moderate x (see right panel of fig:exclusive-dijet). A similar idea came out in Ref. <cit.> where the authors proposed to access the Weizsäcker-Williams gluon GTMD at small x via double χ_cJ or η_c meson production in
diffractive pp/pA collisions where (one of) the proton(s) stays intact.
At the EIC, the primary process to look for gluon GTMDs is exclusive diffractive dijet production, as mentioned above. A challenge, however, is that due to the limited
centre-of-mass energy, the transverse momenta of diffractively produced particles in the forward rapidity region are often not large enough to cleanly reconstruct jets.
As a first step to test the underlying GTMD picture of exclusive diffractive production processes, like dijet or J/ψ electro- and photoproduction at small x, a GTMD model can be fitted to existing HERA data. Predictions can then be obtained for EIC in different kinematic regions. This has been considered for dijet production in <cit.>, where it was shown that a gluon GTMD model based on the impact-parameter-dependent McLerran-Venugopalan model can give a reasonably good description of diffractive dijet production data from
H1 <cit.>.
The same framework (slightly extended) can be applied to exclusive diffractive J/ψ production to describe the H1 and ZEUS data, as shown in fig:JPsi on the left (√s = 319 GeV). With the resulting GTMD parametrisation, predictions for exclusive diffractive J/ψ production at the EIC can be obtained. These are shown for √s = 45 and 140 GeV in fig:JPsi on the right.
Generally, at small x, and in particular for nuclear targets, a GTMD-based description becomes more appropriate for exclusive and diffractive processes.
Exclusive quarkonium production at the EIC could be used to systematically study the transition between the collinear and k_⊥-dependent frameworks.
§.§ Exclusive quarkonium production near threshold and the trace anomaly
It was noticed long ago that the mass M of a hadronic system can be expressed in terms of the forward matrix element of the trace of the QCD energy-momentum tensor
as <cit.>
2M^2 = ⟨ p| (β/2g) F^2 + (1+γ_m) ψ̄ m ψ |p⟩,
where β is the QCD beta function, γ_m is the mass anomalous dimension, and the operator β/(2g) F^2 + γ_m ψ̄ m ψ is the QCD trace anomaly <cit.>. The decomposition of the r.h.s. of Eq. (<ref>)
into quark and gluon contributions has been discussed in detail <cit.>. Other mass decompositions, based this time on the QCD Hamiltonian, have also been proposed in the literature <cit.>. The latter all require the knowledge of the same four quantities, combined in different ways for the physical interpretation <cit.>. Two of these quantities, namely the quark momentum fraction A_q(0)=⟨ x⟩_q and the gluon momentum fraction A_g(0)=⟨ x⟩_g, are already well known. The other two numbers, C̅_q(0) and C̅_g(0), can be determined by measuring the quark and gluon contributions to Eq. (<ref>). While the quark condensate ⟨ p| ψ̄ m ψ |p⟩ has already received a lot of attention over the last decades (see <cit.> and references therein), little is known so far about the gluon condensate ⟨ p | F^2 | p ⟩ from the experimental side.
Four-momentum conservation implies that A_q(0)+A_g(0)=1 and C̅_q(0)+C̅_g(0)=0. From a phenomenological point of view, the knowledge of A_q(0) and the quark condensate is therefore sufficient for specifying all the contributions to the various mass decompositions (see <cit.> for recent estimates). Measuring the gluon condensate is not expected to change much the current phenomenology of the nucleon mass, but it will provide a fundamental sanity check of the mass sum rules and the virial theorem <cit.>. Another motivation for measuring the gluon condensate is that it could shed light on the existence and nature of the recently discovered LHCb “pentaquark” states <cit.>.
More than two decades ago, exclusive heavy-quarkonium production, near the production threshold, was suggested as a promising tool for constraining the gluon condensate in the nucleon <cit.>.
This development, together with the prospect of obtaining through this process further information about the gravitational structure of the nucleon, which is contained in the form factors of the energy-momentum tensor (such as the mass radius and mechanical pressure distributions <cit.>), as well as the measurement of exclusive J/ψ photoproduction near threshold at Jefferson Lab <cit.>, has stimulated a significant amount of activity in this area <cit.>.
Recently, it was argued that the extraction of the gravitational form factors through exclusive quarkonium photoproduction will necessarily retain model dependence <cit.>. Generally, access to the gravitational structure of the nucleon is expected to be cleaner for electroproduction <cit.>.
At the EIC, one would have the unique opportunity to explore photo- and electroproduction of both J/ψ and Υ close to threshold <cit.>.
§.§ Probing double parton scattering at the EIC with quarkonium pairs
§.§.§ A word of context
In this section, we study the possibility of observing double-J/ψ production at the EIC. In particular, we discuss both the single-parton-scattering (SPS) and the double-parton-scattering (DPS) mechanisms, which could lead to the observation of a pair of J/ψ mesons. In fact, the cross section for the latter case would allow one to access new information on the so-called proton
double-parton-distribution functions (dPDFs), which encode novel information on the partonic structure of the proton.
Let us recall the analysis of four-jet photoproduction at HERA, which pointed out the relevance of multi-parton interactions (MPIs) to account for the measured total cross section <cit.>.
In Ref. <cit.>, the DPS cross section for four-jet photoproduction was
calculated.
Such DPS reactions are initiated by a quasi-real photon <cit.> splitting into a q q̅ pair. The same strategy as for pp collisions <cit.> has been used to evaluate the photoproduction cross section. At this stage, the only missing quantity was σ_eff^γ p, the effective size of the photon-proton interaction, which is expected to be process independent. It was estimated
for the first time <cit.> and compared to that of the pp case from Refs. <cit.>. The four-jet DPS cross section has then been calculated for the HERA kinematics <cit.> to be σ_DPS^4j≥ 30 pb, while the total one was inferred from <cit.> to be σ_tot^4j∼ 135 pb at x_γ < 0.75. This indicated that the DPS contribution is sizeable even in photon-induced reactions for the production of four jets
and that it could also be so for other processes like quarkonium-pair production. Further analyses of the HERA data could lead to the extraction of σ_eff^γ p and, in turn, provide a first access to the mean transverse distance between two partons in the proton,
an unknown property of the proton structure. To this aim, the needed luminosity was evaluated to be ℒ∼ 200 pb^-1 <cit.>. Double-J/ψ production from DPS at the EIC will be presented below along the same lines.
§.§.§ DPS at the EIC and -pair production
Here we discuss J/ψ-pair photoproduction at the EIC. In ep collisions, the radiated quasi-real photon can interact with the partons
within the proton in two ways, namely as a “pointlike” particle and via its “resolved” hadronic content. In the first case, the photon “directly” interacts with the target while, in the latter case, the photon splits into (colour charged) partons, which subsequently interact with partons in the proton.
The treatment of the interaction between a proton and such a resolved photon is
carried out by using a PDF describing the momentum distributions of these partons inside the photon. One of these is the GRV <cit.> set, which is
adopted here.
As regards the quarkonium-production mechanisms, the CSM (i.e. the leading-v^2 contribution of NRQCD) is used.
fig:DPS_SPS shows different Feynman graphs for SPS and DPS photoproduction.
In the SPS case, the contributing channels at leading order, αα_s^4, are shown in fig:DPS_SPS(a-c), namely, γ q→ J/ψ+J/ψ+q, g g→ J/ψ+J/ψ and q q̅→ J/ψ+J/ψ. However, the graph in fig:DPS_SPS(d) contributes at order αα_s^5, i.e. via the SPS γ g→ J/ψ+J/ψ+g+g. The gluon-initiated channel in DPS for di-J/ψ production at α_s^6 is shown in fig:DPS_SPS(e), while the quark-initiated channel does not contribute in the CSM at order α_s^6. The partonic channel
gg→ J/ψ+g
dominates for single-J/ψ production.
The SPS cross section, the squared matrix elements convoluted with single-parton PDFs, can be calculated using
HELAC-Onia
<cit.>. In order to estimate the DPS cross section, we need to use the poorly known proton dPDFs, which
provide the number densities of a parton pair with a given transverse distance b_⊥ and carrying the longitudinal momentum fractions (x_1,x_2) of the parent hadron <cit.>.
Assuming that dPDFs can be factorised in terms of
ordinary 1D PDFs and a transverse part, the DPS cross section can be expressed in terms of two SPS cross sections for the production of each
of the observed particles among the pair:
σ^{(J/ψ,J/ψ)}_{DPS} = (1/2) σ^{(J/ψ)}_{SPS} σ^{(J/ψ)}_{SPS} / σ_{eff}^{γ p} ,
which is the so-called "DPS pocket formula", valid under the assumption of
totally uncorrelated
kinematics between both parton scatterings.
Here, σ^{(J/ψ)}_{SPS} is the SPS contribution for single-J/ψ production.
In the present study, within the mentioned assumptions, one gets:
σ_eff^{γ p} = [ ∫ d^2k⃗_⊥/(2π)^2 F_2^γ(k⃗_⊥,Q^2) F_2^p(k⃗_⊥) ]^{-1}
where F^{p(γ)}_2(k_⊥) parametrises the transverse structure of the proton (photon) <cit.>. For the photon, the only available calculation is that of Ref. <cit.> while, for the proton, there are several models based on the data for
DPS in pp collisions.
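As a purely illustrative numerical sketch of the pocket formula above, the following snippet shows the expected order of magnitude and the unit bookkeeping; the input values (a single-J/ψ SPS cross section of 10 nb and σ_eff^γ p = 5 mb) are hypothetical placeholders and not results of the study discussed here.

```python
def dps_cross_section_nb(sigma_sps_jpsi_nb, sigma_eff_gamma_p_mb):
    """DPS pocket formula for J/psi-pair photoproduction:
    sigma_DPS = sigma_SPS^2 / (2 * sigma_eff), with the factor 1/2 for
    identical particles in the final state.  Returns the result in nb."""
    sigma_eff_nb = sigma_eff_gamma_p_mb * 1.0e6   # 1 mb = 10^6 nb
    return sigma_sps_jpsi_nb**2 / (2.0 * sigma_eff_nb)

# Hypothetical inputs: sigma_SPS(J/psi) = 10 nb, sigma_eff^{gamma p} = 5 mb
print(f"sigma_DPS ~ {dps_cross_section_nb(10.0, 5.0):.1e} nb")   # ~1.0e-05 nb, i.e. about 10 fb
```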
Recently, several experimental analyses on DPS have been carried out for the production of J/ψ + W <cit.>, J/ψ+Z <cit.>, J/ψ+charm <cit.> in pp and J/ψ+J/ψ <cit.> in pp̅
processes. A comprehensive comparison between theory and experiments for di-J/ψ production at the Tevatron and the LHC has been presented in <cit.>, and it
was observed that DPS dominates the yield at large J/ψ-rapidity difference.
DPS has also been studied for J/ψ-pair production for the LHC fixed-target (also referred to as AFTER@LHC) kinematics in <cit.>.
At the EIC, LO computations using HELAC-Onia show that measurements are possible at √s = 140 GeV, with SPS contributions generally dominant over the DPS ones, but there are certain regions of phase space (low z and large Δ y) where DPS cannot be disregarded. If σ_eff^γ p is not too small, DPS events could be measured. In these regions, there is thus a compelling opportunity to distinguish between the resolved and unresolved contributions in the cross section and thereby to gain valuable insight into the internal structure of photons and protons.
§ QUARKONIA AS TOOLS TO STUDY THE PARTON CONTENT OF NUCLEI
§.§ Nuclear PDFs
Decades of experimental and theoretical studies showed that the distributions of partons in a nucleus are considerably modified compared to the nucleon ones. While significant progress has been made since the initial observation of the modification of PDFs in bound nucleons by the EMC Collaboration <cit.>, our understanding of nuclear PDFs (nPDFs) is still not satisfactory, most notably in the case of gluons. Measurements of quarkonium production in
eA reactions can bridge this knowledge gap.
One of the main EIC goals is a high-precision survey of the
partonic structure of the nucleus
to significantly advance
our quantitative understanding of
nPDFs. The EIC
will offer the possibility
to study nPDFs over a broad range of momentum transfers
<cit.>. An improved knowledge of nPDFs will enable more precise theoretical calculations for
nuclear effects and increase the scientific benefit of already successful heavy-ion programmes at RHIC and LHC.
A widely accepted approach to quantify nuclear effects in
PDFs is to start with proton PDFs and use a function R(x,Q^2) that captures the modification of a given PDF in a nucleus. Experimentally, such a modification could be studied by a ratio of structure functions F_2
or by the so-called nuclear modification factor as done by RHIC and LHC experiments. In the case of eA collisions, R(x,Q^2) is defined as
R_eA = (1/A) (d)σ_eA/(d)σ_ep ,
where (d)σ_eA and (d)σ_ep are the cross sections for
the process under consideration, respectively, in eA and ep reactions, while the mass number A serves as a normalisation factor. Note that these cross sections can be differential in different kinematical variables. With the definition of eq:R_eA, R_eA = 1 in the absence of nuclear effects. In the following, we review and quantify prospects for nuclear-PDF determination at the EIC via J/ψ measurements.
§.§.§ Gluons
In order to give an
estimate of the potential impact of the EIC on nPDF determination,
the nuclear modification factor
R_eAu,
which can be measured in inclusive J/ψ photoproduction in
eAu reactions,
is compared with projected statistical uncertainties. Such a prediction
is shown in fig:R_eAu-45GeV and fig:R_eAu-90GeV at two different values of the centre-of-mass energy √(s_eN),
45 GeV and 90 GeV, as a function of the J/ψ rapidity in the Nγ centre-of-mass frame
[Note that we adopt the same kinematical configuration as the EIC Yellow Report, with the proton(ion) moving along +ẑ and the electron along -ẑ (see also Fig. <ref>).] and as a function of W_γ N. Kinematical cuts are applied on the
elasticity (0.2 < z < 0.9) and on the pseudorapidity of the electron pair coming from the J/ψ→ e^+e^- decay (|η_ee| < 3.5). Different cuts on W_γ N are applied for the rapidity spectra at the two different energies.
The nuclear-modification-factor predictions are calculated using HELAC-Onia <cit.>, adopting the CT14nlo set <cit.> as a proton PDF baseline and using two different nuclear PDF sets for the gold nucleus, namely EPPS16nlo <cit.> and nCTEQ15FullNuc <cit.>. Factorisation and renormalisation scales are taken to be the J/ψ transverse mass, μ_F = μ_R = m_T=√(M^2_J/ψ+^2). Note also that, since these predictions are calculated at LO in the CSM, where the only partonic subprocess is γ + g → J/ψ + g, they can be directly interpreted as R_g, the nuclear modification factor for the gluon nPDF. The statistical projections are calculated assuming
R_eAu=1 (using the central value of CT14nlo) and assuming an integrated luminosity of 10 fb^-1/A.
The branching ratio for the J/ψ→ e^+e^- decay was taken to be 5.94% and a reconstruction efficiency of 64% was assumed (considering an average identification efficiency of the electrons from the decay to be approximately 80%).
Some comments are in order. First, as can be seen in fig:R_eAu-45GeV and fig:R_eAu-90GeV,
J/ψ is expected to be mostly produced in the backward region in the
γ N centre-of-mass frame
as the yield essentially vanishes at positive rapidities (see the increase of the statistical uncertainties of our projections). This happens for both energy configurations. Second,
the regions where shadowing (relative parton depletion at x smaller than 0.01), antishadowing (relative parton excess at x around 0.11) and the EMC effect (relative parton depletion for 0.3 < x < 0.7) take place
can be probed at the
EIC via J/ψ photoproduction. The antishadowing peak is expected to be observed at moderate backward rapidity in the
γ N centre-of-mass frame, while the shadowing region would be probed at larger negative rapidities. Such regions are also
those where the projections point to a smaller statistical uncertainty compared to the PDF and scale uncertainties,
i.e. the regions where the gluon nPDFs would be most constrained.
The W_γ N dependence of the nuclear modification factor would also be a very interesting tool to probe gluon nPDFs. A large shadowing tail is expected to be probed for larger values of W_γ N, while clear antishadowing peaks are expected in the region W_γ N∈ [10:20] GeV, in both energy configurations. The projected uncertainties are also
small, and seem to have an interesting constraining power for the gluon nPDFs. More detailed dedicated studies are surely required and would help in motivating new measurements to probe gluon nPDFs at the
EIC.
fig:jpsi-pt-eAu-eic presents predictions for the P_T dependence of R_eAu for J/ψ production at √s = 100 GeV
by using the same factorisation formalism in eq:jpsi-lp-fac, with proton PDFs replaced by nuclear PDFs for the eA collision. The total, LP and NLP contributions are shown.
The EPPS21nlo central set <cit.> is used as nPDF.
Since the J/ψ production rate is dominated by the γ+g→ [cc̅] + g subprocess, this ratio is directly sensitive to the nuclear dependence of the gluon PDF. At EIC energies,
the P_T distribution of J/ψ production is sensitive to the gluon at a relatively large momentum fraction due to the soft-photon distribution in
the incoming electron. The enhancement of the production rate in
eAu over ep collisions in fig:jpsi-pt-eAu-eic is a direct consequence of the “antishadowing” behaviour of the nuclear gluon distribution from the EPPS21nlo nuclear PDF set. Since the quark-initiated subprocesses dominate the LP contribution, the ratio of the LP contribution (blue dashed and red dotted lines) shows the well-known EMC-type effect from nuclear quark PDFs. However, this feature of the LP contribution does not have a real impact on the observed nuclear dependence of the P_T distribution of J/ψ production at EIC energies (the solid line), since the LP contribution is strongly suppressed; that is, the P_T distribution of J/ψ production at the EIC should also be an excellent observable for probing the nuclear gluon PDF.
§.§ Nuclear GPDs
In coherent diffractive production of vector mesons off a nucleus, the light (photon) generated by the electron interacts, similarly to optical experiments of diffraction, with the nucleus as a whole, resulting in the production of a vector meson in the final state.
This process has been proposed as a tool to investigate gluon saturation dynamics <cit.>. Here, the production of lighter vector mesons, such as the ϕ meson, is expected to be sensitive to saturation effects. On the other hand, the production of quarkonia would, because of the heavier quarkonium mass (and thus the smaller size of the dipole formed by the quark–antiquark pair that evolves into the vector meson), not be optimal for studying gluon saturation and would rather serve as a baseline free from saturation effects. Diffractive production also gives access to the spatial distribution of partons inside the nucleus. While coherent diffractive production provides information on the average spatial distribution of partons, incoherent production, where the nucleus does not stay intact, probes local fluctuations of this spatial distribution <cit.>.
For the study of the spatial distribution of gluons in heavy nuclei, in particular, the diffractive production of a quarkonium, such as a J/ψ, is most adequate. For the coherent process, the momentum transfer distribution
√(|t|) from the photon to the target nucleus is expected to exhibit a diffractive pattern, where the details of the shape of this pattern encode information on the gluon GPD <cit.>. An example of such a diffractive pattern is shown in Fig. <ref>, as represented by the square symbols. The data points have been simulated using the Sartre Monte-Carlo event generator <cit.>. Results including (filled symbols) and excluding (open symbols) saturation effects are shown. In addition to the diffractive coherent production, the expected incoherent contribution (circles) is shown.
As can be seen, apart from the very low |t| region, the incoherent contribution dominates
the coherent one.
Elastic and inelastic diffractive quarkonium production off the proton has been studied at the HERA lepton-proton collider experiments H1 <cit.> and ZEUS <cit.>, while a first measurement of exclusive J/ψ photoproduction at threshold has been performed in the fixed-target experiment GlueX at Jefferson Lab <cit.>.
At hadron-collider experiments, diffractive quarkonium production has been investigated in pp̅ collisions <cit.> at the Tevatron, in various collision systems <cit.>, <cit.>, <cit.> at the LHC, and in dAu <cit.> and AuAu <cit.> collisions at RHIC.
The existing measurements off nuclei are at present restricted in statistical precision, while only offering a rough determination of the momentum transfer √(|t|) and in general a limited separation of coherent and incoherent production. Hence, the knowledge on the gluonic structure of nuclei is at present poor, with many fundamental questions unanswered.
The EIC is expected to perform measurements of diffractive vector-meson production off light and nuclear ions with unprecedented precision. The two experimental challenges consist in determining t with high precision and in
distinguishing coherent from incoherent events <cit.>. Recently, the capability of proposed EIC detectors in reconstructing t and
their ability to suppress incoherent production have been examined <cit.>, <cit.>, <cit.>.
The variable t needs to be reconstructed from the scattered lepton and reconstructed vector meson, since in coherent production the trajectory of the ion after the interaction is nearly unmodified and thus the ion cannot be detected, while in the case of incoherent production not all fragments from the nuclear break up can be detected. The distribution in |t| for coherent diffractive J/ψ production off gold ions is shown in Fig. <ref>, left.
Here, |t| is reconstructed as the squared sum of the transverse momenta of the scattered lepton and of the lepton pair originating from the J/ψ decay.
It forms a good approximation for the true -t.
The data have been simulated again with Sartre and subsequently passed through a full simulation of the ePIC detector.
The histogram represented by the continuous line is the generated distribution, while the other curves represent the reconstructed distribution, with beam effects.
The latter include an angular divergence originating from the focussing and defocussing quadrupoles in the interaction region and
a small angular kick from the crab cavities. The crossing angle from the beams in principle also influences the t distribution, but contrary
to the other effects it can be corrected for.
For the curve indicated by the open, blue circles only information from tracking detectors is used for the reconstruction of the scattered lepton, while for the curve indicated by the black, closed circles only information from the backward electromagnetic calorimeter is used for the reconstruction of the scattered lepton. The curve indicated by the red, open circles selects the best of the two methods. As can be seen, the quality of the reconstruction in t is strongly dependent on the quality of the reconstruction of the scattered beam lepton. In the diffractive process the beam lepton generally is scattered under a small angle and covers a region where the tracking performance is degraded. Using in addition the electromagnetic calorimeter in the backward region for the reconstruction of the scattered lepton improves the reconstruction in t vastly.
The spatial distribution of partons in impact-parameter space is obtained via a Fourier transformation of the t-dependence of the cross section, with t running from 0 to infinity <cit.>. Experimentally, one is limited by a maximal momentum transfer, which preferably extends as far as possible. In practice,
studies have shown that it is necessary to resolve the minima up to the third one for the evaluation of the spatial distribution <cit.>. This dictates the needed level of suppression
of the incoherent contribution. The suppression of incoherent events includes the requirement of exactly three reconstructed lepton tracks with the correct charge in absence of any other signal in the main detector and various criteria corresponding to the absence of signal in a series of far-forward detectors, which can tag protons (Roman Pots for protons with energy close to the beam energy and the B0 spectrometer and off-momentum detectors for nuclear-breakup protons), neutrons (Zero-Degree Calorimeters) and photons (B0 and Zero-Degree Calorimeters). The capability to suppress incoherent production is illustrated in Fig. <ref>, right, which shows the -t distribution for coherent and incoherent production off gold nuclei.
The former is again simulated using Sartre, while for the latter the BeAGLE generator <cit.> is used.
The generated coherent (incoherent) contribution is represented by the continuous (dotted) line. The generated data are passed through
a full simulation of the ATHENA detector. The effect of data selection requirements on the event activity in the main detector and on the absence of activity in the far-forward detectors,
based on the studies in Ref. <cit.>, is represented by the
blue, open circles. As can be seen, the obtained distribution lies close to the distribution from coherent events simulated by Sartre. The remaining contribution from incoherent events is given by the red, star symbols. The largest suppression of the incoherent process comes from the requirement on the absence of any neutron signal in the Zero-Degree Calorimeter, while the requirement on the absence of photon signals in this Zero-Degree Calorimeter
also has an impact.
Ways to further improve the reconstruction of t and the suppression of incoherent production are at present under investigation.
The study of light nuclei can offer additional insights into the
internal structure of the nuclear medium.
In contrast to
measurements with heavy nuclei, the total final state in incoherent diffractive production off light nuclei can be
unambiguously identified through tagging of the spectator nucleons.
Such measurements are of interest when studying the short-range correlation (SRC) of a nucleon pair, which is the temporal fluctuation of two nucleons into a strongly interacting pair in close proximity and large measured relative momentum <cit.>.
SRC pairs are suggested as a possible explanation for the nuclear modification of the momentum distribution of high-x partons, known as the EMC effect,
with a strong correlation between the two phenomena suggested by measurements by the CLAS experiment at Jefferson Lab <cit.>; a quark-level QCD basis for SRCs has been proposed for the lightest nuclei <cit.> and for A≥4 nuclei <cit.>.
The simplest nuclear system is the deuteron, and the first measurement of incoherent diffractive production with spectator tagging was that of incoherent diffractive J/ψ production in ultra-peripheral
dAu collisions by the STAR experiment at RHIC <cit.>, with tagging of the spectator neutron in the Zero-Degree Calorimeter.
At the EIC, similar measurements can be performed with enhanced precision, and studies of incoherent diffractive J/ψ production off the deuteron at the EIC have been proposed to study the nuclear modification of the
gluon distribution and its possible link with the SRC <cit.>.
For the proposed measurement, the scattered lepton and J/ψ decay leptons are reconstructed in the main detector, while both the leading
and spectator nucleon (neutron and proton) can be detected in the far-forward detectors. The detection of both nucleons instead of only one
offers certain advantages in the reconstruction of the event and some
kinematic variables <cit.>.
In Fig. <ref>, the three-momentum distribution of the tagged neutron (left) and tagged proton (right) in the deuterium rest frame is illustrated for incoherent diffractive production of J/ψ in the scattering of 18 GeV electrons off 110 GeV deuterons at the EIC, as
simulated with BeAGLE <cit.>.
The star symbols represent the generated distribution, the open circles represent the distribution including acceptance effects of the main and far-forward detectors, and the open squares also take
the finite
detector resolution and beam effects into account. The momentum distribution of the tagged nucleon reflects the initial-state momentum
of the nucleons inside the deuteron.
The region above 300 MeV corresponds to the region of the SRC, and as visible in the figures, the EIC will be able to provide a good reconstruction of the tagged-nucleon momentum. A similar statement holds for the reconstruction of
other variables of interest <cit.>.
§.§ Study of transport properties of nuclear matter
A vital element in portraying nuclear matter is obtaining information on how the medium responds to a parton traversing it. This response is characterised by transport coefficients, such as a diffusion coefficient or
q̂, the mean squared momentum transfer between the propagating particle and the medium per unit length. Transport coefficients are an essential ingredient in the modelling of nuclear reactions, and determining these parameters is one of the main goals of experimental and phenomenological efforts in high-energy nuclear physics.
Measurements of hadron production in pA collisions have shown a broadening of the transverse momentum distribution at intermediate hadron transverse momentum compared to pp reactions. This phenomenon is visible over a wide range of hadronic collision energies, starting from collision energies of ≈ 20 GeV <cit.> up to 200 GeV at RHIC <cit.>. The Cronin effect is also anticipated for quarkonium production in pA collisions <cit.>. A similar effect was observed also in semi-inclusive deep-inelastic scattering off nuclei by the HERMES experiment <cit.>. One possible source of this effect is the multiple scattering of the struck parton while traversing the nucleus, which broadens the parton transverse momentum. Under this assumption, this momentum broadening can be related to the transport properties of matter, expressed by the transport coefficient q̂.
Other effects, like nuclear absorption and parton energy loss, are also expected to contribute when studying particle production in nuclear matter.
Additional measurements of such transverse-momentum spectra in
ep and
eA collisions at the EIC can help to discriminate between models and constrain their parameters, including the relative role of multiple scattering and nuclear absorption. Such a programme
will greatly extend the studies pioneered by the HERMES collaboration.
We present here an example of the calculation of the expected modification of the quarkonium energy spectrum in
eA collisions due to multiple scattering of the parton in the medium. The study is based on an earlier work <cit.>, where a microscopic approach was adopted for the calculation of the decay of J/ψ and Υ in the QGP. Here the QGP medium is replaced with cold nuclear matter, specifically with a large gold
nucleus, and its properties are constrained taking into account various nuclear effects: nuclear shadowing <cit.>, coherent QCD multiple scattering <cit.>, initial- and final-state parton energy loss <cit.>, and initial and final scattering effects (including multiple scattering) <cit.>.
To study the nuclear modification, the ratio of cross sections for quarkonium production in reactions that involve a nucleus
to those of a proton baseline is used:
R_AA = 1/⟨ N_bin⟩ dσ_AA/dσ_pp ,
R_eA = 1/A dσ_eA/dσ_ep .
Here, A and the average number of nucleon-nucleon collisions ⟨ N_bin⟩ provide the relevant normalisation factors such that in the absence of nuclear modification the ratios are unity. R_AA quantifies the suppression from the QGP, including thermal dissociation in the QGP, while R_eA provides the cold nuclear-matter counterpart.
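As a simple illustration of how these normalisation factors enter in practice, the short Python sketch below evaluates both ratios from binned differential cross sections; all numerical values and variable names are placeholders of our own choosing rather than results of the analysis.

import numpy as np

def nuclear_modification(dsigma_A, dsigma_baseline, norm):
    """Generic nuclear modification factor: (1/norm) * dsigma_A / dsigma_baseline."""
    return np.asarray(dsigma_A) / (norm * np.asarray(dsigma_baseline))

# Placeholder differential cross sections in a few p_T bins (arbitrary units).
dsigma_eA, dsigma_ep = np.array([4.0, 2.0, 0.9]), np.array([0.05, 0.03, 0.015])
dsigma_AA, dsigma_pp = np.array([1.2, 0.6, 0.2]), np.array([0.01, 0.006, 0.003])

A = 197             # mass number of a gold nucleus
N_bin_avg = 1000.0  # placeholder <N_bin> for a central AA event

R_eA = nuclear_modification(dsigma_eA, dsigma_ep, norm=A)          # cold nuclear matter
R_AA = nuclear_modification(dsigma_AA, dsigma_pp, norm=N_bin_avg)  # hot QGP
print(R_eA, R_AA)  # values of unity would indicate no nuclear modification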
A preliminary study demonstrates that most
quarkonium states show a larger R_eA compared to R_AA, and thus a decreased suppression, with the exception of the state, which sees an increase in suppression by roughly 20%, and of the χ_b(1P) state, which sees a relatively low increase in suppression of roughly 10%.
The χ_c state experiences a significant decrease of about 50%
in the suppression factor. The
Υ states follow
an analogous trend, with decreased suppression of around 25% for Υ(1S) and Υ(2S) and 90% for Υ(3S). Finally, χ_b(2P) and χ_b(3P) show decreases in their suppression factors of roughly 55% and >95%, respectively.
The overall trend seems to indicate that highly suppressed states see the largest decrease in suppression, while the least suppressed states show either a small decrease or a slight increase in their suppression factors. All states retain a similar amount of E dependence, which is not surprising given that it is assumed that the time for the onset of the interaction is τ_form = 1 fm. We direct the interested reader to <ref> for more details.
These preliminary results show that one can expect a significant modification due to cold nuclear-matter effects, which should allow for experimental investigation of these effects at the EIC. Thus,
quarkonium studies in
eA collisions at the EIC will help to understand the impact of different transport coefficients on quarkonium production in reactions that involve heavy nuclei and, in turn, help to calibrate quarkonium as a probe of the properties of matter created in high-energy
pA and AA
collisions.
§ SUMMARY
Quarkonium is an extremely useful tool to probe the internal structure of matter, which is one of the main goals of the Electron-Ion Collider. In this review, we argue that studies of quarkonium production and correlations in (polarised) electron-proton and electron-nucleus collisions can produce unprecedented insights into the 3D structure of the nucleon and into the partonic content of nuclei, as well as help to settle the long-lasting debate on how quarkonia form.
Section <ref> briefly introduced the EIC project, its key parameters, and requirements for an EIC detector. We also defined conventions and basic kinematical quantities useful for describing lepton-hadron reactions. Finally, we made a case for a muon detector for quarkonium studies at the EIC.
Studies of collinear PDFs, form factors, TMD PDFs, GPDs, GTMDs and even double-parton distribution functions can be done at EIC using quarkonium production on a nucleon. In Sections <ref>, <ref> and <ref>, we reviewed the physics case for quarkonium measurements at the EIC. Quarkonium production at large transverse momenta in proton-proton and electron-proton collisions has been studied extensively within the frameworks of NRQCD and collinear factorisation. As discussed in Sections 3.1 and 3.2, it remains a challenge to obtain a simultaneous description of all HERA, LHC and Tevatron data for J/ψ photo- and hadroproduction, η_c hadroproduction, J/ψ+Z hadroproduction, J/ψ polarisation as well as inclusive production in e^+e^- annihilation at B factories.
Further data from the EIC can help but its p_T reach is limited to 10-15 GeV for charmonia and much less for bottomonia. The focus would then be on low-p_T data. The latter needs to be described within the framework of transverse momentum dependent parton distributions (TMDs) and requires the inclusion of so-called shape functions, which are the subjects of Section 3.3. In this way the EIC will provide new data to further unravel the quarkonium production mechanism, while at the same time offer new ways to employ quarkonium production as a tool to study TMDs and other parton distributions (the subjects of Section 4). This applies especially to gluon TMDs about which currently very little is known.
Analogous studies can be performed in electron-nucleus collisions (including, among others, insights into transport properties of nuclear matter), which is the subject of section 5. J/ψ polarisation studies can be done, as well as various spin asymmetry measurements, where the electron, proton and light nuclei can be polarised.
All these observables can contribute to our understanding of hadron structure and hadron formation, in particular those involving heavy quarks.
Overall, the physics case for quarkonium physics at the EIC is very extensive and promising.
§ ACKNOWLEDGEMENTS
We thank M. Chithirasreemadam, M.A. Ozcelik and H.F. Zhang for useful comments and inputs.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the grant agreement No.824093 (STRONG-2020).
This project has also received funding from the French ANR under the grant ANR-20-CE31-0015 (“PrecisOnium”).
This work was also partly supported by the French CNRS via the IN2P3 project GLUE@NLO, via the Franco-Chinese LIA FCPPL (Quarkonium4AFTER), via the IEA No.205210 (“GlueGraph") and “Excitonium”, by the Paris-Saclay U. via the P2I Department and by the GLUODYNAMICS project funded by the "P2IO LabEx (ANR-10-LABX-0038)" in the framework "Investissements d’Avenir" (ANR-11-IDEX-0003-01) managed by the Agence Nationale de la Recherche (ANR), France.
C.V.H. has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska–Curie grant agreement No 792684 and from the programme Atracción de Talento, Comunidad de Madrid (Spain),
under the grant agreement No 2020-T1/TIC-20295.
M.N. has been supported by the Marie Skłodowska-Curie action “RadCor4HEF” under grant agreement No. 101065263.
The work of U.D. and C.P. is supported by Fondazione di Sardegna under the projects “Quarkonium at LHC energies”, No. F71I17000160002 (University of Cagliari) and ”Proton tomography at the LHC”, No. F72F20000220007 (University of Cagliari). The work of C.F. and C.P. is supported by the European Union ”Next Generation EU" program through the Italian PRIN 2022 grant n. 20225ZHA7W.
D.K. was supported by the National Science Centre, Poland, under the research grant no. 2018/30/E/ST2/00089.
P.T. is supported by a postdoctoral fellowship fundamental research of the Research Foundation Flanders (FWO) no. 1233422N.
The work of X.Y. was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics grant DE-SC0011090 and currently by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, InQubator for Quantum Simulation (IQuS) (https://iqus.uw.edu) under Award Number DOE (NP) Award DE-SC0020970 via the program on Quantum Horizons: QIS Research and Innovation for Nuclear Science.
The work of V.C. and R.V. was supported by the Office of Nuclear Physics in the U.S. Department of Energy at Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and the LLNL-LDRD Program under Project No. 21-LW-034 and No. 23-LW-036.
The work of I.V. was supported by the Laboratory Directed Research and Development program at Los Alamos National Laboratory. The work of C.S. was supported by the Indonesia Endowment Fund for Education (LPDP).
S. B. and Y. H. are supported by the U.S. Department of Energy under Contract No. DE-SC0012704, and also by Laboratory Directed Research and Development (LDRD) funds from Brookhaven Science Associates.
The work of A. M. was supported by the National Science Foundation under grant number PHY-2110472, and also by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, within the framework of the TMD Topical Collaboration.
J.R.W. was supported by the EIC Center at Jefferson Lab, the LDRD programs of LBNL and by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract number DE-AC02-05CH11231.
L.C. and S.Y. were supported by the Guangdong Major Project of Basic and Applied Basic Research under the project No. 2020A1515010794.
The work of C.E.H. was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics grant DE-FG02-96ER40960.
J.W.Q. is supported in part by the U.S. Department of Energy (DOE) Contract No. DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates Jefferson Lab.
The work of M.B. was supported by the German Research Foundation DFG through Grant No. BU 3455/1-1.
§ ESTIMATION OF MEASUREMENT EFFICIENCY
The measurement efficiency is calculated using the decay kinematics simulated with PYTHIA8 <cit.> and two cases for the minimum transverse momentum of the electron measurable in the experiment: p_T^ele > 0.2 GeV for a detector with a magnetic field B=1.5 T, and p_T^ele > 0.4 GeV for B = 3 T <cit.>. The single-electron tracking efficiency is assumed to be 80%. Fig. <ref> shows the efficiency as a function of rapidity and p_T: it is approximately constant, and for the B = 3 T case there is a mild decrease of efficiency with increasing p_T due to the decay kinematics. At high p_T, one of the electrons tends to carry the majority of the momentum; thus the p_T of the other falls below the reconstruction threshold. Based on these results, we assume the measurement efficiency to be 64%.
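The quoted 64% is consistent with requiring both decay electrons to be reconstructed at an 80% single-track efficiency (0.8^2 = 0.64) when the kinematic acceptance is close to unity. The toy Monte Carlo below is our own simplified stand-in for the PYTHIA8 decay simulation (isotropic J/ψ → e^+e^- decays, transverse boost only); it is meant only to illustrate how such an efficiency estimate is assembled, not to reproduce the quoted numbers.

import numpy as np

rng = np.random.default_rng(1)
M_JPSI = 3.097  # GeV

def toy_jpsi_efficiency(pt_jpsi, pt_cut, track_eff=0.80, n=200_000):
    """Toy J/psi -> e+e- efficiency: kinematic acceptance of a pT cut on both
    electrons times the squared single-electron tracking efficiency."""
    # Isotropic two-body decay in the J/psi rest frame (electron mass neglected).
    cos_t = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    p_star = 0.5 * M_JPSI
    sin_t = np.sqrt(1.0 - cos_t**2)
    px, py = p_star * sin_t * np.cos(phi), p_star * sin_t * np.sin(phi)
    # Boost along x with the J/psi transverse momentum (longitudinal motion ignored).
    beta = pt_jpsi / np.hypot(pt_jpsi, M_JPSI)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    px_plus = gamma * (px + beta * p_star)    # electron
    px_minus = gamma * (-px + beta * p_star)  # positron (back-to-back in the rest frame)
    pt_plus, pt_minus = np.hypot(px_plus, py), np.hypot(px_minus, py)
    acceptance = np.mean((pt_plus > pt_cut) & (pt_minus > pt_cut))
    return track_eff**2 * acceptance

for pt in (1.0, 5.0, 10.0):
    # approaches 0.8**2 = 0.64 when the pT cut removes few decays
    print(pt, toy_jpsi_efficiency(pt, pt_cut=0.4))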
§ NUMERICAL RESULTS FOR NUCLEAR MODIFICATION R_AA AND R_EA FOR QUARKONIUM PRODUCTION WITHIN THE MICROSCOPIC MODEL PRESENTED IN SEC. <REF>
§ THE LEPTON, PHOTON AND PARTON DISTRIBUTION IN AN UNPOLARISED ELECTRON
|
http://arxiv.org/abs/2409.02309v1 | 20240903213958 | QID$^2$: An Image-Conditioned Diffusion Model for Q-space Up-sampling of DWI Data | [
"Zijian Chen",
"Jueqi Wang",
"Archana Venkataraman"
] | eess.IV | [
"eess.IV",
"cs.CV",
"cs.LG"
] |
Diffusion models for Q-space Up-sampling of DWI
Z. Chen et al.
Department of Electrical and Computer Engineering, Boston University
{zijianc,jueqiw,archanav}@bu.edu
QID^2: An Image-Conditioned Diffusion Model for Q-space Up-sampling of DWI Data
Zijian Chen^⋆, Jueqi Wang^⋆ and Archana Venkataraman (^⋆Equal Contribution)
==========================================================================
§ ABSTRACT
We propose an image-conditioned diffusion model to estimate high angular resolution diffusion weighted imaging (DWI) from a low angular resolution acquisition. Our model, which we call QID^2, takes as input a set of low angular resolution DWI data and uses this information to estimate the DWI data associated with a target gradient direction. We leverage a U-Net architecture with cross-attention to preserve the positional information of the reference images, further guiding the target image generation. We train and evaluate QID^2 on single-shell DWI samples curated from the Human Connectome Project (HCP) dataset. Specifically, we sub-sample the HCP gradient directions to produce low angular resolution DWI data and train QID^2 to reconstruct the missing high angular resolution samples. We compare QID^2 with two state-of-the-art GAN models. Our results demonstrate that QID^2 not only achieves higher-quality generated images, but also consistently outperforms the GAN models in downstream tensor estimation across multiple metrics. Taken together, this study highlights the potential of diffusion models, and in particular QID^2, for q-space up-sampling, thus offering a promising toolkit for clinical and research applications.
§ INTRODUCTION
Diffusion weighted imaging (DWI) is a non-invasive technique that capitalizes on the directional diffusivity of water to probe the tissue microstructure of the brain <cit.>. A typical DWI acquisition applies multiple magnetic gradients, with the field strength controlled by the b-value and the gradient directions given by the b-vectors. Mathematically, these gradients can be represented by a set of coordinates on the sphere, where the magnitude and direction of each coordinate is related to the corresponding b-value and b-vector, respectively. The domain of all such coordinates is called the q-space <cit.>. In general, a denser sampling of directions in the q-space, also known as the angular resolution, leads to higher quality DWI. For example, higher angular resolution acquisitions can improve the tensor estimation <cit.> and facilitates the progression from single-tensor models <cit.> to constrained spherical deconvolution models <cit.> that estimate an orientation distribution function, which captures more complex fiber configurations. However, increasing the angular resolution also prolongs the acquisition time, which can be impractical in clinical settings. Not only are longer acquisitions more expensive, but they are also difficult for some patients to tolerate, which in turn increases the risk of artifacts due to subject motion <cit.>. Given these challenges, it is necessary to explore computational methods that can achieve high-quality DWI with a minimal number of initial scan directions.
Several studies have applied generative deep learning to DWI data. For example, the work of <cit.> uses a spherical U-Net to directly estimate the ODF using DWI acquired with only 60 gradient directions. More recently, generative adversarial networks (GANs) have also been used to estimate DWI volumes. Specifically, the work of <cit.> generates DWI for a user-specified gradient direction based on a combination of T1 and T2 images <cit.>. Similarly, the authors of <cit.> use the Pix2Pix model introduced by <cit.> to synthesize DWI with 6 gradient directions from data originally captured with only 3 gradient directions <cit.>. Further variants of the GAN, such as CycleGAN and DC^2Anet, have been applied to simulate a high b-value image from a low b-value one <cit.>. Autoencoders have also been used to adjust the apparent b-value <cit.>. While these works are seminal contributions to the field, none of them consider the clinically relevant problem of up-sampling a low angular resolution DWI acquisition.
Diffusion models (DMs) have emerged as a powerful tool for image generation. At a high level, DMs work by successively adding Gaussian noise to the input and then learning to reverse this noising process <cit.>. DMs have been employed in several medical imaging tasks <cit.>, including image translation between modalities <cit.>, super-resolution and artifact removal <cit.>, registration <cit.>, and segmentation <cit.>. We will leverage DMs to up-sample the DWI gradient directions, a task which, to our knowledge, has not been explored in prior work.
In this paper, we propose an image-conditioned DM, which we call QID^2, that can estimate high angular resolution DWI from a low angular resolution acquisition. One highlight is that QID^2 automatically identifies several closest available directions and uses the corresponding images as prior knowledge for generating images from any target direction not included in the initial scan. This target image generation process, carried out using a U-Net based structure conditioned on this prior information, can be seen as an extrapolation based on the identified directions and images. By focusing on the most relevant data, QID^2 solicits more targeted prior information and is more computationally efficient. We train and evaluate QID^2 on DWI curated from the Human Connectome Project (HCP) dataset <cit.>. Our model demonstrates superior performance over GAN-based approaches, particularly when the available low angular resolution images are sparsely distributed across the sphere.
§ METHODS
Fig. <ref> provides an overview of our framework. For any user-specified target gradient 𝐛_g, our model will find and take as input the R closest reference b-vectors 𝐛̅ = (𝐛_1, …, 𝐛_R) available in the low angular resolution scan and the corresponding DWI slices 𝐗̅ = (𝐗_1, …, 𝐗_R). will then output the estimated target image 𝐗_𝐛_g. We can obtain a high angular resolution DWI by sweeping the target gradient directions across the sphere and aggregating the generated images with the original low angular resolution scan.
§.§ A Diffusion Model for Q-space Up-sampling of DWI
Inspired by recently-introduced image-conditioned Denoising Diffusion Probabilistic Models (DDPMs) <cit.>, we design a position-aware diffusion model that leverages “neighboring" DWI data to estimate the image associated with a target gradient direction. Similar to traditional diffusion models <cit.>, QID^2 consists of both a forward noising process and a reverse denoising process.
In the forward process, Gaussian noises are added successively at each time step t∈{0,1,…, T} to the generated image. This corruption process is
q (𝐗_𝐛_g^(t)|𝐗_𝐛_g^(t-1)) = 𝒩(𝐗_𝐛_g^(t) ; √(1-β_t)𝐗_𝐛_g^(t-1) , β_t𝐈), t≥ 1,
where {β_t} are the forward process variances, and 𝐗_𝐛_g^(t) is the noisy image at time t. By repeatedly applying Eq. (<ref>) to the starting image 𝐗_𝐛_g^(0), we have
q (𝐗_𝐛_g^(t)|𝐗_𝐛_g^(0)) = 𝒩(𝐗_𝐛_g^(t) ; √(α̅_t)𝐗_𝐛_g^(0) , (1-α̅_t)𝐈)
where α_t=1-β_t and α̅_t=∏_s=1^t α_s. Therefore, at step t, the generated image 𝐗_𝐛_g^(t) can be represented as a function of the initialization 𝐗_𝐛_g^(0):
𝐗_𝐛_g^(t) = √(α̅_t) 𝐗_𝐛_g^(0) + √(1-α̅_t) ϵ, ϵ∼𝒩(0,𝐈).
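A minimal numpy transcription of this closed-form corruption step is given below; the linear β_t schedule endpoints are an illustrative assumption (the text only specifies a linear schedule of 1000 steps), and the slice shape is the one quoted later in the implementation details.

import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # assumed endpoints of the linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)    # \bar{alpha}_t = prod_s (1 - beta_s)

def forward_noise(x0, t, rng=np.random.default_rng(0)):
    """Sample X^(t) ~ q(X^(t) | X^(0)) in closed form."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps                      # eps is the regression target of the denoiser

x0 = np.zeros((145, 174))               # a target DWI slice, normalised to [0, 1]
xt, eps = forward_noise(x0, t=500)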
While the forward noising process operates solely on 𝐗_𝐛_g^(0), the reference DWI slices {𝐗_1, …, 𝐗_R } will be used to guide the subsequent denoising process. Rather than constructing a separate network to encode the reference images, which greatly increases the number of parameters and may introduce information loss, we opt to simply concatenate these slices with the target image being generated (i.e., denoised) at each time t as 𝐗̅_𝐛_g^(t) = 𝐗_𝐛_g^(t)⊕_i=1^R 𝐗_𝐛_i.
Starting from the fully corrupted image 𝐗̅_𝐛_g^(T), the reverse process aims to gradually recover the original image 𝐗̅_𝐛_g^(0). We denote this process as p_θ(·), where θ denotes the learnable parameters of the underlying neural network. By restricting the denoising to be Gaussian, the process p_θ(·) can be written:
p_θ(𝐗̅_𝐛_g^(t-1)|𝐗̅_𝐛_g^(t) ; {𝐛_g,𝐛̅}) = 𝒩(𝐗̅_𝐛_g^(t-1) ; μ_θ(𝐗̅_𝐛_g^(t),{𝐛_g,𝐛̅}) , σ^2_t 𝐈),
where the variances σ_t^2 are hyperparameters of the model. We note that the denoising process relies on the reference DWI data {𝐗_1, …, 𝐗_R }, the corresponding gradient directions 𝐛̅ = {𝐛_1,…,𝐛_R}, and the target direction 𝐛_g. This combination of inputs allows QID^2 to be position-aware.
To reverse the forward noising process, we train QID^2 by minimizing the KL-divergence between p_θ(·) and q(·) at each time step t. As shown in <cit.>, this loss minimization is equivalent to matching the mean functions, i.e.,
ℒ = 𝔼_t,q[ ‖ 1/√(α_t)( 𝐗̅_𝐛_g^(t) - β_t/√(1-α̅_t)ϵ) - μ_θ(𝐗̅_𝐛_g^(t),{𝐛_g,𝐛̅}) ‖^2 ] .
The mean function μ_θ(·) is generated with a U-Net architecture <cit.> with the cross-attention mechanism based on the concatenated gradient vectors 𝐛=[ 𝐛_g 𝐛_1 ⋯ 𝐛_R ]. Specifically, the encoding block is computed as follows:
𝐇_1 = FF(𝐇_0)+𝐇_0, 𝐇_2 = Attn(𝐇_1, 𝐛) + 𝐇_1,
where FF(·) denotes a feed-forward network, 𝐇_0 denotes the block input, and
Attn(𝐇_1, 𝐛) = Softmax((W_Q 𝐇_1)(W_K 𝐛)^⊤/√(d_k))W_V𝐛,
with W_Q, W_K, W_V being the learned weights and d_k being the dimension of 𝐛. The decoding block follows a similar expression but includes skip connections from the corresponding encoding block. This design ensures that image features are effectively attended to and integrated with positional information.
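A schematic PyTorch version of this conditioning block is sketched below. The single attention head, layer sizes, tensor shapes and the choice to scale by the projected key dimension are simplifications chosen for illustration and are not claimed to match the actual architecture.

import torch
import torch.nn as nn

class GradientCrossAttentionBlock(nn.Module):
    """Feed-forward + cross-attention residual block conditioned on the
    concatenated gradient vectors b = [b_g, b_1, ..., b_R] (schematic)."""
    def __init__(self, feat_dim, bvec_dim):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.GELU(),
                                nn.Linear(feat_dim, feat_dim))
        self.w_q = nn.Linear(feat_dim, feat_dim, bias=False)
        self.w_k = nn.Linear(bvec_dim, feat_dim, bias=False)
        self.w_v = nn.Linear(bvec_dim, feat_dim, bias=False)

    def forward(self, h0, b):
        # h0: (batch, tokens, feat_dim) image features; b: (batch, n_vec, bvec_dim) gradients
        h1 = self.ff(h0) + h0                                   # H1 = FF(H0) + H0
        q, k, v = self.w_q(h1), self.w_k(b), self.w_v(b)
        attn = torch.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v + h1                                    # H2 = Attn(H1, b) + H1

block = GradientCrossAttentionBlock(feat_dim=128, bvec_dim=3)
h = torch.randn(2, 64, 128)   # flattened U-Net feature map (illustrative shape)
b = torch.randn(2, 4, 3)      # target + R=3 reference b-vectors
out = block(h, b)             # same shape as h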
Once QID^2 is trained, we can generate DWI for arbitrary gradient directions by sampling from the standard normal distribution and applying the reverse process in Eq. (<ref>) recursively with the corresponding reference images, namely:
𝐗̅_𝐛_g^(t-1) = μ_θ(𝐗̅_𝐛_g^(t),{𝐛_g,𝐛̅}) + σ_t ϵ, ϵ∼𝒩(0,𝐈).
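The corresponding ancestral sampling loop can be sketched as follows; here mu_theta stands for the trained network described above (assumed given), and σ_t = √(β_t) is one common, illustrative choice for the reverse-process variance.

import numpy as np

def sample_target_slice(mu_theta, refs, b_vecs, betas, rng=np.random.default_rng(0)):
    """Generate a DWI slice for a target gradient by reversing the diffusion process.

    mu_theta(x_t, refs, b_vecs, t) -> predicted posterior mean (trained network, assumed given)
    refs: the R reference slices; b_vecs: target + reference b-vectors.
    """
    T = len(betas)
    sigmas = np.sqrt(betas)                 # one common choice for sigma_t
    x = rng.standard_normal(refs[0].shape)  # X^(T) ~ N(0, I)
    for t in range(T - 1, -1, -1):
        mean = mu_theta(x, refs, b_vecs, t)
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + sigmas[t] * noise        # X^(t-1) = mu_theta(...) + sigma_t * eps
    return x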
§.§ Baseline Comparison Methods
We compare QID^2 with two state-of-the-art GAN models.
The first model is a conditional GAN (cGAN) for image generation proposed by <cit.>. We use the same cross-attention U-Net architecture for the generator as used in QID^2. We use a PatchGAN discriminator <cit.> and inject the gradient direction information {𝐛_g,𝐛̅} into the discriminator with a cross-attention mechanism. We train the generator to minimize the minimax GAN objective plus a regularization term that encourages voxel-level similarity of the generated and ground-truth images:
G^* = min_Gmax_Dλ_G [𝔼_𝐗[log D(𝐗,𝐛)] + 𝔼_𝐗̂[log (1 - D(𝐗̂, 𝐛))] ] + λ_V ℒ_1(G)
where 𝐗 = [𝐗_𝐛_g, 𝐗_1,…,𝐗_R] is the concatenated real sample with 𝐗_𝐛_g drawn from the (high resolution) training data and 𝐗̂ = [G(𝐗_1:R, 𝐛), 𝐗_1,…,𝐗_R] represents the synthesized data of generated DWI and real reference slices. Finally, λ_G and λ_V balance the adversarial and similarity L_1 losses, respectively.
The second model is the Q-space conditional GAN (qGAN) proposed by <cit.>. Unlike QID^2 and the cGAN baseline, qGAN incorporates the gradient directions and reference DWI data using a feature-wise linear modulation scheme <cit.>. The qGAN discriminator is also a conditional U-Net <cit.> and combines the gradient directions and reference DWI data via an inner product.
Finally, as a sanity check, we compare the deep learning models to a simple interpolation scheme (Interp), in which we express the target gradient direction as a linear combination of the reference gradients and then use the linear coefficients to interpolate between the reference DWI slices to obtain the target.
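One possible realisation of this baseline, based on our reading of the description, solves for the linear coefficients by least squares; the normalisation of the weights is our own choice and not specified in the text.

import numpy as np

def interp_baseline(target_bvec, ref_bvecs, ref_slices):
    """Interp baseline: write the target gradient as a linear combination of the
    reference gradients and reuse the coefficients to mix the reference slices."""
    ref_bvecs = np.asarray(ref_bvecs, dtype=float)                  # (R, 3)
    coeffs, *_ = np.linalg.lstsq(ref_bvecs.T, np.asarray(target_bvec, dtype=float), rcond=None)
    coeffs = coeffs / coeffs.sum()   # our normalisation choice; guard against a near-zero sum in practice
    return np.tensordot(coeffs, np.asarray(ref_slices), axes=1)

ref_bvecs = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
ref_slices = np.random.rand(3, 145, 174)
pred = interp_baseline([0.6, 0.6, 0.52], ref_bvecs, ref_slices)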
§.§ Implementation Details
For QID^2, we use a linear noise schedule of 1000 time steps. The U-Net employs [128, 128, 256] channels across three levels with one residual block per level. We use the Adam optimizer with a learning rate of 2.5 × 10^-5, β_1=0.5 and β_2=0.999. These hyperparameters are selected based on a relevant study <cit.> and are not fine-tuned. We use the same U-Net architecture for the cGAN generator with the same set of hyperparameters. We fix λ_G = 1 and λ_V=100 in the loss function weights for both GAN methods. The discriminator is updated once for every two updates of the generator during training <cit.>. For qGAN, we use a learning rate of 5 × 10^-5 with the Adam optimizer. For comparison, we evaluate all models with both R=3 and R=6 reference DWI data.
To avoid memory issues <cit.>, we train the deep learning models to generate 2D axial slices, which we stack into 3D DWI volumes. Each 2D image has a size of (145, 174). During training, we independently normalize each slice from its original intensity to a range of [0,1]. Data augmentation is employed to enhance model training. Specifically, we use rotations by random angles in [- 15^∘, 15^∘] and random spatial scaling factors in [0.9, 1.1]. The final output is rescaled voxel-wise back to the original intensity and masked by the subject 𝐗_𝐛_g image.
§ EXPERIMENTAL RESULTS
Dataset Curation and Preprocessing:
We curate a total of 720 subjects from the HCP S1200 release <cit.>. The remaining HCP subjects are excluded due to an inconsistent number of gradient directions at b=1000 s/mm^2. The DWI is acquired on a Siemens 3T Connectome scanner at 3 shells (b=1000,2000 and 3000 s/mm^2). Each shell has exactly 90 gradient directions sampled uniformly on the sphere. The voxel size is 1.25 × 1.25 × 1.25 mm^3. The data is preprocessed with distortion/motion removal and registration to the 1.25mm structural space.
Clinical diffusion imaging typically uses lower b-values with approximately 30 gradient directions <cit.>. To better accommodate this situation, we focus our evaluation on the b=1000 s/mm^2 shell. From here, we construct low angular resolution DWI data by subsampling the 90 gradient directions to 30 evenly spaced ones that preserve the uniformity of the sphere <cit.>. The data for the remaining 60 directions serve as the targets for model training and evaluation.
Each volume is broken down into 145 axial slices. The deep learning models are trained to predict the image slices for each target direction. Based on this scheme, we create 60 samples for each slice. Each sample consists of one 2D slice for the target gradient direction and R reference slices corresponding to the closest low resolution gradient directions. The distance between gradients is defined by the geodesic distance on the sphere: d(𝐛_1,𝐛_2) = arccos(𝐛_1𝐛_2^⊤).
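The selection of the R closest reference directions under this geodesic distance can be written compactly as below; for simplicity the sketch ignores the antipodal symmetry of diffusion gradients, which a full implementation might account for.

import numpy as np

def closest_references(target_bvec, lowres_bvecs, R=3):
    """Indices of the R low-resolution gradient directions closest to the target,
    using the geodesic distance d(b1, b2) = arccos(b1 . b2) on the unit sphere."""
    b = np.asarray(lowres_bvecs, dtype=float)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    t = np.asarray(target_bvec, dtype=float)
    t = t / np.linalg.norm(t)
    d = np.arccos(np.clip(b @ t, -1.0, 1.0))   # clip guards against round-off outside [-1, 1]
    return np.argsort(d)[:R]

lowres = np.random.randn(30, 3)                # stand-in for the 30 retained gradient directions
idx = closest_references([0.0, 0.0, 1.0], lowres, R=3)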
Finally, we use 576 HCP subjects for training, 72 for validation, and 30 for testing. The original DWI scans are treated as the gold standard for evaluation.
Comparing Reconstructed Image Quality:
Fig. <ref> presents qualitative results that compare the ground-truth DWIs to those generated by QID^2 and the baseline methods. The GAN models and Interp fail to preserve high-frequency details in the synthesized DWI data, while QID^2 succeeds in capturing the finer details more accurately, as highlighted in the zoomed-in blue boxes.
Table <ref> (left) reports the Fréchet inception distance (FID) <cit.> and the structural similarity index measure (SSIM) <cit.> of the synthesized DWI data. We observe that QID^2 achieves nearly a two-fold improvement (i.e., decrease) in FID over the GAN models for both R=3 and R=6, which indicates that the DWI data generated by diffusion possess higher quality and greater diversity. Although the GAN models achieve a slightly higher SSIM than QID^2, the difference is not statistically significant using a two-sample (paired) t-test. Interestingly, the simple interpolation technique achieves a better FID than QID^2 when R=3. This is likely because the interpolation tracks the closest reference image, which is more akin to the original DWI distribution.
However, the improved FID does not generalize to better tensor estimation, as seen in the next section.
Impact on Tensor Estimation:
We estimate the fractional anisotropy (FA) using the standard single-tensor model <cit.>. Fig. <ref> shows the fiber direction and FA value maps for the ground truth, QID^2, and the baseline methods for R=3 and R=6. Similar to the findings for the reconstructed images, we observe that the qGAN and cGAN methods capture the general FA trends but fail to capture the high-frequency features. Conversely, the image generated by QID^2 more closely resembles the ground-truth data by capturing finer details more accurately. This shows that the visual differences in the reconstructed images in Fig. <ref> are important when estimating tensors. The Interp method fails to generate realistic FA maps for R=3. Empirically, we also observe quality issues with Interp for R=6 even though they are less evident in the figure.
Table <ref> (right) reports the mean absolute error and SSIM, as compared to the FA computed from the ground-truth high angular resolution DWI. As seen, QID^2 consistently outperforms the GAN-based models and the Interp method for both R=3 and R=6. Specifically for R=3, the error in FA is roughly three times lower for QID^2 than for the GANs. QID^2 also achieves significantly higher SSIM values. These trends persist when the number of reference images increases to R=6, i.e., even when more prior information is provided. However, the relative performance gain over the GANs shrinks. Additionally, although the image-based metrics are better for the interpolation-generated (Interp) images, QID^2 outperforms this baseline by a large margin when estimating FA. Taken together, these results suggest that QID^2 is particularly effective in scenarios where the images are scarce and distributed sparsely, i.e., smaller values of R.
§ CONCLUSION
We introduce an image-conditioned diffusion model (QID^2) that can generate high angular resolution DWI from low angular resolution data, effectively estimating high-quality imaging with limited initial scan directions. Our approach takes advantage of similar DWI data as prior information to predict the data for any user-specified gradient direction. The results demonstrate that the diffusion-generated DWIs from QID^2 achieve superior quality and significantly outperform those generated by GAN models in downstream tensor modeling tasks. Although our method currently exhibits longer training times due to the denoising characteristics of DDPMs, this limitation could be mitigated by employing more efficient sampling techniques <cit.> in future work. We will also focus on evaluating QID^2 on clinical datasets to assess its robustness in real-world clinical scenarios.
§.§ Acknowledgments
This work was supported by the National Institutes of Health R01 HD108790 (PI Venkataraman), the National Institutes of Health R01 EB029977 (PI Caffo), the National Institutes of Health R21 CA263804 (PI Venkataraman).
|
http://arxiv.org/abs/2409.02988v1 | 20240904180001 | Signatures of high-redshift galactic outflows in the thermal Sunyaev Zel'dovich effect | [
"Guochao Sun",
"Steven R. Furlanetto",
"Adam Lidz"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO"
] |
Signatures of high-redshift galactic outflows in the thermal Sunyaev Zel'dovich effect
Guochao Sun, Steven R. Furlanetto and Adam Lidz
=========================================================================================================================================================================================
§ ABSTRACT
Anisotropies of the Sunyaev Zel'dovich (SZ) effect serve as a powerful probe of the thermal history of the universe. At high redshift, hot galactic outflows driven by supernovae (SNe) can inject a significant amount of thermal energy into the intergalactic medium, causing a strong y-type distortion of the CMB spectrum through inverse Compton scattering. The resulting anisotropies of the y-type distortion are sensitive to key physical properties of high-z galaxies pertaining to the launch of energetic SNe-driven outflows, such as the efficiency and the spatio-temporal clustering of star formation. We develop a simple analytic framework to calculate anisotropies of y-type distortion associated with SNe-powered outflows of galaxies at z>6. We show that galactic outflows are likely the dominant source of thermal energy injection, compared to contributions from reionized bubbles and gravitational heating. We further show that next-generation CMB experiments such as LiteBIRD can detect the contribution to y anisotropies from high-z galactic outflows through the cross-correlation with surveys of Lyman-break galaxies by e.g. the Roman Space Telescope. Our analysis and forecasts demonstrate that thermal SZ anisotropies are a promising probe of SNe feedback in early star-forming galaxies.
galaxy: formation – galaxies: ISM – cosmology: large-scale structure of Universe – cosmology: theory
§ INTRODUCTION
The James Webb Space Telescope (JWST) has made unprecedented observations of high-redshift galaxies across different environments and mass regimes, leading to notable discoveries regarding galaxy formation in the first billion years of cosmic time <cit.>. One of the most intriguing findings about galaxy populations at z>6 is the prevalence of spatio-temporal clustering of star formation, namely the formation of stars in spatially clustered clumps <cit.> and with a temporally stochastic (or `bursty') star formation history <cit.>. Interestingly, high-resolution cosmological simulations of galaxy formation suggest a similar picture that early galaxies are characterized by spatially and temporally clustered star formation <cit.>.
Stellar feedback from supernovae (SNe) is considered as a main physical driver of the spatio-temporal clustering of star formation in galaxies <cit.>. Besides regulating star formation, SNe feedback also drives multi-phase outflows with particularly strong rates at high redshift where galaxies are more gas rich and less massive <cit.>. An essential part of the cosmic baryon cycle, galactic outflows inform the efficiency at which SNe feedback regulates stellar and gas mass buildup in galaxies while depositing energy and metal-enriched gas into the intergalactic medium (IGM). A complete picture of high-z galaxy formation thus requires understanding galactic outflows.
The cosmic microwave background (CMB) is a powerful and versatile tool to probe the collective effects of early galaxy populations in a highly complementary way to galaxy observations. Its polarization provides an integral constraint on the history of cosmic reionization driven by the UV radiation from galaxies <cit.>. High-z galaxies and reionization also leave measurable imprints on the CMB spectrum through the Sunyaev-Zel'dovich (SZ) effect <cit.> associated with the inverse Compton scattering of CMB photons on free electrons. The bulk motion of electrons created during and after patchy reionization causes the kinetic SZ (kSZ) effect, a Doppler shift whose spatial fluctuations constrain the reionization timeline and morphology <cit.>. The thermal motion of electrons, on the other hand, causes the thermal SZ (tSZ) effect characterized by the Compton y parameter. This y-type distortion of the CMB spectrum is a promising probe of the thermal energy of gas surrounding early galaxies due to both reionization and winds launched by SNe <cit.>.
In this Letter, motivated by the recent progress in understanding high-z galaxies and the planned next-generation CMB experiments like LiteBIRD <cit.>, we analytically estimate the y-type distortion and its spatial fluctuations (i.e. anisotropies) induced by the thermal energy injection from high-z galaxies especially their SNe-driven winds. We then compare the derived anisotropies with the expected CMB measurement uncertainties and its potential synergy with galaxy surveys to estimate the constraining power on y. Throughout, we assume a Chabrier initial mass function <cit.> and a flat, ΛCDM cosmology consistent with measurements by <cit.>.
§ MODELS
§.§ Cooling of SNe-powered galactic outflows
Recent observations and simulations have suggested that star formation might be highly clumpy and bursty in high-z galaxies. Therefore, rather than considering supernova explosions randomly distributed in the ISM, we consider the possibility that clustered SNe can drive superbubbles by launching more powerful galactic winds into the IGM <cit.>. The breakout of superbubbles driven by clustered SNe minimizes radiative losses in the interstellar medium (ISM), thus depositing a substantially higher fraction of SNe energy into the IGM, which is then transferred to the CMB. For a galaxy of halo mass M, we can approximate SNe as simultaneous events with a total energy released of E_SN,tot = N_SNℰ_SN = (f̃_* f_b M / 50 M_⊙) × 10^51erg, where f̃_* is the time-integrated star formation efficiency (SFE) that should be distinguished from the instantaneous SFE and the baryon mass fraction f_b = 0.16. In this work, we adopt f̃_* = 0.03, as permitted by models that can plausibly explain the galaxy populations at z>6 observed by JWST <cit.>. We also assume here that on average one SN (releasing ℰ_SN∼10^51erg of energy in total) explodes for every ∼ 50 M_⊙ of stars formed <cit.>, or an energy output per unit mass of star formation ω_SN∼ 10^49erg.
To estimate the fractions of kinetic energy released by supernova explosions lost through radiative and inverse Compton processes, we follow <cit.> and compare the timescales of isobaric radiative cooling, t_rad, to the ambient ISM/IGM gas, and of inverse Compton cooling, t_comp, to CMB photons in the IGM. The fraction of explosion energy deposited in the IGM which thus modifies the CMB spectrum can be approximated as
ϵ≈ (1/t_comp)/(1/t_comp + 1/t_rad),
with
t_comp = 3 m_e c/(8 σ_T a T_CMB^4) = 5×10^8 [(1+z)/7]^-4 yr,
where σ_T is the Thomson scattering cross section for electrons and a=7.57×10^-15 erg cm^-3 K^-4, and
t_rad = 2.5 k_B T/[n Λ(T)],
where Λ(T) is the cooling function at the postshock temperature T and n is the number density of the ambient gas. Within the inhomogeneous ISM, the cooling due to radiative losses can be expressed as <cit.>
t_rad = 5 × 10^3(n/100 cm^-3)^-0.53( Z/0.2 Z_⊙)^-0.17 yr,
which is a few orders of magnitude shorter than t_comp even if large density fluctuations exist in the turbulent ISM.
However, considering the expansion and the eventual breakout (from the galactic disc) of winds powered by clustered SNe, <cit.> suggest that SNe exploded in massive star clusters efficiently overlap in space and time to collectively power superbubbles. The cooling time for radiative losses is significantly longer than the expansion time t_exp = R_s/v_s of bubbles out to scales of ∼100 pc if the mixing of SNe ejecta and the ambient ISM remains inefficient
t_rad/t_exp≃ 100 ( n/100 cm^-3)^-2/3( M_cl/10^5M_⊙)^-1/3( R_s/100 pc)^-1/3.
Thus, radiative cooling can be very modest throughout the bubble expansion within galaxies even if the gas is as dense as n∼10^3 cm^-3, a typical threshold for star formation. Although models of disc galaxy evolution may not be entirely valid at z>6, R_s of order 100 pc is still a relevant scale for the breakout of SNe-powered superbubbles into ambient gas with significantly lower densities, given the typical sizes of high-z galaxies <cit.> and spatially-resolved star-forming clumps <cit.> observed by the JWST. In low-mass galaxies, SNe feedback may be strong enough to rapidly blow out the dense gas entirely, making radiative losses even less of an effect.
Provided that radiative losses are small before breakout, we can estimate the postshock temperature after breakout from the Sedov-Taylor solution assuming a strong shock, namely
T = (3/16)μ m_p v^2_s k^-1_B, where μ=0.61 for ionized gas with primordial abundances and the speed at which the shock wave of radius R_s propagates is v_s=dR_s/dt=0.4γ_0(E_SN,totρ^-1_IGMt^-3)^1/5 with γ_0=1.17. The radius of the SN-driven bubble during the adiabatic Sedov-Taylor phase is R_s=γ_0(E_SN,tott^2/ρ_IGM)^1/5. Evaluating T at t=t_comp and comparing the resulting t_rad with t_comp, we have
T = 10^5 ( f̃_*/0.03)^2/5 ( M/10^10M_⊙)^2/5 [(1+z)/7]^18/5 K,
which is broadly consistent with the physical picture that warm-/hot-phase (T ≳ 10^5K) outflows dominate the energy loading of SNe-driven winds suggested by simulations <cit.>. Combining T with the assumption that after breakout the post-shock density is related to the mean IGM density by n ∼ 4 n_IGM∼ 10^-3[(1+z)/7]^3 cm^-3, we get <cit.>
t_rad/t_comp = 0.2 ( f̃_*/0.03)^2/5 ( M/10^10M_⊙)^2/5 [(1+z)/7]^23/5,
which suggests a typical wind-powering energy fraction of ϵ≈ 0.2, in line with the time-averaged energy loading factor in hydrodynamical simulations resolving the spatio-temporal clustering of SNe <cit.> and the often made assumption in semi-analytic models of high-z galaxy formation <cit.>. Note the weak halo mass dependence (assuming a constant f̃_*, though see Section <ref>) but strong redshift dependence of t_rad/t_comp. In what follows, we adopt a constant ϵ = 0.2 for simple estimates of the y-type distortion associated with galactic outflows at z>6.
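The scalings above can be combined into a few lines of code to see how ϵ varies with halo mass and redshift; the snippet is a direct transcription of the expressions above, with variable names of our own.

import numpy as np

def t_rad_over_t_comp(M_halo, z, f_star=0.03):
    """Ratio of radiative to inverse-Compton cooling times after breakout (scaling above)."""
    return 0.2 * (f_star / 0.03)**0.4 * (M_halo / 1e10)**0.4 * ((1.0 + z) / 7.0)**4.6

def epsilon_wind(M_halo, z, f_star=0.03):
    """Fraction of SN energy transferred to the CMB: (1/t_comp) / (1/t_comp + 1/t_rad)."""
    r = t_rad_over_t_comp(M_halo, z, f_star)   # r = t_rad / t_comp
    return r / (1.0 + r)

for z in (6, 8, 10):
    # ~0.17 for M = 1e10 Msun at z = 6, of order the adopted epsilon = 0.2
    print(z, epsilon_wind(1e10, z))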
§.§ CMB spectral distortions induced by high-redshift galaxies
The Compton-y parameter that characterizes the tSZ effect directly probes the line-of-sight integral of the (proper) electron gas pressure, P_e, namely
y = σ_T/(m_e c^2) ∫d z [1/(1+z)] (dχ/d z) P_e(z) = σ_T/(m_e c) ∫d z P_e(z)/[(1+z)H(z)],
where m_e is the electron mass and χ is the comoving radial distance.
In addition to distortions in the sky-averaged CMB spectrum, equation (<ref>) also implies a large-scale clustering signal of y-type distortion anisotropies that scales as
C_ℓ, 2h∝ y^2.
It is therefore instructive to compare thermal energy contributions to ⟨ b P_e⟩ from different sources, including SNe-powered galactic winds, reionized bubbles, and gravitational heating. Considering the halo model <cit.>, we can sum up the thermal energy contributed from individual haloes to obtain the average bias-weighted (proper) electron pressure <cit.>
⟨ b P_e⟩ = ∫d M d n/d M( E_SN + E_reion + E_G) (1+z)^3 b(M),
where d n / d M is the halo mass function and b(M) is the halo bias.
Assuming clustered SNe, we can approximate the thermal energy injected by the SNe into their host galaxy/halo as
E_SN = f_D f_b M ℰ_SN/50 M_⊙( f̃_*/0.03) (ϵ/0.2) ≈ 3×10^56(M/10^10M_⊙) erg,
where ℰ_SN=10^51erg and ϵ=0.2 as has been discussed in Section <ref>. For our fiducial model, we take a duty cycle of f_D=1 such that all haloes have active SNe-driven outflows (see Section <ref> for discussion of alternative assumptions). For an order-of-magnitude estimate of the y-type distortion induced by hot galactic winds, we can simply assume that all the SN energy is injected and lost to the CMB (with a fraction ϵ) instantaneously at redshift z_i. The amount of thermal energy added to the CMB per baryon in the universe is then <cit.>
E̅_comp≈ 5 (ϵ/0.2) (f̃_*/0.03) (f_coll/0.05) eV,
where f_coll is the collapse fraction of dark matter haloes. This implies a y parameter proportional to E̅_comp <cit.>
y ≈c t_comp(z_i) σ_T n_e(z_i)/m_e c^2(E̅_comp/5 eV),
which is approximately y = 5 × 10^-7 at z_i = 6. A more accurate determination of y(z) requires tracing the thermal history of the CMB, which in terms of the total CMB energy density is given by <cit.>
d u/d z = - 4 H(z) u(z) d t/d z + d u_comp/d z (1+z)^3,
where u(z) = a T_CMB^4(z) and
u_comp(z) = ∫d M d n/d M E_comp(M, z).
The thermal energy of reionized bubbles can be estimated by
E_reion = A_Heζ f_b M ℰ/m_p≈ 5×10^55(M/10^10M_⊙) erg,
where A_He=1.22, ζ = f̃_* f_esc N_γ with f_esc = 0.1 and N_γ=4000, and ℰ=2 eV is the typical energy of the reionized medium photo-heated to roughly 2×10^4K. With f̃_* canceled out, the ratio E_SN/E_reion thus scales as ϵ f^-1_esc. Note that equation (<ref>) assumes that the ionized bubble is created for the first time, ignoring previous photoheating. Thus, by z∼6, it in fact serves as an upper limit on the thermal energy due to reionization. Finally, for isothermal gas shocked-heated to the virial temperature by gravity, we have
E_G = (3 k_B T_vir/2) f_b M/(μ m_p) ≈ 10^56 (M/10^10M_⊙)^5/3 [(1+z)/7] erg.
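For orientation, the three approximate scalings can be evaluated side by side; the snippet below simply transcribes the expressions above.

def E_SN(M, f_star=0.03, eps=0.2):      # SNe-driven winds, erg
    return 3e56 * (M / 1e10) * (f_star / 0.03) * (eps / 0.2)

def E_reion(M):                         # photo-heated ionized bubble, erg
    return 5e55 * (M / 1e10)

def E_grav(M, z):                       # gravitational (virial) heating, erg
    return 1e56 * (M / 1e10)**(5.0 / 3.0) * ((1.0 + z) / 7.0)

for M in (1e9, 1e10, 1e11, 1e12):
    print(f"M={M:.0e}  E_SN={E_SN(M):.1e}  E_reion={E_reion(M):.1e}  E_G={E_grav(M, 6):.1e}")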
§.§ Auto-/cross-correlations of y-type distortion anisotropies
The y-type spectral distortion power spectrum from the tSZ effect associated with high-z galaxies can be written in the large-scale, two-halo limit[We note that a non-negligible one-halo term may arise from extended/overlapping outflows and ionized bubbles. However, since observational constraints considered here are most sensitive to ℓ≲ 1000 (equivalent to physical scales ≳3Mpc at z=6), we ignore its contribution in our analysis.] as
C^yy_ℓ = ∫d z [H(z)/(c χ^2)] P_m( ℓ/χ) [ σ_T/(m_e c^2) (1/(1+z)) (dχ/d z) ⟨ b(z) P_e(z) ⟩]^2,
which is approximately proportional to y̅^2 as is evident from equations (<ref>) and (<ref>).
The cross-correlation of y-type distortion anisotropies with galaxy surveys provides an alternative and likely more reliable way to constrain the energy of SNe-powered high-z galactic winds, given the significant low-z contributions in y measurements that are challenging to remove. The y–galaxy cross-correlation is advantageous because low-z contributions to y only add to the variance but do not bias the measurement. On large scales, the cross-power spectrum is
C^yg_ℓ = ∫d z [H(z)/(c χ^2(z))] P_m[ ℓ/χ(z), z ] f_g(z) f_y(z) b_g(z) ⟨ b(z) P_e(z) ⟩
where the y window function f_y(z) = (dχ / d z) σ_T / (m_e c^2) / (1+z) and the top-hat galaxy window function f_g(z) is 1/Δ z_g over z_g±Δ z_g/2 and zero elsewhere.
The uncertainties of the auto- and cross-power spectra can be expressed as <cit.>
Δ C^yy_ℓ = 1/√(f_sky (ℓ+1/2) Δℓ)[ C^yy_ℓ + C^yy_ℓ,N]
and
Δ C^yg_ℓ = √([( C^yy_ℓ + C^yy_ℓ,N)( C^gg_ℓ + C^gg_ℓ,N)+ (C^yg_ℓ)^2]/[f_sky (2ℓ+1) Δℓ]),
where f_sky is the sky covering fraction and Δℓ is the multipole bin width centered at ℓ. The terms C^yy_ℓ,N, C^gg_ℓ, C^gg_ℓ,N are the instrument noise power spectrum for the y-type distortion, the galaxy angular power spectrum, and the galaxy noise power spectrum (equal to the inverse of galaxy number density), respectively.
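Given binned spectra, the signal-to-noise estimates quoted later follow directly from these expressions; a sketch is given below, with placeholder numbers purely to illustrate the call signature.

import numpy as np

def cross_snr(ell, dell, C_yg, C_yy, N_yy, C_gg, N_gg, f_sky):
    """S/N of the y-galaxy cross power spectrum summed over multipole bins,
    with per-bin variances from the expression above."""
    ell, dell = np.asarray(ell, float), np.asarray(dell, float)
    var = ((C_yy + N_yy) * (C_gg + N_gg) + C_yg**2) / (f_sky * (2.0 * ell + 1.0) * dell)
    return np.sqrt(np.sum(C_yg**2 / var))

# Placeholder spectra (not model predictions), just to show how the estimate is assembled.
ell = np.array([75.0, 150.0, 300.0])
dell = np.array([50.0, 100.0, 200.0])
snr = cross_snr(ell, dell,
                C_yg=np.array([2e-14, 1e-14, 4e-15]),
                C_yy=np.array([1e-17, 6e-18, 3e-18]), N_yy=np.array([4e-19, 5e-19, 1e-18]),
                C_gg=np.array([1e-6, 5e-7, 2e-7]),   N_gg=np.full(3, 1e-7),
                f_sky=0.05)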
§ RESULTS
§.§ Contributions to y from different sources of thermal energy
We show in the left panel of Figure <ref> the fractional contribution of each source of thermal energy to ⟨ b P_e⟩ at z = 6 for different halo masses. Because of their identical linear dependence on M, the SNe and reionization components have the same shape, with larger contributions from low-mass haloes with M < 10^10 M_⊙, whereas haloes with M ∼ 10^11.5 M_⊙ contribute most to the gravitational heating component due to its stronger M dependence.
The right panel of Figure <ref> shows the level of y̅ signal contributed by haloes at different redshifts, as predicted by our fiducial model. For gravitational heating, the large contribution from less abundant massive haloes makes it subdominant to the other two components. Given our fiducial parameters, we expect y̅≳ 5 × 10^-7 from galaxies above z = 6 and with a significant (>50%) contribution from SNe-driven galactic outflows. This predicted y̅ value is consistent with the observational constraints available from COBE FIRAS <cit.> and a joint analysis of Planck and SPT data <cit.>. As will be shown below, the LiteBIRD satellite to be launched in 2032 is expected to substantially increase the detectability of y̅ from large-scale tSZ anisotropies <cit.> and place useful constraints on its high-z contribution.
§.§ Anisotropies of the y-type distortion from high-z galaxies
With the estimated y̅ associated with high-z galaxies in hand, we can calculate the anisotropy signals and compare them against the sensitivity level of CMB observations as described in Section <ref>. In Figure <ref>, we show the large-scale angular power spectrum C^yy_ℓ of the y-type distortion derived from our fiducial model for galaxies at 6 < z < 15. For completeness, we also illustrate the level of shot noise arising from the self-correlation of individual sources, which is much smaller than the clustering signal on the scales of interest. In contrast with previous studies, our predicted y̅ and C^yy_ℓ are slightly below even the most pessimistic model of <cit.> in which a higher ϵ is assumed. Our predicted y̅, however, is significantly larger than those (y̅≈ 10^-8–10^-7) from <cit.> and <cit.> as a combined result of the larger ϵ and f̃_* in our fiducial model. As elaborated in Section <ref>, these values are motivated by our current understanding of galaxy populations at z > 6 although some caveats apply (see Section <ref>).
The predicted C^yy_ℓ is compared against the sensitivity curves of two example CMB experiments: LiteBIRD and CMBPol, which is a hypothesized, future-generation CMB satellite with better angular resolution than LiteBIRD. Following <cit.>, we take the instrument noise power of LiteBIRD and CMBPol to be C_ℓ,N^yy = 3.3×10^-19 e^(ℓ/135)^2+2.8×10^-19 e^(ℓ/226)^2 and C_ℓ,N^yy = 1.9×10^-19 e^(ℓ/1000)^2+1.8×10^-19 e^(ℓ/1600)^2, respectively. On degree scales (ℓ≈ 100), C^yy_ℓ sourced by high-z galaxies lies above the 1-σ uncertainty level of LiteBIRD. Considering 50<ℓ<150 and an f_sky close to unity, we can use equation (<ref>) to estimate the signal-to-noise ratio (S/N) of the pure C^yy_ℓ signal to be about 3.
However, in reality, it would be extremely challenging to separate or remove the low-z contribution to y̅ (by methods like masking, see e.g. ), whose residuals can easily overwhelm the high-z signal of interest <cit.>. Therefore, it is instructive to consider the y–galaxy cross-correlation that allows a cleaner and unbiased measurement of y̅ truly associated with high-z galaxies <cit.>.
§.§ Detectability of the y–galaxy cross-correlation
The full-sky survey strategy of LiteBIRD makes it convenient to cross-correlate with other cosmological data sets, as discussed in several previous studies <cit.>. To assess whether y-type distortion anisotropies of high-z origin can be detected when residual low-z contributions are present, we consider a case study for a joint analysis between LiteBIRD and the High Latitude Wide Area Survey (HLWAS) of the Nancy Grace Roman Space Telescope. Assuming f_sky = 0.05 corresponding to the planned 2200 deg^2 coverage of HLWAS and a 5-sigma depth of m_AB = 26.5, we adopt the galaxy number density and bias estimated for HLWAS by <cit.> to evaluate C^yg_ℓ, C^gg_ℓ, and C^gg_ℓ,N in the redshift range of interest.
Figure <ref> shows the cross-power spectrum C^yg_ℓ between the y-type distortion to be measured by LiteBIRD and Lyman-break galaxies (LBGs) to be observed by the Roman HLWAS survey, together with a decomposition of the uncertainty Δ C^yg_ℓ as given by equation (<ref>). Note that here we have included a low-z contribution (after a reasonable level of masking) to the auto-power term C^yy_ℓ using the estimate in <cit.>, which is roughly two orders of magnitude stronger than the high-z contribution in our fiducial model. We have also assumed that the distinctive y-distortion spectral signature allows perfect component separation. That is, we assume negligible residual foreground contamination in the multi-frequency LiteBIRD data from cosmic infrared background fluctuations and other components <cit.>.
The comparison suggests that even with the inclusion of a strong residual low-z contribution to y̅, thanks to the wide overlapping area and the large number of galaxies available the y–galaxy cross-correlation is likely detectable on scales ℓ∼ 150. More specifically, summing over the ℓ bins (with Δℓ≈ℓ) over 50 < ℓ < 300, we estimate the total S/N of C^yg_ℓ to be about 4.6 and 2.5 for 6<z<7 and 7<z<8, respectively. This implies that the cross-correlation measurement can place useful constraints on the y-type distortion induced by high-z galaxies, in particular the thermal energy content of their SNe-driven outflows.
In addition to Roman, we show at 6<z<7 the expected improvement in the detectability of C^yg_ℓ from accessing a larger f_sky by cross-correlating LiteBIRD with LBGs from the Rubin Observatory Legacy Survey of Space and Time (LSST; ). Assuming Rubin/LSST LBG samples at the same depth as Roman but with f_sky = 0.44, we expect a roughly threefold increase in C^yg_ℓ sensitivity. This adds to both the ℓ range probed and the constraining power for a wider range of parameter combinations, especially less optimistic ones. For example, models where the product ϵf̃_* f_D is 2–3 times lower than the fiducial value taken (0.02) can still be significantly constrained up to z∼7 by the cross-correlation between LiteBIRD and Rubin/LSST.
§ DISCUSSION
It is worth noting that the roughly comparable thermal energy contributions from SNe-driven winds and reionized bubbles we find should be contrasted with previous simulation-based analyses such as <cit.> with caution for two main reasons. First, the simulations do not distinguish thermal energies attributed to galactic winds and the ionized bubble created by the galaxy, but rather focus on the total thermal energy content. Second, the weak dependence of y signal on the prescription of stellar feedback reported in <cit.>, which might be interpreted as evidence for a subdominant role of SNe-driven winds, can be (at least) partially due to the limited resolution of the simulations to properly resolve the spatio-temporal clustering of SNe. As discussed in Section <ref>, superbubbles driven by clustered SNe can be the main reason for a significant fraction of SNe energy to be vented into the galactic halo and thus contribute to the y-type distortion. This explains the overall larger y anisotropies predicted by our analytic calculations.
Several aspects of our simple model should be further investigated in the future for better understanding the y-type distortion signal associated with high-z galaxies. First, the amplitude and relative importance of each form of thermal energy shown in Figure <ref> can vary if alternative values or the evolution of quantities like f̃_* and f_esc are considered. While the values assumed (f̃_*=0.03 and ϵ=0.2) for our main results are physically motivated, more sophisticated treatments considering e.g. the halo mass dependence of these parameters <cit.> can lead to weaker y signals that are more challenging to detect. Moreover, not all haloes have active SNe-driven outflows, though we note the significant longer timescale for inverse Compton cooling (∼500 Myr) compared to that for burst cycles of star formation (∼50 Myr) in high-z galaxies <cit.>, which makes f_D = 1 likely still a valid assumption. Furthermore, at high redshift, the presence of a more top-heavy stellar IMF and/or massive Population III stars may increase the average energy released per supernova by a factor of 2–3 <cit.> and thereby counteract in part the effects of lower f̃_* or ϵ values. It would be interesting for future work to quantify the impact of these model variations and complexities on C^yy_ℓ and C^yg_ℓ.
Another key simplification made here is the treatment of low-z contributions to the y-type distortion. Our simplistic model prevents us from physically describing the thermal energy deposited by low-z haloes, especially the massive ones hosting resolved or unresolved galaxy clusters which are responsible for the majority of the observable tSZ signal. It is possible that the simulation-based predictions we adopt from <cit.> do not accurately capture the true level of low-z signals or the effectiveness of source masking. Building data-driven models across redshift in future work will therefore be extremely helpful. The y–galaxy cross-correlation described in Section <ref> may be attempted at lower redshift with existing data (e.g. Planck/ACT and DESI) for such purposes.
Finally, many physical factors not considered in our analysis actually affect the expected y-type distortion signals and lead to measurable signals of interest. For example, with sufficiently high angular resolution, distinctions in the scale/redshift dependence of different sources of thermal energy (e.g. galactic outflows versus reionized bubbles) may be utilized for component separation on intermediate scales where non-linear clustering dominates. This is helpful for isolating and exclusively constraining SNe feedback and galactic outflows with C^yg_ℓ. It is thus instructive to extend the current modeling framework and self-consistently predict the size evolution of ionized bubbles during reionization in future studies. Alternatively, one may also stack on galaxies of different types, e.g. starburst versus quiescent galaxies, to narrow down the strength of outflow signals.
§ CONCLUSIONS
In summary, we have presented in this Letter a physically motivated model that allows us to calculate and analyze the y-type distortion of the CMB spectrum and its anisotropies induced by high-z galaxies, especially their SNe-driven outflows. Motivated by recent discoveries of how clustered SNe feedback may boost the fraction of SNe energy injected in the IGM by powering superbubbles, our model predicts a relatively large y̅≈ 5×10^-7 associated with high-z galaxies, primarily powered by the thermal energy of galactic outflows rather than reionized bubbles or gravitational heating. While still in good agreement with observational constraints (5.4×10^-8<y̅<2.2×10^-6), this higher level of y̅ implies large-scale anisotropies of y stronger than many previous models predict. We have demonstrated that, in cross-correlation with forthcoming wide-area surveys of LBGs such as Roman/HLWAS and Rubin/LSST, the planned LiteBIRD mission can measure y anisotropies induced by high-z galactic outflows at high statistical significance up to z∼8.
§ ACKNOWLEDGEMENTS
We thank Greg Bryan, Claude-André Faucher-Giguère, Drummond Fielding, and Natsuko Yamaguchi for helpful discussions. GS was supported by a CIERA Postdoctoral Fellowship. SRF was supported by NASA through award 80NSSC22K0818 and by the National Science Foundation through award AST-2205900. AL acknowledges support from NASA ATP grant 80NSSC20K0497.
§ DATA AVAILABILITY
The data supporting the plots and analysis in this article are available on reasonable request to the corresponding author.
|
http://arxiv.org/abs/2409.03017v1 | 20240904181419 | Jet observables in heavy ion collisions : a white paper | [
"Ankita Budhraja",
"Marco van Leeuwen",
"José Guilherme Milhano"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
Jet observables in heavy ion collisions: a white paper
Ankita Budhraja, Marco van Leeuwen, José Guilherme Milhano
========================================================
§ ABSTRACT
This paper presents an overview of a survey of jet substructure observables used to study modifications of jets induced by interaction with a Quark Gluon Plasma. We further outline ideas that were presented and discussed at the New jet quenching tools to explore equilibrium and non-equilibrium dynamics in heavy-ion collisions workshop, which was held in February 2024 at the ECT^* in Trento, Italy. The goal of this white paper is to provide a brief report on the study of jet quenching observables conducted earlier and to present new ideas that could be relevant for future explorations.
§ INTRODUCTION AND MOTIVATION
The heavy-ion programs at the Relativistic Heavy Ion Collider (RHIC) and at the Large Hadron Collider (LHC) have revealed properties of the strongly interacting matter created under extreme conditions <cit.>. This strongly interacting matter, commonly referred to as the quark-gluon plasma (QGP), provides a rich opportunity to study deconfined quarks and gluons. Additionally, this new state of matter offers a unique opportunity to study the phase diagram of Quantum Chromodynamics (QCD). However, since this state of matter exists for only about O(10^-23) s after the collision, one needs to look for natural probes that originate during these collisions in order to study it.
One such approach is to examine the evolution of highly energetic quarks and gluons, generated in the hard interaction of the two nuclei, as they traverse the QGP medium. These energetic partons shower into collimated sprays of particles referred to as jets. By studying the modifications of jets in the QGP, one aims to uncover the microscopic interactions of quarks and gluons with the medium and thereby the properties of the medium itself. An important piece of experimental evidence is the suppression of jets in the medium (even for very high p_T jets) when compared to the vacuum baseline (p-p collisions) <cit.>.
This serves as prominent evidence of the matter that is created. A wide range of efforts has been directed at understanding the modification of the structure of these jets as they traverse the QGP <cit.>.
One of the most exciting advances in the study of jets, both theoretically and experimentally, has been the development of techniques to study the internal structure of jets to determine the properties of the underlying microscopic collisions.
Jet substructure studies in heavy-ion collisions aim at disentangling the properties of the QGP by looking at the modifications of the jets' inner structure in A-A collisions when compared to p-p collisions. However, due to the extraordinary complexity of the system created in these collisions as well as the presence of very large backgrounds, the simple task of relating the modifications in the substructure of jets to medium properties is far from trivial.
In addition, while medium-induced gluon radiation is the main topic of physical interest, additional effects like colour coherence, elastic scattering and medium response are expected to contribute to the observed modifications of jets and their structure. To facilitate an understanding of the medium properties, observables to study jet quenching in heavy-ion collisions should be selected considering two main aspects: sensitivity to the specific physics aspect of interest and the robustness of the observable against underlying event background. Some examples of the specific physics aspects that a given observable may be sensitive to are listed below:
* The angular distribution of the radiation pattern inside a jet: The multiple soft kicks received by partons in the developing shower lead to a broadening of the radiation (resolved by the medium) inside the jet, in addition to extra medium-induced radiation, see Section <ref> below;
* Azimuthal broadening of partons as well as rare hard momentum exchanges with the medium (Molière scattering): In fact, the angular broadening can be directly related to the jet transport coefficient q̂, see Section <ref> below;
* Differences between quark vs gluon-initiated jets: Naively, one expects gluons to interact more strongly with the medium and hence gluon-initiated jets to lose more energy than quark-initiated jets. Designing observables sensitive to such differences could, therefore, help provide a more differential understanding towards the energy loss mechanisms in the medium, see Ref. <cit.> for one such proposed jet observable. Furthermore, observables that can be made sensitive to heavy quark mass effects such as the dead cone can be of particular interest as well.
* Path length dependence of parton energy loss: Partons travelling along directions in which the medium is larger will, in principle, interact more with the medium and lose energy differently than partons travelling in directions where the medium extent is shorter.
* Interference effects in the medium: If the formation time of a radiation is smaller than the distance between the scatterers (incoherent emissions), the radiation is resolved by the medium and can be considered as an independent source for further emissions while if the formation time is larger than the distance between the scattering centers (coherent emissions), multiple scatterers act coherently until the radiation is resolved by the medium.
* Medium response: the back reaction from the medium to jet propagation.
* Formation time of emissions that can be used to tag splittings that happened inside the medium from those outside.
The other important consideration is the robustness of the measured value of a jet observable against contributions from the underlying event background. In heavy-ion collisions, a large number of particles are produced, most of which are not associated to a hard scattering. In data analysis, the p_T-density of the background is estimated on an event-by-event basis using η-ϕ areas outside the jet cone and subtracted. However, statistical fluctuations of the background level inside the jet cone are significant. Some background subtraction methods attempt to provide an event-by-event estimate of the in-cone background <cit.>.
Furthermore, as jets in vacuum are highly collimated, it is particularly interesting to study medium-induced radiation and medium response <cit.>, which typically appear at large angles, where these effects may dominate over the vacuum parton shower physics. Unfortunately, these effects compete with the background effects, in particular at large angles, and excellent control of the background subtraction is needed to study such large-angle radiation.
This article is organized as follows. In Section <ref>, we present some general aspects of physics of jet modification in the medium. In Section <ref>, we outline the major findings of the survey of jet observables conducted in Ref. <cit.>. This analysis aims towards finding the minimal set of observables that provides mutually uncorrelated information about the medium. In Section <ref> and <ref>, we discuss observables for future measurements that were discussed at the ECT^* meeting.[The ECT^* meeting page can be found at this link: <https://indico.ectstar.eu/event/198/timetable>.] Finally, in Section <ref>, we discuss the resilience of jet substructure observables to background contributions in heavy-ion collisions.
§ ANGULAR AND LONGITUDINAL STRUCTURE
Hard scattering processes in the initial state of collisions of hadrons and nuclei produce quarks and gluons with a large transverse momentum. These highly energetic partons radiate as they propagate outward, forming parton showers, which subsequently hadronize, giving rise to jets of high-momentum particles in the final state. Jets are identified in experiments using jet finding algorithms which are formulated in a manner that they are insensitive to soft and collinear structures of the radiation, typically associated with divergences in the theoretical treatment. As a result, the total energy and momentum of a jet is a measure of the energy and momentum of the parton (quark or gluon) that initiates it. In heavy-ion collisions, interactions of energetic partons with the thermal medium induce additional radiation due to elastic as well as inelastic interactions of the parton shower with the medium. A specific feature of the medium-induced radiation is the absence of any collinear enhancement associated to it. As a result of this, medium-induced radiation is typically at large angles. Gluons (or quarks) emitted at such large angles may escape the jet cone, leading to energy loss. In addition, the internal structure of the jet is also modified by the jet-medium interactions.
Generically speaking, the expected effects of jet quenching on the jet structure are: (1) a softening of the longitudinal structure and (2)
a broadening of its transverse structure. The softening of the longitudinal structure is directly driven by additional medium-induced splittings which reduce the number of fragments with a large momentum fraction z and increase the number of fragments at low z. Medium response, i.e. soft partons from the QGP that acquire momentum due to interactions with the jet and end up in the jet cone, may also contribute to an increase of the number of soft fragments. The transverse broadening of the jet structure, on the other hand, is driven by two separate effects: medium-induced radiation (and the corresponding recoil of the jet axis), as well as by momentum broadening due to transverse momentum exchanges with the medium partons (elastic scattering). It is important to note that these effects are with respect to reference jets in p-p collisions produced by partons with the same (initial) energy that fragment without interacting with the medium. In experiments, jet properties are generally compared between p-p and A-A collisions at the same reconstructed jet energy. In A-A collisions, a jet in a given observed p_T range may originate from a parton with higher momentum that has lost a significant amount of energy due to out-of-cone radiation or a parton with only slightly higher momentum that lost a small amount of energy. This leads to a selection bias effect for jets in heavy-ion collisions. The resulting biases can, in principle, be reduced or avoided by either using γ-jet and Z-jet pairs (see Section <ref> below), or through the procedure outlined in <cit.>.
The understanding of jet quenching effects in the medium is then largely driven by the study of modifications to the internal structure of jets. Jet substructure can be characterised by reporting distributions of the longitudinal and transverse (opening angle) distributions, or their moments, called angularities. It is worth noting that while fragment distributions provide an inclusive measure of the jet structure, jet shape variables like the angularities provide a measure for each jet, which may be sensitive to event-by-event variations in path length or energy loss. Another strategy for characterising the jet structure involves going back in the clustering history of the jet and extracting characteristic variables; this is often combined with grooming techniques that aim to provide a robust measure of the 'hardest' jet substructure, e.g. by reporting the momentum fraction and/or opening angle of a hard splitting (z_g, R_g) <cit.>. Additionally, dynamical grooming with different orderings can be utilized to obtain variables like the k_⊥ of the hardest splitting or even to access time structure of the splittings in heavy ion collisions <cit.>.
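For illustration, a minimal numerical sketch of the generalized angularities mentioned above is given below; the constituent kinematics, the simple pT-weighted jet-axis definition, and the (κ, β) values are placeholder choices, and actual analyses use the axis and normalization conventions of the specific measurement.

import numpy as np

def angularity(pt, eta, phi, jet_radius, kappa=1.0, beta=2.0):
    # Generalized angularity: sum_i z_i^kappa * (DeltaR_i / R)^beta,
    # with z_i = pT_i / pT_jet and DeltaR_i the distance to the jet axis.
    pt, eta, phi = (np.asarray(a, dtype=float) for a in (pt, eta, phi))
    jet_pt = pt.sum()
    jet_eta = np.average(eta, weights=pt)
    # pT-weighted circular mean as a simple stand-in for the jet axis in phi
    jet_phi = np.arctan2(np.average(np.sin(phi), weights=pt),
                         np.average(np.cos(phi), weights=pt))
    dphi = np.mod(phi - jet_phi + np.pi, 2.0 * np.pi) - np.pi
    dr = np.hypot(eta - jet_eta, dphi)
    z = pt / jet_pt
    return np.sum(z**kappa * (dr / jet_radius)**beta)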
§ SURVEY OF OBSERVABLES: MAIN RESULTS
Heavy-ion collisions are a complex experimental environment, which also means that individual measurements are generally sensitive to a combination of different physics effects. To identify the jet observables that are most sensitive to jet quenching effects in the medium, a survey of 31 jet observables was conducted, using the JEWEL model as a reference for quenching effects.
The full results are presented in <cit.> and we provide a short summary of the main findings here for reference.
The observables considered are listed in Table <ref>. All the jet observables are calculated based on the constituents of a groomed jet.
Two complementary machine learning-based analyses are employed to study the correlations between these observables separately for p-p and A-A jet samples: linear correlations are studied using Principal Component Analysis (PCA) and non-linear relations are studied using a Deep Auto-Encoder (AE). Below we list the main findings of this survey:
* The information content of the entire set is described by a small number of effective degrees of freedom. For the PCA, these effective degrees of freedom correspond to the first 10 principal components describing ∼90% of the distributions of all input observables. When the non-linear relations are also included through the AE analysis, this further reduces to only 5 nodes on the main hidden layers providing similar reconstruction ability.
It is important to note that these effective degrees of freedom do not necessarily correspond to simple observables. Instead these may correspond to a subset or a combination of all the input observables with suitable weights.
* The correlations between observables are mostly resilient to quenching effects included in the JEWEL model. This is found for both the linear and non-linear analyses: in both cases, the ability to reconstruct the observables in A-A samples using the effective degrees of the p-p sample indicates remarkable resilience of correlations to quenching effects. The effect of quenching is manifested through a strong population migration modifying mostly the mean or most probable values of observables and not the correlation between pairs.
* Specific observables and pairs of observables provide similar discrimination ability as the full set. By training a boosted decision tree (BDT) on all observables and comparing it to the discrimination ability achieved by BDTs trained on each single observable and pairs of these observables, it is found that several observables as well as pairs of observables already exhaust the discrimination ability of the full BDT.
More specifically, from Figure <ref>, we find that the individual observables that are the most sensitive to jet quenching in this study are rz_SD and τ_2, SD, each accounting for 0.99 of the discriminating power of the BDT trained on all observables. Similarly, n_const,SD, r^2z_SD, τ_3, SD, κ_ktD are equally powerful observables, reaching 0.98 of the discriminating power. Additionally, some pairs of observables match the discrimination power of the BDT trained on all observables. These are pairings of rz_SD with (Δ p_T)_SD, τ_3, SD, κ_TD or κ_ktD; also the further pairings of κ_ktD with any of n_const,SD, p_TD_SD, z̅^2_SD, τ_2, SD or τ_3, SD; and κ_TD with τ_2, SD. Pairs involving a dynamical grooming observable and an angularity-type observable dominate this list.
§ REDUCING THE BIASES: γ-JET MEASUREMENTS
The energy loss of a parton propagating through a given amount of QGP has very large fluctuations. In most experimentally relevant situations, the number of medium-induced radiations is small and Poisson fluctuations of this number already lead to a large spread in energy loss, with a sizeable probability of no energy loss <cit.>. Moreover, medium-induced radiation may look similar to vacuum radiation, and/or be partially recaptured by the jet finding algorithm.
As discussed briefly in Section <ref>, when selecting a given momentum range for data analysis or theoretical consideration from an inclusive sample of jets, the sample includes a combination of jets that did not lose energy in the QGP and jets that did lose energy and originate from partons with a larger initial transverse momentum. Due to the steeply falling jet spectrum, the contribution of partons with large energy loss is naturally suppressed; this effect is sometimes referred to as the leading jet bias or fragmentation bias, to indicate that an inclusive sample is generally biased towards jets that did not lose much energy and therefore fragment like jets in p-p collisions. This bias can be reduced by using experimental techniques that use jet pairs or gauge boson-jet pairs to gain access to the initial parton energy.
The clearest access to the initial parton kinematics is provided by the use of γ-jet and Z-boson-jet pairs. At leading order, a perfect balance of the boson and jet momenta is expected: the transverse momentum of the gauge boson is equal to that of the jet. The transverse momentum of photons and Z-bosons can be measured in experiment and since they do not lose energy in the QGP, this provides a direct measure of the initial jet parton momentum. Measurements so far have focused on γ-jet momentum balance <cit.>, as well as measurements of both longitudinal and transverse fragment distributions in the recoil jet <cit.>. These momentum balance measurements show a clear increase of the number of asymmetric pairs, where the reconstructed jet momentum is significantly smaller than the photon momentum. In the currently available analyses, the combination of the photon momentum selection and the minimum jet momentum requirement results in a relatively large cut-off in the energy asymmetry, e.g. x_j/γ=p_T,jet/p_T,γ > 0.3 for p_T^γ = 100 - 158 GeV, see Ref. <cit.>, leading to a significant loss of pairs. With larger data samples in Run 3 and 4 of the LHC, the photon energy range can be increased and the asymmetry selection can be relaxed to reduce the bias on the recoil jet sample. This was discussed in two separate presentations at the workshop <cit.>.
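As a simple illustration of the pair selection discussed above, the following sketch builds the momentum-balance distribution from paired photon and recoil-jet transverse momenta and records the fraction of pairs surviving a minimum-balance cut; the input arrays and the cut value are assumptions, not values from any specific analysis.

import numpy as np

def photon_jet_balance(pt_gamma, pt_jet, x_min=0.3, bins=20):
    # Momentum balance x = pT_jet / pT_gamma for photon-jet pairs,
    # together with the fraction of pairs surviving the asymmetry cut.
    pt_gamma = np.asarray(pt_gamma, dtype=float)
    pt_jet = np.asarray(pt_jet, dtype=float)
    x = pt_jet / pt_gamma
    kept = x > x_min
    hist, edges = np.histogram(x[kept], bins=bins, range=(0.0, 2.0), density=True)
    return hist, edges, kept.mean()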
§ NOVEL OBSERVABLES FOR FUTURE STUDIES
With the goal of better understanding the physics of jet quenching mechanisms, several suggestions were made during the meeting for observables/measurements that can reduce the biases in jet selection. These observables broadly fall into the following classes: energy flow-based, multivariate observables, and γ/Z-tagged observables (see Section <ref>).
Proposals were also made to study asymmetry-based observables that would, in principle, be more robust against background effects than the standard jet observables providing a complementary handle towards properties of the medium. Below we will outline these ideas in more detail.
The first class of observables concerns energy-energy correlators (EECs) that describe asymptotic energy flow in scattering events and exhibit fundamentally distinct properties from standard jet substructure observables, like the ones considered in Table <ref>. Correlation functions are one of the most fundamental objects in a field theory and enjoy a direct description in terms of light ray operators. This enables a direct theory-experiment comparison. On the theoretical front, due to the simple analytic structure of EECs, their vacuum baseline is extremely well controlled with the 2-point energy correlator known precisely up to N^3LL accuracy and up to NLL for the higher-point correlators <cit.>. Furthermore, the energy weighting in the observable naturally suppresses the sensitivity to soft radiation without the need for jet grooming techniques. The presence of an accurate baseline as well as reduced soft sensitivity of the observable makes it an interesting candidate for jet quenching studies in the medium. A few recent efforts have been dedicated to calculating the 2-point EECs in the medium <cit.>.
While the leading order calculations suggest a strong enhancement at large angles, recent efforts towards a higher order description reveal an O(1) deviation suppressing the contribution in this regime <cit.>. Recent theoretical efforts towards a systematically improvable effective field theory framework further indicate the importance of resummation in the small angle region for the two-point correlator <cit.>.
It was additionally proposed to study energy correlator observables using γ-tagged jets to not only mitigate the leading jet bias effects, but also to provide a unique approach to exposing a possible medium response, or the 'wake' generated in the medium by the passage of the jet. Utilizing the Hybrid model, the authors <cit.> studied the shape dependence of the full 3-point energy correlator. By looking at the largest separation between the three directions of the triangle, it was shown that the wake could be exposed in the ratio at large values of the observable. A very first analysis of the two-point energy correlator in the medium was recently presented by the CMS collaboration, revealing interesting modifications at large angular scales, see <cit.> and CMS PAS HIN-23-004 for details. Future theoretical and phenomenological efforts are needed to completely disentangle the different medium effects specifically at large angles.
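For concreteness, a minimal numerical version of the projected two-point correlator inside a single jet is sketched below: all constituent pairs are histogrammed in their angular separation with weights p_T,i p_T,j / p_T,jet^2. This is an illustrative helper, not code from the works cited above, and the binning is left as an input.

import numpy as np

def two_point_eec(pt, eta, phi, bins):
    # Projected two-point energy correlator of a single jet: histogram of the
    # pair separation R_L weighted by pT_i * pT_j / pT_jet^2 over all pairs.
    pt, eta, phi = (np.asarray(a, dtype=float) for a in (pt, eta, phi))
    jet_pt = pt.sum()
    i, j = np.triu_indices(len(pt), k=1)
    dphi = np.mod(phi[i] - phi[j] + np.pi, 2.0 * np.pi) - np.pi
    r_l = np.hypot(eta[i] - eta[j], dphi)
    weights = pt[i] * pt[j] / jet_pt**2
    hist, edges = np.histogram(r_l, bins=bins, weights=weights)
    return hist, edges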
Since the leading jet bias effect reduces the sensitivity of inclusive measurements to jet quenching, multivariate approaches are being considered to provide increased sensitivity.
In a recent work, Ref. <cit.>, the authors showed that by selecting the hardest splitting in a jet above a certain transverse momentum scale k_t^ min and studying its angular distribution, different energy loss mechanisms in the medium may be isolated.
Sufficiently large values of k_t^ min select the regime dominated by vacuum-like splittings and energy loss with little-to-no modification of the internal structure of jets. Varying k_t^ min to lower values enhances the sensitivity of the observable towards other medium effects such as color coherence, while even smaller k_t^ min∼ O(1 GeV) receives contributions from various competing effects such as multiple soft scatterings, medium recoil, as well as non-perturbative hadronization effects, which are hard to disentangle.
In another recent work, Ref. <cit.>, the authors showed that inclusive jet R_AA measurements, combined with jet azimuthal anisotropies over various centrality bins, could provide a distinct probe of the coherence physics in the medium. Here the azimuthal anisotropy is caused by the fact that particles oriented in the direction in which the medium is shorter suffer less energy loss compared to those in the direction where the medium is longer. A key idea of this framework is the determination of the resolved phase space in a jet, where all resolved partons/subjets undergo quenching independently. The amount of resolved phase space is determined by the physics of color decoherence in the medium, with dipoles with an angle smaller than the critical angle θ_c losing energy as a single color source. As the critical angle depends on the distance traversed by the jet in the medium, a study of jet quenching observables as a function of centrality may provide a handle on the coherence physics in the medium.
Finally, a promising novel direction is to use asymmetry observables that are affected by the finite flow velocity of the medium and study its effect on jet observables. It has been shown that the parton energy loss in the medium is not only impacted by the energy density of the medium but will also depend on the strength and direction of the collective flow field <cit.>. Recent developments <cit.> have established a first principles formulation of quenching in the presence of density gradients and flow.
Particles oriented in the direction of medium flow will experience a preferential p_T broadening known as drift <cit.>. Efforts have also been made to study the impact of spatial inhomogeneities in the medium <cit.>. While most current formulations studied the effects of flow on hard partons, its impact on jet substructure measurements was addressed in <cit.>.
The presence of multiple scales in the evolution of jets makes them an interesting probe of the medium dynamics, but also complicates attempts to extract the medium scales of interest. Several approaches were proposed to study the dynamics of medium-induced modifications of parton showers. Generically speaking, it was identified that observables with (a) γ/Z-tagged jet measurements, (b) a well-controlled vacuum baseline, and (c) reduced sensitivity to the underlying event as well as medium response are of potential interest.
§ BACKGROUND CONSIDERATIONS
Heavy ion collisions are a dense environment where a large number of particles are produced, most of which are not associated to a hard scattering event. In presently available data analyses, the p_T-density of the background is estimated on an event-by-event basis using η-ϕ areas outside the jet cone and subtracted, using either an area-based <cit.> or the constituent-based method <cit.>. The remaining statistical fluctuations of the background level inside the jet cone lead to a significant smearing of the measured jet momentum and are generally corrected for using unfolding methods. More recently, machine-learning techniques have been used to develop a subtraction method for the jet momentum which reduces fluctuations below the statistical limit by incorporating multiplicity information in the calculation of the correction <cit.>.
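A minimal sketch of the area-based subtraction described here is given below, assuming the jet and background-patch transverse momenta and areas are already available from a jet finder; the median estimator for the background density ρ is one common choice.

import numpy as np

def subtract_background(jet_pts, jet_areas, patch_pts, patch_areas):
    # Event-wise background density rho estimated as the median pT/area of
    # patches outside the signal jets; each jet is corrected by rho * area.
    rho = np.median(np.asarray(patch_pts, dtype=float) /
                    np.asarray(patch_areas, dtype=float))
    corrected = np.asarray(jet_pts, dtype=float) - rho * np.asarray(jet_areas, dtype=float)
    return corrected, rho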
While the effect of background and background fluctuations on most inclusive jet-shape variables is well-understood, observables that introduce additional cuts or selections may have additional sensitivity to background fluctuations. For example, upward fluctuations of the local background may promote a subjet above the grooming threshold in softdrop-based measurements, leading to an increase of apparent splitting rate in particular at large R, where the relative contribution of the background to the total energy flow is largest <cit.>.
The relative contributions from underlying event background are largest at low momenta and/or large angles to the jet axis, which makes measurements in those regimes the most sensitive to background fluctuations. Genuine medium effects, produced by the transfer of momentum to medium constituents (recoil, or wake effects) are expected to be strongest at momenta close to the thermal scale, where background fluctuations are large. The study of medium response therefore requires excellent control of the background fluctuations. Or, in other words: special care is needed to disentangle background fluctuations from medium response.
A detailed study, presented at the workshop <cit.>, showed that the ability to distinguish between modified jets and vacuum-like ones with the Machine Learning approach of <cit.> is enhanced by the presence of medium response and that this enhancement survives when background fluctuations are accounted for.
§ ACKNOWLEDGEMENTS
We thank Alba Soto-Ontoso, Hannah Bossi, Jack Holguin,
Konrad Tywoniuk, Leticia Cunqueiro, Matthew Sievert and Yen-Jie Lee for helpful discussions during the workshop.
This work is a result of the activities of the Networking Activity 'NA3-Jet-QGP: Quark-Gluon Plasma characterisation with jets' of STRONG-2020 "The strong interaction at the frontier of knowledge: fundamental research and applications" which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 824093.
|
http://arxiv.org/abs/2409.02708v1 | 20240904134422 | Few-shot Multi-Task Learning of Linear Invariant Features with Meta Subspace Pursuit | [
"Chaozhi Zhang",
"Lin Liu",
"Xiaoqun Zhang"
] | cs.LG | [
"cs.LG",
"stat.ME"
] |
Few-shot Multi-Task Learning of Linear Invariant Features with Meta Subspace Pursuit
Chaozhi Zhang, Lin Liu, Xiaoqun Zhang
===========================================================================
§ ABSTRACT
Data scarcity poses a serious threat to modern machine learning and artificial intelligence, as their practical success typically relies on the availability of big datasets. One effective strategy to mitigate the issue of insufficient data is to first harness information from other data sources possessing certain similarities in the study design stage, and then employ the multi-task or meta learning framework in the analysis stage. In this paper, we focus on multi-task (or multi-source) linear models whose coefficients across tasks share an invariant low-rank component, a popular structural assumption considered in the recent multi-task or meta learning literature. Under this assumption, we propose a new algorithm, called Meta Subspace Pursuit (abbreviated as Meta-SP), that provably learns this invariant subspace shared by different tasks. Under this stylized setup for multi-task or meta learning, we establish both the algorithmic and statistical guarantees of the proposed method. Extensive numerical experiments are conducted, comparing against several competing methods, including popular, off-the-shelf model-agnostic meta learning algorithms such as ANIL. These experiments demonstrate that Meta-SP achieves superior performance over the competing methods in various aspects.
§ INTRODUCTION
Modern machine learning methods generally demand extensive amount of data to achieve good learning performance. However, in certain application domains such as clinical medicine and social sciences, collecting a large amount of training data can be extremely cost and time consuming, and sometimes even unethical due to privacy concerns. As a result, data scientists and statisticians will have to face the so-called “data-scarce” or “data-hungry” challenge when deploying those modern machine learning methods <cit.>.
To bypass this difficulty, the multi-task or meta learning paradigm <cit.> has emerged as a promising option both in theory and in practice <cit.>, by borrowing information across different data sources/tasks but sharing certain invariances. Intuitively, multi-task or meta learning methods are advantageous because they are designed to acquire those invariant information shared by all tasks during the learning process.
To make progress in our theoretical understanding of multi-task or meta learning, a large body of the current literature focuses on the multi-task large-dimensional linear models <cit.>, a stylized statistical model but still preserving some essential features of more general multi-task or meta learning problems encountered in real life. Specifically, we are given a set of T tasks, indexed by [T] {1, ⋯, T}. For each task t ∈ [T], we are given access to m independent and identically distributed (i.i.d.) observation pairs (_t∈^m × d, _t∈^m), where _t and _t, respectively, denote the design matrix of d features and the scalar-valued responses/labels, over all m samples in task t. The total sample size is denoted as N ≡ T m. We further assume that for every task t, the response _t follows a noise-corrupted linear model:
_t = _t _t^∗ + _t, t = 1, ⋯, T,
where _t ∈ℝ^m represents the exogenous white noise associated with task t. y_t, j, _t, j, _t, j are reserved for the response, the features, and the random noise for sample j in task t. In addition, x_t, j, k is the k-th coordinate of the feature vector _t, j. We let Θ^∗ [_1^∗, ⋯, _T^∗]^⊤∈^T × d be the collection of the true regression coefficients over all tasks. Since it becomes more difficult to estimate Θ when d becomes larger relative to m (corresponding to data-scarcity/data-hungry regime <cit.>), further structural assumptions on the model (<ref>) are indispensable. A popular choice is the so-called “Hard Parameter Sharing (HPS)” condition by assuming that the regression coefficients over all tasks enjoy a factor structure Θ^∗^∗^∗, where ^∗∈^T × s is the task-varying low-rank component with rank s ≤ d and ^∗∈^s × d is the task-invariant component that maps the intrinsic task-varying coefficient _t^∗∈^s linearly to _t^∗∈^d, for every t ∈ [T]. Intuitively, once the task-invariant ^∗ is at our disposal, the original high-dimensional task can be reduced to easier, lower-dimensional task by projecting the features onto {^∗}, the linear span of the column space of ^∗.
§.§ Related Work
Although exceedingly simple, the model (<ref>) has been analyzed extensively as a first step towards understanding multi-task or meta learning. Due to space limitation, we only highlight several works that are relevant to our development. <cit.> introduced a Method-of-Moments (MoM) algorithm that learns the task-invariant subspace {^∗} via Singular-Value-Decomposition (SVD) of the y^2-weighted empirical Gram matrix of the features , where we recall that y is the response variable. The method, although extremely simple, is nearly rate optimal for estimating {^∗} in sine-angle distance by matching the minimax lower bound derived in the same paper <cit.>, which scales roughly as √(d s/T m), where we recall that d, s, m, T are respectively the feature dimension, the rank of the task-invariant subspace ^∗, the sample size of each individual task, and the total number of tasks. Lower bounds in a similar spirit have also been derived in <cit.> and in <cit.>, where the authors studied a variant of the HPS linear model by allowing the low-rank subspace/representation ^∗ to slightly vary across tasks.
Despite being rate-optimal, as shown later in Section <ref>, MoM does not perform well in practice, even in very standard settings where the feature is drawn from isotropic Gaussian. A modified version of MoM in <cit.> offers improved performance in estimating the subspace, particularly when data is extremely scarce. However, this modified MoM struggles to perform well in practice as the feature dimension and sample size grow (see Figure <ref>). Another alternative method presented in <cit.> incorporates the Burer-Monteiro factorization method <cit.>. However, this method fails to effectively handle cases with very scarce data, and the derived rate of estimating ^∗ does not shrink to zero as the number T of tasks grows, hence a suboptimal rate compared against the minimax lower bound √(d s/T m). <cit.> proposes an algorithm that involves alternating minimization and gradient descent techniques. Nevertheless, as shown in Figure <ref>, this method still requires a substantial amount of data, which may be difficult to acquire in some applications.
In a slightly different vein, <cit.> demonstrate that Model-Agnostic Meta-Learning (MAML) <cit.> and Almost-No-Inner-Loop algorithm (ANIL) <cit.>, which are not designed for linear models, are still capable of learning the shared representation, or the task-invariant subspace ^∗. However, the outcomes of these algorithms are not well-suited for data scarcity situations, as their analysis relies on drawing independent mini-batches at each iteration. As shown in our numerical experiments, ANIL requires more data to achieve a similar error level (see Figure <ref>). This empirical performance suggests that when the data generating mechanism satisfies certain complexity-reducing structures (e.g. the HPS linear model), MAML and ANIL could be sub-optimal because they fail to leverage such structures at the expense of being model-agnostic.
Finally, we mention that <cit.> proposed a trace-norm regularized estimator and derived error bounds of their proposed method[Their error bounds were later improved to nearly optimal by <cit.> under Gaussianity with Gordon's Convex Gaussian Min-Max Theorem [CGMT] <cit.>. However, <cit.> further imposes the constraint T < m; see the statement of their Theorem 3.]. As will be evident later, this approach is the closest to ours as the trace norm can be viewed as a relaxation of the matrix rank used in our method. Nevertheless, they do not provide a satisfactory algorithmic solution to address the optimization problem[In their numerical experiments, the BFGS method is directly used to solve the relevant optimization problems, which may lead to sub-optimal solution. In Section <ref>, we elaborate on this issue when evaluating the finite-sample performance of different methods.].
§.§ A Prelude to Our Approach and Our Contributions
Overall, the existing methods present various strengths and weaknesses, making it essential to develop new approaches that can effectively handle data scarcity by learning the task-invariant subspace effectively. In this paper, we propose a method of directly learning the shared representation, or equivalently the invariant subspace in the HPS linear model, inspired by the matrix rank minimization problem. We will formally introduce our new algorithm later in Section <ref>. In this section, we provide some intuitive explanation of our proposed algorithm.
Specifically, we address the problem of learning task-invariant subspace by treating it as a matrix rank minimization problem. The corresponding constrained matrix rank minimization problem is formulated as follows:
min_X rank (X)
s.t. 𝒜 (X) = b,
where X is the unknown matrix, 𝒜 is a linear map and b is a vector. As in <cit.>, there is a more robust formulation of (<ref>):
min_X ‖𝒜 (X)-b ‖_2
s.t. rank (X) ≤ s,
which is the formulation we actually consider in our method.
The matrix rank minimization problem is closely related to the compressed sensing problem. They become equivalent when X is reduced to an unknown vector, and the objective function is replaced by ‖ X ‖_0. Solving compressed sensing and matrix rank minimization problems exactly, in the form (<ref>), is well known to be NP-hard <cit.>. To find an approximate solution to the compressed sensing problem, it is common to replace the objective function with the ℓ_1 norm, which is the convex envelope of the ℓ_0 norm. Various algorithms, such as iterative hard thresholding <cit.>, subspace pursuit <cit.>, compressive sampling matching pursuit <cit.>, iterative thresholding with inversion <cit.>, and hard thresholding pursuit <cit.>, are typically employed for the original ℓ_0 minimization problem. For the ℓ_1 optimization problem, several algorithms, such as those introduced in <cit.>, <cit.>, <cit.>, and <cit.>, are commonly used.
In analogy, nuclear norm minimization, which replaces the rank of the matrix with its nuclear norm, solves the matrix rank minimization problem (<ref>) approximately. The method proposed in <cit.> exemplifies this approach. As the nuclear norm minimization problem is equivalent to some semi-definite programming (SDP) problem, interior-point methods can be used to solve it, as shown in <cit.>. Several other first-order algorithms, including singular value thresholding (SVT), fixed point continuation (FPC), and the approximate SVD-based FPC algorithm (FPCA), have also been proposed in <cit.> and <cit.>. In our paper, we opt to utilize the iterative hard thresholding algorithm mentioned in <cit.>. This algorithm is a generalization of the iterative hard thresholding algorithm used in compressed sensing <cit.>. Within the context of meta-learning, we designate this algorithm as Meta Subspace Pursuit (Meta-SP). We substantiate its efficacy by demonstrating that it can attain a convergence outcome analogous to the one reported in <cit.>. In practical applications, our approach demonstrates improved statistical properties and computational efficiency empirically.
We now summarize our main contributions as follows:
* On the methodological side, we develop a new, iterative algorithm for learning the task-invariant subspace in the multi-task HPS linear models, contributing a new method to the multi-task/meta/invariant learning paradigm <cit.>. Meta-SP is easy and transparent to implement.
* On the theoretical side, we establish how fast the regression coefficients and task-invariant subspace output by Meta-SP converge to the truth ^∗ and ^∗, as m, d, s vary. In particular, we directly analyze the iterations of Meta-SP and establish the convergence rates at the k-th iteration, for every k ≥ 1. As a consequence, our analysis directly reveals the iteration complexity of Meta-SP.
* Empirically, through extensive experiments based on simulated and real datasets, we demonstrate that Meta-SP outperforms most of the competing methods in several aspects. Several future directions for theoretical investigation are also hinted at by the results of our numerical experiments. For more details, see Section <ref>.
§.§ Organization of the Paper
The rest of our paper is organized as follows. In Section <ref>, we introduce the basic problem setup, the statistical model being analyzed, and key underlying assumptions. The new algorithm is then proposed in Section <ref>, with its theoretical properties, in terms of sample and computational complexities, established in Section <ref>. In Section <ref>, we conduct numerical experiments with both simulated and real datasets that demonstrate the performance of our proposed method in practice, together with an extensive comparison between our method and other competing methods. Finally, we conclude our paper in Section <ref> with a discussion on future research directions.
§.§ Notation
Before proceeding, we collect some notation frequently used throughout the paper. The distributions for multivariate Gaussian and sub-Gaussian random variables are denoted as (μ,Σ) and (μ,Σ), respectively, where μ is the mean and Σ is the covariance matrix. For sub-Gaussian random variables , by (μ, Σ), we mean that the population covariance matrix [( - μ) ( - μ)^⊤] is dominated by Σ in the positive semidefinite sense. In general, we reserve bold fonts for vectors and matrices and regular fonts for their elements.
Given a real-valued vector 𝐛, let 𝐛_q denote its ℓ_q norms. For a real-valued matrix , _2, _, _∗, and [] represent its spectral norm, Frobenius norm, trace norm, and rank respectively. () and ^⊥ () denote the column space spanned by and its orthocomplement. With slight abuse of notation, we denote ^⊥ as a matrix with (^⊥) = ^⊥ (). In general, ^⊥ is not unique, and we just pick one arbitrarily. Throughout the paper, we use the sine angle distance to measure the distance between the column spaces spanned by two different matrices _1 and _2, defined as sin∠ (_1, _2)‖_1^⊤_2^⊥‖_2 <cit.>. Given any positive integer k, we let _k denote the k × k identity matrix. Given a matrix ∈^k_1 × k_2, we introduce the following vectorization operator to flatten the matrix :
(⃗) [ 𝐦_1, ·^⊤, ⋯, 𝐦_k_1, ·^⊤]^⊤.
If is a symmetric positive semidefinite matrix, let ^† denote its Moore-Penrose pseudo inverse (also known as the ridgeless regularized inverse ^†lim_λ↓ 0 ( + λ)^-1): let = ^⊤ be the eigen-decomposition of and let be the diagonal matrix of only the non-zero eigenvalues and be the corresponding eigenvectors, then ^†≡^-1. Finally, given two matrices and of appropriate sizes, ⊙ denotes their matrix Hadamard product.
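For later reference, the sine angle distance can be evaluated numerically as in the following sketch, where both subspaces are represented by d × s matrices with orthonormal columns; this is an illustrative helper rather than code accompanying the paper.

import numpy as np

def sine_angle_distance(B1, B2):
    # sin of the largest principal angle between span(B1) and span(B2), for
    # d x s inputs with orthonormal columns; equals ||B1^T B2_perp||_2.
    residual = B1 - B2 @ (B2.T @ B1)       # component of B1 orthogonal to span(B2)
    return np.linalg.norm(residual, ord=2)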
§ PROBLEM SETUP AND MAIN ASSUMPTIONS
§.§ The statistical model
As alluded to in the Introduction, the Hard Parameter Sharing (HPS) linear model posits a factor structure of the regression coefficients among different tasks: There exists a task-invariant matrix ^∗∈^s × d and a task-varying, but lower-dimensional matrix ^∗∈^T × s, with s ≤ d, such that Θ^∗ = ^∗^∗. Here ^∗≡ [_1^∗, ⋯, _T^∗]^⊤ can be interpreted as the intrinsic task-specific coefficients and ^∗ represents the task-invariant linear map that lifts _i∈^s to _i∈^d for all i ∈ [T]. ^∗ is sometimes called the task-invariant linear representation in the literature <cit.>. ^∗ is assumed to have orthonormal columns as we are only interested in the linear space {^∗} spanned by the columns of ^∗. To avoid clutter, ^∗ will also mean {^∗} when it is clear from the context.
In multi-task or meta learning, an essential idea is to learn the low-dimensional shared representation ^∗ from multiple data sources. Based on the learnt representation, the original high-dimensional problem is reduced to lower dimensions, thus enjoying improved sample and computational complexity. Following the current literature <cit.>, the quality of learning ^∗ is gauged by the sine angle distance between the learnt representation and ^∗, sin∠(, ^∗).
With Model (<ref>) in place, we also need to specify some assumptions on the data. First, we impose the following distributional assumptions on the covariates _i and the additive noise _i:
For every task t ∈ [T], x_t, j, ki.i.d.∼ (0, 1) for j = 1, ⋯, m and k = 1, ⋯, d; and ε_t, j∼ (0, σ^2) for j = 1, ⋯, m.
The distributional assumption imposed on _t implies that it satisfies the Restricted Isometry Property (RIP) (or properties alike such as the Restricted Eigenvalue Conditions <cit.>) with high probability, a common condition imposed on the covariates to ensure identifiability (i.e. well-posedness along any sparse directions). Similar to the existing literature, we also assume the following condition on the intrinsic task-specific regression coefficients ^∗:
[Task Diversity]
Let λ_1, ⋯, λ_s be the largest to the smallest eigenvalues of the Gram matrix of task-varying components averaged over tasks, Ξ^∗ T^-1^∗⊤^∗≡ T^-1∑_t = 1^T_t^∗_t^∗⊤. There exists some universal constant L > 0 such that λ_s≥ L.
This assumption essentially requires that the task-varying information in the regression coefficients is sufficiently diverse across tasks, to the extent that their Gram matrix Ξ^∗ averaged across tasks is well-conditioned.
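To make the setup concrete, a small simulation of the data-generating process in Model (<ref>) under Assumptions <ref>-<ref> is sketched below; the dimensions, noise level, and random seed are arbitrary placeholder values, and the task-invariant component is generated with orthonormal rows (equivalently, its transpose has orthonormal columns).

import numpy as np

def simulate_hps(T=20, m=25, d=100, s=5, sigma=0.5, seed=0):
    # Simulate y_t = X_t theta_t + eps_t with Theta = W B and rank(Theta) <= s.
    rng = np.random.default_rng(seed)
    B_star = np.linalg.qr(rng.standard_normal((d, s)))[0].T   # (s, d), orthonormal rows
    W_star = rng.standard_normal((T, s))                      # task-varying factors
    Theta_star = W_star @ B_star                              # (T, d) coefficient matrix
    X = rng.standard_normal((T, m, d))                        # isotropic Gaussian features
    Y = np.einsum('tmd,td->tm', X, Theta_star) + sigma * rng.standard_normal((T, m))
    return X, Y, Theta_star, B_star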
To ease exposition, we introduce some further notations: 𝕏∈^mT × dT denotes the block-diagonal matrix with block-diagonal elements _t∈^m × d for t = 1, ⋯, T, = (_1^⊤, ⋯, _T^⊤)^⊤∈^N × 1 and = (_1^⊤, ⋯, _T^⊤)^⊤∈^N × 1, with N m T. Additionally, given any Θ∈^T × d, we define the linear mapping : ^T × d→^N × 1 as follows
(Θ) 𝕏·(⃗Θ) ≡[ [ _1 ; ⋱ ; _T; ]] [ [ _1; ⋮; _T; ]].
Hence we can rewrite Model (<ref>) compactly in matrix form as:
= (Θ^∗) + .
Our formulation can also be adapted to the case where different tasks have different sample sizes m_1, ⋯, m_T. To keep the exposition simple, however, we focus on the equal-sized case, i.e. m_1≡⋯≡ m_T≡ m.
§.§ Restricted Isometry Property
As in the compressed sensing literature, the concept of the Restricted Isometry Property (RIP) plays a pivotal role in solving the aforementioned problem through matrix rank minimization. While RIP for sparse vectors in compressed sensing was originally introduced in <cit.>, it has been extended to matrices to address matrix rank minimization, as seen in <cit.>. Here, we define RIP in the context of Model (<ref>):
For any integer r where 1≤ r ≤ d, consider the linear operator :ℝ^T × d→ℝ^N × 1. The operator satisfies the Restricted Isometry Property with the restricted isometry
constant δ_r(), where δ_r() is the smallest constant for which the following inequality holds:
(1-δ_r())Θ_^2 ≤(Θ)_2^2 ≤ (1+δ_r())Θ_^2
for all Θ∈ℝ^T × d with rank(Θ)≤ r.
In our context, we define the linear operator =𝒜/√(m), where 𝒜 is as defined in Equation (<ref>). Equation (<ref>) can be rewritten as:
(1-δ_r())(∑_t=1^T_t_2^2) ≤∑_t=1^T1/√(m)_t_t_2^2 ≤ (1+δ_r())(∑_t=1^T_t_2^2)
Since rank(Θ)≤ r, all _t lie within an r-dimensional subspace. Therefore, there exist an orthonormal matrix ∈ℝ^r × d(^⊤=_r) and a set of vectors {_t}_t=1^T such that _t=^⊤_t. Equation (<ref>) can then be equivalently expressed as:
(1-δ_r())(∑_t=1^T_t^⊤_t) ≤∑_t=1^T1/m_t^⊤_t^⊤_t^⊤_t ≤ (1+δ_r())(∑_t=1^T_t^⊤_t).
This version of the RIP condition can often be justified by the distributional assumptions on the feature vector , using matrix concentration inequalities <cit.>.
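As a crude numerical check of this property, one can draw random rank-r coefficient matrices with unit Frobenius norm and record how far the ratio ‖𝒜(Θ)‖_2^2/m deviates from one; the sketch below does exactly this and is meant as an illustration, not as a computation of the restricted isometry constant.

import numpy as np

def rip_ratio_check(X, r, n_trials=200, seed=0):
    # Spread of ||A(Theta)||_2^2 / m over random rank-r Theta with ||Theta||_F = 1;
    # deviations of this ratio from 1 lower-bound the restricted isometry constant.
    rng = np.random.default_rng(seed)
    T, m, d = X.shape
    ratios = []
    for _ in range(n_trials):
        Theta = rng.standard_normal((T, r)) @ rng.standard_normal((r, d))
        Theta /= np.linalg.norm(Theta)
        a_theta = np.einsum('tmd,td->tm', X, Theta)   # stacks X_t theta_t over tasks
        ratios.append(np.sum(a_theta**2) / m)
    return min(ratios), max(ratios)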
Let _1, ⋯, _m ∈^d_1 × d_2 be independent, centered random matrices of the same size, where _j = 0 and _j≤ L for every j = 1, …, m. Let ∑_j=1^m_j. Define the quantity ν(), which can be viewed as the matrix extension of the variance for scalar Bernstein inequality <cit.>, as follows:
ν() max{ (^⊤), (^⊤)}
≡max{ (∑_j=1^m_j_j^⊤), (∑_j=1^m_j^⊤_j)}.
Then we have:
ℙ (≥ u) ≤ (d_1 + d_2) ·exp{-u^2 / 2/ν () + L u / 3} for all u ≥ 0.
The proof of Lemma <ref> can be found in <cit.>. Armed with this lemma, we obtain the following:
The linear operator =𝒜/√(m) where 𝒜 is defined in Equation (<ref>), satisfies the Restricted Isometry Property with the restricted isometry constant:
δ_r()≤√(8(a+1)r-4/3mlog2r/ϵ)
with probability at least (1-ϵ)^T(1-15/a^2)^rT for any a>0.
The proof of Theorem <ref> is presented in Appendix <ref>. It demonstrates that when the data generating mechanism follows Assumption <ref>, the associated linear operator 𝒜 satisfies the RIP condition with high probability. This guarantee forms the building block for the convergence analysis of our proposed method.
§ OUR METHOD
In this section, we describe the new methodology for learning the invariant subspace ^∗ under the HPS linear model. The method is coined the Meta Subspace Pursuit algorithm, or Meta-SP for short.
In a nutshell, Meta-SP consists of two steps. In the first step, task-specific regressions are solved jointly to obtain the coefficient matrix estimator Θ̂. To simplify exposition, we assume that the rank s of Θ^∗ (or equivalently, the dimension s of the subspace ) is known, whence Θ̂ is the solution to the following constrained optimization problem:
min_Θ ℒ(Θ):=𝒜()-_F^2,
s.t. rank() ≤ s
Here, 𝒜 (Θ) is the linear mapping defined in Equation (<ref>).
In the second step, with Θ just attained from step one, it is then straightforward to apply either QR matrix factorization or Singular Value Decomposition (SVD) to obtain the task-invariant subspace B. In the following, we introduce the proposed Meta-SP algorithm for solving this constrained optimization problem.
§.§ The Meta-SP Algorithm
The algorithm can be viewed as a variant of the iterative hard thresholding (IHT) algorithm, initially introduced for solving compressed sensing problems in <cit.>. The IHT algorithm has since then been adapted and generalized to handle matrix rank minimization problems, similar to the formulation in Equation (<ref>). Here we further adapt the matrix rank minimization problem to solve the task-invariant subspace learning problem. Since (<ref>) and (<ref>) have the same form when 𝒜() and are vectorized, we can use this kind of algorithm to solve (<ref>).
The Meta-SP algorithm consists of the following iterative subroutines:
* Gradient Descent (GD) Step. In each iteration, a GD update is performed independently for each task. This step aims to reduce the task-specific loss. For the t-th task, the (k + 1)-th iteration is updated according to the following expression:
_t^(k+1) = _t^(k) - ∇__tℒ () |__t = _t^(k) = _t^(k) + γ/m_t^⊤ (_t - _t _t^(k))
where γ is the step size and _t^(k) is the regression coefficients updated after the k-th step to be defined later. The updated regression coefficients are also concatenated as Θ̂^(k+1) [_1^(k+1), ⋯, _T^(k+1)]^⊤.
* Hard Thresholding (HT) Step. After the GD update, a hard thresholding operation is then applied to the singular values of the concatenated coefficient matrix Θ̂^(k + 1) just obtained. Specifically, we first use SVD to factorize Θ̂^(k + 1) as Θ̂^(k+1)=^(k+1)^(k+1)^(k+1)^⊤ where ^(k+1)∈ℝ^T × T, ^(k+1)∈ℝ^T × d and ^(k+1)∈ℝ^d × d are the matrices for the left singular vectors, the singular values, and the right singular vectors, respectively. Then, only the largest s singular values and the corresponding left and right singular vectors are retained, recalling that s is the task-invariant subspace dimension. This step enforces the rank constraint over iterative updates of . The result of the hard thresholding step is a new matrix Θ^(k + 1) = ^(k + 1)^(k + 1)^(k + 1)^⊤, where ^(k+1)∈ℝ^T × s and ^(k + 1)∈ℝ^d × s are the first s columns of ^(k+1) and ^(k + 1), and ^(k + 1)∈ℝ^s × s is the diagonal matrix with the first s diagonal singular values of ^(k + 1). For short, we denote this hard thresholding operation as _s (·), so _s (Θ̂^(k + 1)) = Θ^(k + 1).
* Representation Learning Update. The right singular vectors ^(k + 1) ⊤ from the HT step is taken as the learnt task-invariant subspace ^(k + 1) at the (k+1)-th iteration, since it consists of orthonormal columns by construction.
The Meta-SP algorithm iteratively performs the above three steps until a stopping criterion is met. The complete procedure is presented in Algorithm <ref>. It is noteworthy that the step size γ is an important hyperparameter – the algorithm converges to the desired solution only if γ is set appropriately.
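A minimal numerical sketch of these iterations is given below; the zero initialization, unit step size, and fixed iteration count are illustrative choices, and the precise procedure (including the stopping criterion) is the one given in Algorithm <ref>.

import numpy as np

def meta_sp(X, Y, s, step=1.0, n_iter=200):
    # X: (T, m, d) designs, Y: (T, m) responses, s: target subspace dimension.
    T, m, d = X.shape
    Theta = np.zeros((T, d))                              # zero initialization
    for _ in range(n_iter):
        # (1) per-task gradient step on the squared loss
        residual = Y - np.einsum('tmd,td->tm', X, Theta)
        grad = np.einsum('tmd,tm->td', X, residual)       # rows are X_t^T (y_t - X_t theta_t)
        Theta_half = Theta + (step / m) * grad
        # (2) hard thresholding: keep only the top-s singular values
        U, S, Vt = np.linalg.svd(Theta_half, full_matrices=False)
        Theta = (U[:, :s] * S[:s]) @ Vt[:s, :]
    # (3) the leading right singular vectors span the learnt invariant subspace
    B = Vt[:s, :]
    return Theta, B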
§.§ Theoretical analysis of Meta-SP
In this section, our goal is to establish error bounds between the estimator Θ^(k) obtained by the Meta-SP algorithm and the ground truth Θ^∗. Additionally, we aim to bound the sine angle distance between the subspaces 𝐁^(k) and ^∗.
In this context, we operate under the assumption that the Restricted Isometry Property (RIP) is valid, as established in Theorem <ref>. This assumption provides us the leverage to establish error bounds for the Θ^(k) estimator. Our primary theorem is presented below.
Under Assumptions <ref> to <ref>, we suppose the restricted isometry constant of =𝒜/√(m) is δ_s()(𝒜 is defined in Equation (<ref>) and δ_s() satisfies Theorem <ref>). If 1-1/2√(2)/1-δ_3s<γ≤1, for any a > 0, with probability at least (1 - ϵ) (1 - 15/a^2)^3s( 1 - 2/d - 14/m - 208/dm)^T, the following holds:
Θ^∗ - Θ^(k)_F ≤[2√(2)(1-γ+γδ_3s())]^kΘ^∗ - Θ^(0)_F + 2√(2)/2√(2)(1-γ+γδ_3s())√(dTσ^2/m).
In particular, if γ=1,
Θ^∗ - Θ^(k)_F ≤( 192(a+1)s-32/3mlog6s/ϵ)^k/2Θ^∗ - Θ^(0)_F + 2 √(2)σ/1 - √(192 (a+1) s - 32/3mlog6s/ϵ)√(dT/m).
Moreover, as k →∞, the above bounds can be simplified to:
Θ^∗-Θ^(k)_F ≤ O(√(σ^2dT/m))
The proof of Theorem <ref> is deferred to Appendix <ref>. Our proof loosely follows the strategy of <cit.>. However, during the proof, we need to derive more refined error bounds depending on both the sample size m and the data dimension d.
The bound in Frobenius norm shown in Theorem <ref> can then be used to establish the convergence rate of the learnt task-invariant subspace ^(k) after the k-th iteration of Meta-SP to the truth ^∗.
We give a theoretically provable range for γ in Theorem <ref>. In practice, the admissible range could be wider.
Under the same assumptions of Theorem <ref>, we have:
sin∠ (^(k), ^∗)≤Θ^∗-Θ^(k)_F/√(L_s T).
Furthermore, with the error bound for Θ^∗ - Θ^(k)_F shown in (<ref>), i.e. with probability at least (1 - ϵ) (1 - 15/a^2)^3s( 1 - 2/d - 14/m - 208/dm)^T, we have
sin∠ (^(k), ^∗)≤ O ( √(σ^2d/L_sm)).
The proof of Theorem <ref> is provided in Appendix <ref>.
§.§.§ Remarks on the Main Theoretical Results
Theorem <ref> provides valuable insights into the error bound of Θ^(k) and Θ^∗. This result places significant reliance on the RIP condition, which in turn necessitates certain conditions to be met. Specifically, for the restricted isometry constant in Theorem <ref> to be small, it implies that the value of m should not be small (m≥Ω(s)). Additionally, the distance of the initial point should not be excessively large. In practice, the choice of the zero point is often suitable and aligns with the selection made in the subsequent numerical experiments. Importantly, this error bound diminishes as m increases.
In Theorem <ref>, we give a bound of the step size γ. Since we use a lot of reduction methods in the proof of this theorem, the available step size range can be larger in practice.
In practice, we often focus more on the distance metric sin∠(^(k),^∗). Theorem <ref> demonstrates that this distance can be bounded by the distance presented in Theorem <ref>, and it also tends to decrease as m increases. Notably, this distance should also decrease with an increase in T, as observed in the numerical experiments in the next section. While this has been observed empirically, further theoretical validation remains a topic for future research.
Finally, we comment on how the theoretical results scale with sample size per task m and compare the scaling with that of several other algorithms. Recall that we require m = Ω(s log s), which results in a smaller m requirement compared to other methods. For instance, Burer-Monteiro factorization in <cit.> necessitates m≥Ω(s^4log(T)), and alternating minimization in <cit.> requires m≥Ω(s^2log(T)). The Method of Moments in <cit.> indeed shares a similar requirement with our approach, where it requires m = Ω (s log s). The error bound of the obtained Θ for this method, as specified in Theorem <ref>, is O(σ√(sσ^2sd+T/m)+s√(d/m)). This error bound may not be as favorable, as evident from the numerical results presented in the subsequent section. Another method, nuclear norm minimization in <cit.>, has been shown to require m≥Ω(1), which is favorable in theory but may not perform as well in practical scenarios, especially when s is small. The error bound of the obtained Θ for this method, as specified in Theorem <ref>, is O ( σ√(s ( d^2/m^2 + T/m))+√(s d/mmax{d, T}/m)). This bound is less favorable when the noise level σ is small. In summary, our method holds a distinct advantage when dealing with scenarios where the sample complexity per task m is small.
§ NUMERICAL RESULTS
§.§ Simulated Data
In this section, we conduct extensive numerical experiments to evaluate our proposed method, benchmarked against several competing methods, including both classical model-based methods such as the Method of Moments () and modern model-agnostic methods such as . Specifically, we consider a setup where d = 100 and s = 5, following the experimental setup outlined in <cit.>, <cit.> and <cit.>. We use the following performance metrics to compare the different methods: (1) the normalized squared Frobenius norm distance
𝐃𝐢𝐬𝐭_1 (Θ^†, Θ^∗) := ‖Θ^† - Θ^∗‖_F^2/T
and (2) the sine angle distance
𝐃𝐢𝐬𝐭_2 (^†, ^∗) := sin∠(^†, ^∗)
where Θ^† and ^† denote arbitrary values that Θ and can take, and Θ^∗ and ^∗ denote the corresponding ground truths. Using repeated draws from the true data-generating process (DGP), we investigate on average how these performance metrics are affected by key parameters, including the task-specific sample size m, the total number of tasks T, and the noise level σ. Additionally, we examine how they evolve with iteration steps and running time in a single experiment. More details on the experimental setups can be found in the Appendix.
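For concreteness, both metrics reduce to a few lines of linear algebra. The following is a minimal NumPy sketch (our own illustration, not code from any of the compared packages), assuming Θ is stored as a d×T array with one column per task and the subspaces are given as d×s arrays with (approximately) orthonormal columns.

```python
import numpy as np

def dist1(theta_hat, theta_star):
    """Normalized squared Frobenius distance ||Theta_hat - Theta_star||_F^2 / T,
    for coefficient matrices of shape (d, T) with one column per task."""
    return np.linalg.norm(theta_hat - theta_star, ord="fro") ** 2 / theta_star.shape[1]

def dist2(B_hat, B_star):
    """Sine of the largest principal angle between the column spaces of the
    (d, s) matrices B_hat and B_star."""
    Q_hat, _ = np.linalg.qr(B_hat)     # orthonormalize defensively
    Q_star, _ = np.linalg.qr(B_star)
    sigma_min = np.linalg.svd(Q_star.T @ Q_hat, compute_uv=False).min()
    return np.sqrt(max(0.0, 1.0 - sigma_min**2))
```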
We compare our method with several alternative approaches:
* and : The alternating minimization and alternating minimization gradient descent algorithms studied in <cit.>.
* : The ANIL algorithm outlined in <cit.>, which utilizes all available data in each iteration.
* : The Burer-Monteiro factorization method as presented in <cit.>.
* : The Method of Moments algorithm described in <cit.>.
* : Algorithm 2 presented in <cit.>.
* : The nuclear norm minimization method proposed in <cit.>.
* : Our method.
Figures <ref> and <ref> showcase how 𝐃𝐢𝐬𝐭_1 and 𝐃𝐢𝐬𝐭_2 change as the number of tasks T varies. When m = 25, it is notable that , , and achieve lower errors compared to other methods. Notably, , designed to identify the meta-representation in a model-agnostic way, is consistently outperformed by other methods under the squared distance induced by the matrix Frobenius norm. When m = s = 5, the scenario corresponding to the extreme data-scarcity setting, methods like , , and fail to learn either the regression coefficients (the left panel) or the task-invariant representation (the right panel). In this scenario, our proposed approach exhibits superior performance over the other methods that still exhibit shrinking estimation error as T grows, including , , , and .
Figure <ref> illustrates how the two metrics change as m varies. Notably, our proposed method consistently maintains strong performance across different m's. Particularly, for small m values (m < 20), , , and exhibit diminished performance. However, as m increases, the performance of and becomes better, and is eventually comparable to .
Figure <ref> illustrates how the two metrics change as σ varies. With m = 25, , , and perform better than other methods in general. Additionally, these three methods, along with , exhibit the ability to reduce errors as σ diminishes. More comprehensive numerical results regarding situations where m is very small or σ=0 can be found in Appendix <ref>. Notably, fails to achieve zero error in the noiseless setting, since the implemented algorithm provided in <cit.> did not solve the optimization problem well[As explained in the Introduction, the BFGS method is used directly to solve the optimization problem, which can lead to sub-optimal numerical performance. Theoretically, supported by the results of <cit.>, should achieve zero error in the noiseless setting if the numerical optimization step achieves negligible iteration errors.].
Figure <ref> along with Table <ref> presents the empirical minimum amount of data required for each method to attain a sine angle distance ≤ 0.1 in one simulation. Notably, and require an extensive amount of data and are thus excluded from this comparison. Strikingly, exhibits the most favorable performance in terms of data requirement, particularly when confronted with a small sample size m per task.
Figure <ref> shows how 𝐃𝐢𝐬𝐭_1 and 𝐃𝐢𝐬𝐭_2 change over iterations and time for the four iterative methods: , , and . Evidently, demonstrates significantly better computational efficiency compared to the other methods. Although requires fewer iterations, its efficiency is hampered by the larger computation time per iteration. Notably, there are instances, as demonstrated in Appendix <ref>, where still requires more iterations than when dealing with small values of m.
In general, exhibits the highest computational efficiency and consistently achieves superior statistical accuracy across various scenarios. While and are shown to be effective when data is not excessively scarce, they are ill-suited when the sample size of each task is small. Furthermore, demands substantial computational resources per iteration. Its variant, , offers improved speed but lags behind other methods in terms of performance based on the two evaluation metrics. , , , and can effectively handle scenarios with highly restricted data. Although performs comparably to our method (), it falls short when confronted with extremely low noise intensity. On the other hand, only provides a subspace estimation, yielding less satisfactory results. Despite the conciseness of and , their demanding data requirements and suboptimal performance undermine their utility.
§.§ Real Data Analysis
In this section, we further examine the performance of and other competing methods in a real dataset. In particular, we use the air quality data from weather stations across different regions of China in 2023, which is originally from the China national urban air quality real-time publishing platform of the China Environmental Monitoring Station[The website of China national urban air quality real-time publishing platform is https://air.cnemc.cn:18007/https://air.cnemc.cn:18007/ and the dataset can be downloaded from https://quotsoft.net/air/#archivehttps://quotsoft.net/air/#archive.]. Model (<ref>) is adopted to study the relationship between the average PM2.5 in a day and the hourly CO value, hourly NO2 value, date, and geographical location. Specifically, the regression model for each weather station is treated as a single task, where the average value of 24-hour PM2.5 is the response variable (i.e., y), and the hourly CO value (24-dimensional), hourly NO2 value (24-dimensional), date (1-dimensional), and (geographical) location coordinates (2-dimensional) are combined into covariates or features (i.e., ), together with the intercept (all 1 vector). After removing corrupted data, outliers, and abnormal tasks, we are left with T = 1,210 tasks, each with a maximum sample size of 293 and a minimum sample size of 87 (i.e. m_t∈ [87, 293] for t = 1, ⋯, T). In order to better align with the modeling assumptions and the algorithmic implementation in this paper, we preprocess the data in the following steps:
* Normalize the location coordinates of different tasks within the range [-1,1].
* Sequentially assign date values x from 1,2,⋯,365 for each task. Since PM2.5 is generally low in summer and high in winter, we transform the date values to x=|x-183| and then standardize them, x ← (x-x̄)/σ_x, where x̄ and σ_x are the mean and standard deviation computed from the data.
* Take the logarithm of the CO and NO2 values and standardize each dimension in every task.
* Take the logarithm of PM2.5 values.
We assume that the coefficients of all tasks lie in an r-dimensional space. 80% of the tasks are used to train the model to get the low-dimensional subspace (), referred to as meta tasks. The remaining tasks are used to verify the validity of the resulting subspace, referred to as test tasks. For each meta task and test task, we divide the data into training points (randomly selecting m points) and test points (the rest). Training points in meta tasks are trained with different methods to obtain the subspace and cross-task regression coefficients . Test points in meta tasks are used to evaluate the results of . Given , training points (_j,_j) in a test task are used to obtain the intrinsic low-dimensional coefficients by computing
_j=((_j)^⊤(_j))^-1((_j)^⊤_j)
and
_j = _j.
Then test points in test tasks are used to verify the performance of the obtained coefficients, which also reflects the performance of . The training and testing process is divided into four stages: training points in meta tasks are used to train the model with different methods (meta-train); test points in meta tasks are used to test the model (meta-test); training points in test tasks are used to solve the regression problem with (test-train); and test points in test tasks are used to test the performance (test-test).
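A minimal sketch of this test-train step is given below, assuming the learnt subspace is available as a d×r NumPy array and each test task provides a design matrix and response vector; the function and variable names are illustrative rather than taken from any existing implementation. The relative-error helper corresponds to the MRE metric described next.

```python
import numpy as np

def fit_test_task(B_hat, X_train, y_train):
    """Solve the reduced least-squares problem on the learnt subspace B_hat
    (d x r) and return the full task coefficient vector theta_j = B_hat w_j."""
    Z = X_train @ B_hat                      # (m, r) reduced design matrix
    w, *_ = np.linalg.lstsq(Z, y_train, rcond=None)
    return B_hat @ w

def mean_relative_error(theta, X_test, y_test):
    """Mean relative error of the predictions on the held-out points of a task."""
    y_pred = X_test @ theta
    return np.mean(np.abs(y_pred - y_test) / np.abs(y_test))
```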
We focus on the results of the meta-test, test-train, and test-test stages. For each stage, we compute the mean relative error (MRE) of the predicted PM2.5 values for each task, and use the mean of the MRE (M-MRE) over all corresponding tasks as the performance metric. Additionally, we use two other methods as baselines in the test tasks:
* : is randomly generated when solving regression problems in test tasks;
* : Combine the least square method and pseudo inverse, i.e. _j=(_j^⊤_j)^†_j^⊤_j.
We set r=6 and the results of different methods[ failed in the experiments of real data, so it is not shown in the results. Other approaches mentioned in Section <ref> are all compared here.] for various m are shown in Tables <ref>, <ref>, <ref>. The best-performing results are highlighted in red bold font and the second-best results are highlighted in blue bold font.
We also investigate the performance of the methods for different r values. The results are shown in Tables <ref>, <ref>, and <ref> for the meta-test, test-train, and test-test stages, respectively. In these tables, m is set to 30, and the best results are highlighted in red bold font, while the second-best results are in blue bold font.
The results in the tables show that our method consistently ranks among the top two methods in different situations and is often the best one. Additionally, the methods based on Model (<ref>) significantly outperform the baseline methods, indicating that Model (<ref>) could be appropriate for investigating the given dataset.
§ CONCLUDING REMARKS
In this study, we introduced the concept of utilizing matrix rank minimization techniques to address challenges posed by multi-task linear regression problems with limited data availability. Our proposed approach, termed (), demonstrates efficacy in resolving such problems. Moreover, we conducted a theoretical analysis of the error bounds associated with the estimations produced by this algorithm. Through experimental validation, showcased superior accuracy and efficiency compared to existing methodologies, particularly in scenarios with exceptionally scarce data. This underscores the potential for improved solutions in cases involving very limited samples.
Nevertheless, it is important to acknowledge that, based on our experimental findings, the error bounds we established may not be the tightest possible. Notably, the error associated with the sine angle distance displays a gradual reduction with increasing values of T. This intriguing observation warrants further exploration and validation in future research endeavors.
In this paper, we assume that the rank s of the common subspace is known. The immediate next step is to show that our procedure adapts to unknown s. Another interesting direction is to analyze related algorithms, including , under the so-called “proportional asymptotic limit” <cit.> (s ≍ d ≍ n and s ≤ d), an arguably more realistic high-dimensional regime than the d ≫ n or d ≪ n, d →∞ regimes, using techniques from statistical physics <cit.>.
It is also of theoretical interest to characterize the fundamental limit of estimating the common subspace when the task diversity assumption may be nearly violated, using the local minimax rate framework recently put forth in the theoretical computer science and statistics literature <cit.>. In most modern applications, the HPS linear model is at best an approximation of the actual data generating mechanism. Finally, developing methods for its non- or semi-parametric extension, such as HPS single index models or model-agnostic settings <cit.>, is another important research direction that deserves more attention.
§ ACKNOWLEDGMENTS
This work is supported by NSFC Grants No.12090024 (CZ, XZ, LL) and No.12471274 (LL), Shanghai Municipal of Science and Technology Grants 21JC1402900 (LL) and 2021SHZDZX0102 (XZ, LL).
§ PROOFS
§.§ Proof of Theorem <ref>
We analyze the concentration result for each term in Equation (<ref>). Using Lemma <ref>, we set _j to be the matrix 1/m(_t, j_t, j^⊤^⊤-_r). Consequently, =1/m_t^⊤_t^⊤-_r.
We denote _l as the l-th row vector of and _l_2=1. Since _t,j∼(0_d,_d), it follows that [(_l^⊤_t,j)^2]=1. According to [|z|^k]≤(2σ^2)^k/2kΓ(k/2) for any z∼(0,σ^2), we obtain [(_l^⊤_t,j)^2]≤15. By applying Chebyshev's Inequality, we can infer that ℙ(1-a ≤ (_l^⊤_t,j)^2≤ 1+a)≥1-15/a^2 for any a>0. Consequently, we have ^⊤_t,j_2^2=∑_l=1^r(_l^⊤_t,j)^2≤ (a+1)r with probability at least (1-15/a^2)^r for any a>0.
Subsequently, _j can be bounded as follows:
_j = 1/m(_t,j_t,j^⊤^⊤-)≤1/m_t,j_t,j^⊤^⊤ + 1/m = 1/m_t,j_2^2 + 1/m≤(a+1)r+1/m.
This holds with high probability. Since _j is Hermitian, we can compute the matrix variance statistic ν() under the assumptions stated in Lemma <ref>:
ν() = ∑_j=1^m𝔼_j^2
The variance of each term is calculated as:
𝔼_j^2 = 1/m^2( [_t,j(_t,j^⊤^⊤_t,j)_t,j^⊤^⊤]-)
= 1/m^2( [_t,j_2^2_t,j_t,j^⊤^⊤]-)
≼1/m^2{(a+1)r [_t,j_t,j^⊤^⊤]-}
≼(a+1)r-1/m^2
Thus, 𝔼_j^2≤(a+1)r-1/m^2. As a result, we find that:
ν() ≤∑_j=1^m𝔼_j^2 = (a+1)r-1/m
Applying Lemma <ref> with the aforementioned conditions and letting u=δ_r^(t)(assuming u≤1), we have
1/m_t^⊤_t^⊤-≤δ_r^(t)≤√(8(a+1)r-4/3mlog2r/ϵ).
This holds with probability at least (1-ϵ)(1-15/a^2)^r for any a>0. If it holds for all i, we can set δ_r()=√(2(a+4)r+8/3mlog2r/ϵ) and sum up Equation (<ref>) over i. This leads to the conclusion that Equation (<ref>) holds with the probability at least (1-ϵ)^T(1-15/a^2)^rT, which in turn implies that Equation (<ref>) holds with this probability.
§.§ Proof of Theorem <ref>
To start the proof, we first recall some relevant definitions and results from <cit.> and <cit.>.
Given a set of r rank-one matrices = {ψ_1,⋯,ψ_r}, there exists a set of s orthonormal matrices Γ = {γ_1, ⋯, γ_s} in the Frobenius sense[That is, ⟨γ_i, γ_j ⟩_ = 0 if i j and γ_i_ = 1 for all i, such that (Γ) = (). Here the inner product between two matrices is simply Frobenius inner product.]. We call Γ an orthonormal basis for the space (). We use P_ΓΘ to denote the projection of Θ onto the space (Γ). By definition, P_ΓΘ≡ P_Θ and
rank (P_ΓΘ) ≤ r, ∀Θ.
Assume that the rank-r matrix Θ_r has the singular value decomposition Θ_r=∑_i=1^r σ_iu_iv_i^⊤. Γ:={u_1v_1^⊤,u_2v_2^⊤,⋯,u_rv_r^⊤} is
called an SVD basis for the matrix Θ_r. Note that the elements in Γ are orthonormal rank-one matrices.
Suppose that the linear operator :ℝ^T × d→ℝ^N × 1 satisfies the RIP with constant δ_r(). Let be an arbitrary orthonormal subset of ℝ^T × d such that
rank(P_Θ) ≤ r, ∀Θ∈ℝ^T × d. Then, for all b∈ℝ^N × 1 and Θ∈ℝ^T × d, the following properties hold:
P_^∗b≤√(1+δ_r())b_2,
(1-δ_r())P_Θ_≤P_^∗ P_Θ_≤ (1+δ_r())P_Θ_
Suppose that the linear operator :ℝ^T × d→ℝ^N × 1 satisfies the RIP with constant δ_r(). Let ,^' be an arbitrary orthonormal subset of ℝ^T × d such that
rank(P_∪^'Θ) ≤ r, ∀Θ∈ℝ^T × d. Then the following property hold:
P_^∗(I-P_)Θ_≤δ_r()(I-P_)Θ_, ∀Θ∈ (^')
Suppose is the best rank-s approximation to the matrix , and Γ is an SVD basis of . Then for any rank-s matrix _s and SVD basis Γ_s of _s, we have
‖ P_B-P_B‖_F ≤‖ P_B_s-P_B‖_F,
where B is any orthonormal set of matrices satisfying (Γ∪Γ_s)⊆ B.
Since is the best rank-s approximation to the matrix and the rank of _s is s, ‖-‖_F ≤‖_s-‖_F. Hence,
‖ P_B(-) ‖_F^2 + ‖ (I-P_B)(-) ‖_F^2 ≤‖ P_B(_r-) ‖_F^2 + ‖ (I-P_B)(_r-) ‖_F^2.
Since (I-P_B)=0 and (I-P_B)_r=0, this reduces to the result.
Provided here is a pivotal lemma that sets apart the proof in this paper from the approach presented in <cit.>.
Given the linear operator =𝒜/√(m) and the sub-Gaussian noise vector , the following inequality holds with probability at least (1-2/d-14/m-208/dm)^T:
^∗_≤^∗_≤√(2dTσ^2).
As ^∗_^2=∑_t=1^T_t^⊤_i_2^2/m, we can analyze _t^⊤_t_2^2/m for each task t individually. The expectation of it is [_t^⊤_t_2^2/m]= [1/m_t^⊤_t_t^⊤_t]=1/m [_t^⊤ [_t_t^⊤]_t]=dσ^2. According to [|z|^k]≤(2σ^2)^k/2kΓ(k/2) for any z∼(0,σ^2), the corresponding variance is
[_t^⊤_i_2^2/m] = [(∑_j=1^m∑_k=1^m_t,j^⊤_t,kε_t,jε_t,k)^2/m^2]-( [_t^⊤_t_2^2/m])^2
≤ [15m(d^2+14d)+(m^2-m)(d^2+2d)]σ^4/m^2-d^2σ^4
= (2dm^2+14d^2m+208dm)σ^4/m^2.
By Chebyshev’s inequality, we can establish
ℙ(dσ^2-u≤_t^⊤_t_2^2/m≤ dσ^2+u)≥1-(2dm^2+14d^2m+208dm)σ^4/u^2m^2
for any u > 0. Here, we take u = dσ^2, resulting in _t^⊤_t_2^2/m≤ 2dσ^2 with a probability at least 1-2/d-14/m-208/dm. By summing up these inequalities over t, the lemma is proved.
Then we give the proof of Theorem <ref>.
Assuming that the Restricted Isometry Property (RIP) holds, as established in Theorem <ref>, let Γ^∗ and Γ^k be the SVD bases of Θ^∗ and Θ^(k), respectively. We denote an orthonormal basis of the subspace (Γ^∗∪Γ^k) as _k.
Starting with the relation Θ^∗-Θ^(k+1)_ = P__k+1Θ^∗-P__k+1Θ^(k+1)_ and Lemma <ref> (note that one can simply conduct a verbatim application of the lemma once replacing _r by ^*.), we can further expand this as follows:
Θ^∗-Θ^(k+1)_
= P__k+1Θ^∗-P__k+1Θ^(k+1)_
= P__k+1Θ^∗-P__k+1Θ̂^(k+1)+P__k+1Θ̂^(k+1)-P__k+1Θ^(k+1)_
≤ P__k+1Θ^∗-P__k+1Θ̂^(k+1)_ + P__k+1Θ̂^(k+1)-P__k+1Θ^(k+1)_
≤ 2P__k+1Θ^∗-P__k+1Θ̂^(k+1)_
Here, we define Z^k=Θ^∗-Θ^(k), which results in Θ̂^(k+1)=Θ^(k)+γ^∗( Z^k+/√(m)). Continuing the derivation, we obtain:
P__k+1Θ^∗-P__k+1Θ̂^(k+1)_
= P__k+1Θ^∗-P__k+1Θ^(k)-γ P__k+1^∗ Z^k-γ P__k+1^∗/√(m)_
= P__k+1Z^k-γ P__k+1^∗ P__k+1Z^k-γ P__k+1^∗(I-P__k+1)Z^k-γ P__k+1^∗/√(m)_
≤ (I-γ P__k+1^∗ P__k+1)P__k+1Z^k_ + γP__k+1^∗(I-P__k+1)Z^k_ + γ/√(m)P__k+1^∗_.
By using γ≤1 and utilizing Propositions <ref> and <ref>, the following inequalities are established with probability at least (1-ϵ)(1-15/a^2)^3s(1-2/d-14/m-208/dm)^T for any a>0:
P__k+1Θ^∗-P__k+1Θ̂^(k+1)_
≤ (1-γ+γδ_2s()) P__k+1Z^k_ + γδ_3s()(I-P__k+1)Z^k_ + γ/√(m)^∗_
≤ (1-γ+γδ_3s()) P__k+1Z^k_ + (1-γ+γδ_3s())(I-P__k+1)Z^k_ + γ/√(m)^∗_
≤ √(2)(1-γ+γδ_3s())Z^k_ + √(2dTσ^2/m).
This implies Z^k+1_F≤ 2√(2)(1-γ+γδ_3s())Z^k_ + √(8dTσ^2/m). Consequently, with (1-1/(2√(2)))/(1-δ_3s)<γ<1, i.e. 0<2√(2)(1-γ+γδ_3s())<1, we have:
Θ^∗-Θ^(k)_
≤ [2√(2)(1-γ+γδ_3s())]^kZ^0_ + {[2√(2)(1-γ+γδ_3s())]+[2√(2)(1-γ+γδ_3s())]^2+⋯
+[2√(2)(1-γ+γδ_3s())]^k}√(8dTσ^2/m)
≤ [2√(2)(1-γ+γδ_3s())]^kZ^0_ + 2√(2)/[1-2√(2)(1-γ+γδ_3s())]·√(dTσ^2/m)
In particular, when γ=1, with the result in Theorem <ref>,
Θ^∗-Θ^(k)_
≤ (2√(2)δ_3s())^kZ^0_ + 2√(2)/[1-2√(2)δ_3s()]·√(dTσ^2/m)
≤ ((192(a+1)s-32)/(3m)·log(6s/ϵ))^k/2Z^0_ + 2√(2)σ/[1-√((192(a+1)s-32)/(3m)·log(6s/ϵ))]·√(dT/m).
If k is large enough, i.e. with k→∞, we have
Θ^∗-Θ^(k)_≤ O(√(σ^2dT/m)).
§.§ Proof of Theorem <ref>
We will draw on a similar result, namely Lemma 15 in <cit.>, and adopt their methodologies to prove Theorem <ref>.
We begin the proof by defining f () - ^∗^∗_^2 for short. Since ^∗ minimizes f (), it follows from the first-order condition f/=0 that ^∗=^∗^∗^⊤. Therefore for any , we have ^∗^∗^⊤-^∗^∗_^2≤-^∗^∗_^2. Moreover,
^∗^∗^⊤-^∗^∗_^2 = ^∗^∗(^⊤-)_^2
= ^∗^∗_^⊤__^2
= ^∗^∗_^⊤_^2
= tr(^∗^⊤^∗^∗_^⊤_^∗^⊤)
≥λ̅_s(^∗^⊤^∗)^∗_^⊤_^2
≥λ̅_s(^∗^∗^⊤)^∗_^⊤_2^2.
In the above inequalities, λ̅_s (A) represents the s-th largest eigenvalue of a given matrix A. It is worth noting that the second-to-last inequality follows from the property that for any positive semi-definite matrices M and N, (MN)≥σ_min(M) (N). Let Θ^(k) and ^(k) be the output obtained by after k iterations. As λ̅_s(^∗^∗^⊤)=λ_sT, it follows that:
sin∠(^(k),^∗) = ^∗_k^⊤_2 ≤Θ^∗-Θ^(k)_/√(λ_s T)≤Θ^∗-Θ^(k)_/√(L_s T).
Combining this result with (<ref>), we immediately have (<ref>).
§ MORE EXPERIMENTAL RESULTS AND DETAILS OF SIMULATED DATA EXPERIMENTS
In this part, we show more numerical results and give the details of all the experiments in Section <ref>.
§.§ More Experimental Results
We previously demonstrated the trend of metrics' evolution with varying T for cases where m=25 and m=5, depicted in Figures <ref> and <ref>. Now, we extend our analysis to the scenario where m=10, as illustrated in Figure <ref>. Similar to our previous observations, our proposed method continues to outperform others under these conditions. Moreover, , , and exhibit strong performance when T is relatively large. This underscores the importance of having both sufficiently large values of m and T to harness the effectiveness of these three methods.
In Figure <ref>, we present results identical to those shown in Figure <ref>, with the sole exception of setting σ=0. Notably, , , , and achieve remarkably low error rates, aligning with our expectations. In contrast, underperforms in this scenario, while , , and exhibit even poorer performance. These findings are in line with the outcomes depicted in Figure <ref>.
We also investigated a more challenging scenario with s=25. Figure <ref> and Figure <ref> illustrate the evolution of the metrics with respect to T for cases when m=40 and m=25. The comparison of results closely resembles that of the s=5 scenario with m=10 and m=5. These findings suggest that the methods can effectively address such challenges even as s increases.
Figure <ref> displays the same experiment as shown in Figure <ref>, except for T=1600. The results in these two scenarios are notably similar.
The above experiments investigate the influence of both m and T on the results. These two parameters determine the amount of data utilized to address the problems. All of these experiments consistently demonstrate that our method, , requires less data to solve the same problem and consistently achieves superior results under the same conditions.
We have presented results pertaining to the parameter σ in Figure <ref>. Figure <ref> illustrates the scenario when m=10 and T=1600, while Figure <ref> covers the case when m=5 and T=6400. Notably, , , and are omitted in Figure <ref> because they cannot effectively function when m=5. In Figure <ref>, the results closely resemble those in Figure <ref>, with the exception that exhibits less effectiveness. This observation aligns with the findings in Figures <ref> and <ref>. The outcomes presented in Figure <ref> underscore that our method, , stands out as the sole approach capable of performing effectively in scenarios with extremely limited data represented by small values of m.
Figure <ref> serves as an illustrative example of how the metrics evolve with iterations and time. To provide further insights, we present three additional examples under the same conditions of m=25 and T=400 in Figure <ref>. Furthermore, in Figure <ref>, we replicate the experiments with m=10 and T=3200. Notably, exhibits suboptimal performance in the second example, signifying its instability when dealing with small values of m. Collectively, these experiments underscore that our method, , not only delivers effective results but also achieves efficiency by necessitating fewer iterations and less time to yield favorable outcomes.
§.§ Experimental Details
In all the experiments, we generate the true task-invariant matrix ^∗ by first QR factorizing of a d × d matrix with elements sampled i.i.d. from the standard normal 𝒩 (0, 1), and then retrieving the first s columns. The elements of each task-varying coefficient vector _i^∗ are also sampled i.i.d. from the standard normal 𝒩 (0,1). For all methods except , initialization is based on random draws from the standard normal as well. In contrast, initializes _i^(0) as the zero vector for all i ∈ [T]. For the algorithm , a hyperparameter λ called the “regularization coefficient” is determined as λ = σ/T√(T + d^2 / m/mT), following the recommendation in <cit.>. When implementing the and algorithms, as in <cit.> and <cit.>, we use the L-BFGS algorithm <cit.> within the scipy package. Moreover, we employ the autograd package for gradient computations in these two methods.
The choices of step sizes and maximum iteration numbers for our proposed , as well as , , and , are determined empirically. These parameters are set differently for various experiments. In Figure <ref>, the step sizes for and are 0.25 and 1.0, respectively, while the inner and outer loop step sizes for are both 0.5. , , and are run for 40, 200, 20, 200 iterations, respectively.
In Figure <ref>, Figure <ref>, Figure <ref> and Figure <ref>, the step sizes for and are 0.05 and 0.2, respectively, while the inner and outer loop step sizes for are both 0.1. , , and are run for 200, 1000, 100, 800 iterations, respectively.
In Figure <ref>, the step sizes for and are 0.5 and 1.0, respectively, while the inner and outer loop step sizes for are both 0.5. , , and are run for 300, 200, 100, 400 iterations, respectively.
In Figure <ref> and <ref>, the step sizes for and are 0.5 and 1.0, respectively, while the inner and outer loop step sizes for are both 0.5. , , and are run for 20, 50, 20, 50 iterations, respectively.
In Figure <ref> and Figure <ref>, the step sizes for and are 0.1 and 0.4, respectively, while the inner and outer loop step sizes for are both 0.2. , , and are run for 100, 500, 50, 400 iterations, respectively.
In Figure <ref>, the step sizes for and are 0.25 and 1.0, respectively, while the inner and outer loop step sizes for are both 0.5. , , and are run for 1000, 200, 100, 400 iterations, respectively. In addition, the regularization coefficient for is set to 1 × 10^-5 since the best value is zero when it follows the formulation in <cit.>.
In Figure <ref>, the step sizes for and are 0.3 and 0.4, respectively, while the inner and outer loop step sizes for are both 0.2. , , and are run for 15000, 500, 400, 500 iterations, respectively.
In Figure <ref>, the step sizes for and are 0.2 and 0.4, respectively, while the inner and outer loop step sizes for are both 0.3. , , and are run for 40, 100, 40, 100 iterations, respectively.
To enhance the reliability of our results, all the results in Figure <ref> to Figure <ref> and Figure <ref> to Figure <ref> are averaged over 5 trials. The results presented in Figure <ref> and Table <ref> are derived from experimenting with different T values for different m, approximated to the nearest hundred or ten, and represent the average error over 5 trials for each scenario.
Impact of survey spatial variability on galaxy redshift distributions and the cosmological $3\times2$-point statistics for the Rubin Legacy Survey of Space and Time (LSST)
Qianjun Hang, Benjamin Joachimi, Eric Charles, John Franklin Crenshaw, Patricia Larsen, Alex I. Malz, Sam Schmidt, Ziang Yan, Tianqing Zhang, the LSST Dark Energy Science Collaboration
http://arxiv.org/abs/2409.02501v1 (astro-ph.CO)
§ ABSTRACT
We investigate the impact of spatial survey non-uniformity on the galaxy redshift distributions for forthcoming data releases of the Rubin Observatory Legacy Survey of Space and Time (LSST). Specifically, we construct a mock photometry dataset degraded by the Rubin OpSim observing conditions, and estimate photometric redshifts of the sample using a template-fitting photo-z estimator, BPZ, and a machine learning method, FlexZBoost. We select the Gold sample, defined as i<25.3 for 10 year LSST data, with an adjusted magnitude cut for each year and divide it into five tomographic redshift bins for the weak lensing lens and source samples. We quantify the change in the number of objects, mean redshift, and width of each tomographic bin as a function of the coadd i-band depth for 1-year (Y1), 3-year (Y3), and 5-year (Y5) data. In particular, Y3 and Y5 have large non-uniformity due to the rolling cadence of LSST, hence provide a worst-case scenario of the impact from non-uniformity. We find that these quantities typically increase with depth, and the variation can be 10-40% at extreme depth values.
Based on these results and using Y3 as an example, we propagate the variable depth effect to the weak lensing 3×2pt data vector in harmonic space. We find that galaxy clustering is most susceptible to variable depth, causing significant deviations at large scales if not corrected for, due to the depth-dependent number density variations. For galaxy-shear and shear-shear power spectra, we find little impact given the expected LSST Y3 noise.
cosmology: observations – techniques: photometric – large-scale structure of Universe
§ INTRODUCTION
Observational cosmology has entered the era of high-precision measurements. For example, weak gravitational lensing, which probes the small distortion of distant galaxy shapes due to the gravity of foreground large-scale structures, is particularly sensitive to the clustering parameter S_8=σ_8√(Ω_ m/0.3). Current weak lensing surveys have measured this parameter to be S_8=0.759^+0.024_-0.021 by the Kilo-Degree Survey <cit.>, S_8=0.759^+0.025_-0.023 by the Dark Energy Survey <cit.>, and S_8=0.760^+0.031_-0.034 (S_8=0.776^+0.032_-0.033) using the shear power spectra (two-point correlation function) by the Hyper Suprime-Cam <cit.>. The constraints are comparable to those measured by <cit.> from the primary cosmic microwave background (CMB), S_8=0.830±0.013, and the recent result from CMB lensing <cit.>, S_8=0.840±0.028, but are interestingly lower by 2-3σ.
The uncertainties of these measurements are already dominated by systematic errors: without a careful treatment of various systematic effects, the cosmological results can be biased by up to a few sigma <cit.>.
The forthcoming Stage IV surveys, such as the Rubin Observatory Legacy Survey of Space and Time (LSST), will achieve a combined Figure of Merit ten times that of the Stage III experiments mentioned above <cit.>.
While the high statistical power enables pinning down the nature of such tensions, systematic errors need to be controlled to the sub-percent level to ensure that the results are not biased.
One major source of systematic uncertainty is survey non-uniformity. Galaxy samples detected at different survey depths, for example, will have different flux errors and different numbers of faint objects near the detection limit. This can propagate into systematic errors in the redshift distributions and number density fluctuations.
The majority of the LSST footprint will follow the wide-fast-deep (WFD) observing strategy, which means that a large survey region will be covered before building up the survey depth.
At early stages of the survey, fluctuations in observing conditions, such as sky brightness, seeing, and air mass, are expected to be significant across the footprint. These can change the per-visit 5σ limiting magnitude, m_5, leading to depth non-uniformity in the early LSST data <cit.>.
The survey strategy later on could also affect uniformity. LSST will adopt a `rolling cadence', which means that during a fixed period, more frequent revisits will be assigned to a particular area of the sky, whereas the rest of the regions are deprioritized by up to 25% of the baseline observing time. The high- and low-priority regions continue to swap, such that the full footprint is covered with the same exposure time after ten years.
This can result in different limiting magnitudes across the sky at intermediate stages of rolling.
This strategy greatly advances LSST's potential for time domain science through, e.g., denser sampling of light curves. However, it also poses challenges to the analysis of large-scale structure (LSS) probes, which normally prefer uniform coverage.
Changes in m_5 can change the detected sample of galaxies and its photometric redshifts in two ways. Firstly, a larger m_5 means that fainter, higher redshift galaxies will pass the detection limit.
This increases the sample size, and could shift the ensemble mean redshift higher.
These faint galaxies also contain large photometric noise, resulting in larger scatter with respect to the true redshift, hence broadening the redshift distribution.
Secondly, at fixed magnitude, the signal-to-noise is larger given a larger m_5. This means that, contrary to the previous effect, the scatter in spec-z vs photo-z will be reduced due to the reduced noise.
These effects have been studied previously in similar contexts.
The density fluctuation is quantified in <cit.> via 1+δ_ o=(1+δ_ t)(1+δ_ OS), where δ_ o is the observed density contrast, δ_ t is the true density, and δ_ OS is the fluctuation in the observing condition.
The effects on photo-z have been investigated in <cit.> in the context of LSST. They showed that the photo-z quality can change significantly with respect to different observing conditions, although they did not consider tomographic binning.
<cit.> also quantified the effects for KiDS-1000 data, where the depth varies significantly between different pointings. They showed that by varying the r-band limiting magnitude, a significant number of high-redshift objects can be included in the sample, such that the mean number density can double between the deepest and shallowest pointings, and the average redshift for a tomographic bin can shift by as much as Δ⟨ z⟩∼ 0.2.
Understanding these effects is important, because weak lensing is particularly sensitive to the mean redshift of the lens and source galaxies. <cit.> demonstrated that this effect is similar to a spatially varying multiplicative bias, and for cosmic shear analysis in configuration space, constraints in the Ω_ m-σ_8 plane can shift by up to ∼ 1σ for a KiDS-like survey with the same area as LSST.
<cit.> also derived an analytic expression for anisotropic redshift distributions for galaxy and lensing two-point statistics in Fourier space. They showed that, assuming a spatial variation of scale ℓ_z, the effects are at percent and sub-percent level for the current and forthcoming galaxy surveys, and converge to the uniform case at ℓ≫ℓ_z.
In this paper, we investigate how survey non-uniformity can affect the redshift distribution of tomographic bins for LSST 1-year, 3-year, and 5-year observation (hereafter Y1, Y3, and Y5 respectively).
The LSST Dark Energy Science Collaboration (DESC) Science Requirements Document <cit.> states that the photometric redshifts need to achieve a precision of ⟨Δ z ⟩ = 0.002(1+z) (0.001(1+z)) for Y1 (Y10) weak lensing analysis, and ⟨Δ z ⟩ = 0.005(1+z) (0.003(1+z)) for Y1 (Y10) large-scale structure analysis.
Here, using these numbers as a benchmark, we quantify changes in the mean redshift (⟨ z ⟩) and width (σ_z) of tomographic bins as depth varies.
[Notice that the DESC SRD also provides requirements on the photometric redshift scatter of the full, unbinned sample, σ_Δ z. For weak lensing, this is σ_Δ z=0.006(1+z) (0.003(1+z)) for Y1 (Y10); for large-scale structure analysis, this is σ_Δ z=0.1(1+z) (0.03(1+z)) for Y1 (Y10). Because we do not try to optimize the photometric redshift estimation in this paper, we do not compare our results with the DESC SRD σ_Δ z values.]
We use the up-to-date LSST observing strategy and the simulated 10-year observing conditions for Rubin Observatory <cit.> to quantify the survey non-uniformity, and generate a mock catalogue of true galaxy magnitude in ugrizy, redshift, and ellipticity based on the Roman-Rubin (DiffSky) simulations <cit.>.
Both the photometric degradation and the photo-z estimation rely on the public software Redshift Assessment Infrastructure Layers[<https://github.com/LSSTDESC/RAIL>] <cit.>, which will also be used in the LSST analysis pipeline.
Finally, we propagate these effects to the clustering and weak lensing two-point statistics.
This paper is organized as follows. We describe our simulation datasets in Section <ref> and introduce our methods in Section <ref>. The results are presented in Section <ref>.
We show the variation of the angular power spectra with varying depth effects in Section <ref>.
Finally, we conclude in Section <ref>.
§ SIMULATIONS
This section provides an overview of the simulations used in this work, namely, the Rubin Operation Simulator (OpSim; Section <ref>), which simulates the observing strategy and related properties for Rubin LSST, and the Roman-Rubin simulation (DiffSky; Section <ref>), which provides a truth catalogue complete up to z=3 with realistic galaxy colours.
§.§ Rubin Operations Simulator (OpSim)
The Operations Simulator[<https://rubin-sim.lsst.io/>] (OpSim) of the Rubin Observatory is an application that simulates the telescope movements and a complete set of observing conditions across the LSST survey footprint over the 10-year observation period, providing predictions for the LSST performance with respect to various survey strategies.
We utilise OpSim baseline v3.3, the most recent observing strategy. This strategy involves a rolling cadence that starts after the first year of observation. In subsequent years, parts of the sky will receive more visits than others, enabling higher resolution sampling for time domain science. At the end of the fiducial survey, uniformity will be recovered at the expected 10-year LSST depth.
The output of OpSim is evaluated by the Metrics Analysis Framework (MAF), a software tool that computes summary statistics (e.g. mean and median of a particular observing condition over a given period) and derived metrics (e.g. coadd 5σ depth) that can be used to assess the performance of the observing strategy, in terms of survey efficiency and various science drivers.
For the purpose of this study, we obtain survey condition maps in HEALPix <cit.> format using the MAF HEALPix slicer with N_ side=128 (corresponding to a pixel size of 755 arcmin^2), in (RA, Dec) coordinates. We do not choose a higher resolution for the maps because we expect survey conditions to vary smoothly on large scales, and this choice of N_ side is enough to capture the variation associated with the rolling pattern.
For our purposes, we mainly consider the following quantities in each of the ugrizy filters: extinction-corrected coadd 5σ point source depth (, hereafter m_5^ ex) and the effective full width half maximum seeing (, hereafter θ_ FWHM^ eff) in unit of arcsecond.
The m_5^ ex differs from the coadd depth, m_5, in that it includes the loss of depth near the Galactic plane.
The effective seeing, θ_ FWHM^ eff, is wavelength dependent, with poorer seeing in bluer filters due to Kolmogorov turbulence.
The MAF also takes into account the increase in PSF size with airmass, X, due to seeing, i.e. θ_ FWHM^ eff∝ X^0.6. However, the MAF does not include the increase in PSF size along the zenith direction with zenith angle due to differential chromatic refraction.
This quantity is used here to convert point-source depth to that for extended objects.
We obtain maps of these quantities over the LSST footprint at the end of each full year of observation (e.g. Y3 for nights<1095).
The coadded depth in each band is computed by assessing the 5σ-depth (in magnitudes) of each visit within each HEALPix pixel, then computing the `stacked' depth.
Maps of θ_ FWHM^ eff contain the median over all visits in a particular HEALPix pixel.
Throughout the paper, we will use Y1, Y3, and Y5 as examples to showcase the impact of spatial variability on photometric redshifts.
Notice that the choice of Y3 and Y5 is a pessimistic one, because the survey strategy is close to uniformity in Y4 and Y7, when cosmological analyses are expected to be conducted. Hence, this paper provides a worst-case scenario of the severity of the impact from spatial variability.
Also, the Rubin observing strategy is still being decided, and the rolling cadence may move to different times during the survey.
There are ongoing efforts on recommendations about the observing strategy, and hence the results shown here should be interpreted in light of this particular strategy and years chosen.
We will focus on the wide-fast-deep (WFD) survey program footprint, and exclude areas with high galactic extinction E(B-V)>0.2 for cosmological studies. Notice that, in practice, additional sky cuts could also be applied (e.g. a depth cut that removes very shallow regions).
Specifically, we will focus on the variation with respect to i-band, the detection band of LSST.
Figure <ref> shows the spatial variation of the extinction-corrected coadd i-band depth for OpSim baseline v3.3 in Y1, Y3, and Y5. The stripes visible across the footprint in Y3 and Y5 are the characteristics of the rolling cadence.
The distribution of all OpSim variables are shown in Fig. <ref> for each of the six filters and for selected years of observation. One can see that the coadd depths build up in each band over the years, whereas the distributions of the median effective seeing per visit are relatively unchanged. One can also see a strong skewness in these distributions.
§.§ Roman-Rubin simulation (DiffSky)
In order to investigate the impact of varying survey conditions on photo-z for LSST, we need a simulated truth catalogue that is complete to beyond the LSST 10-year depth and realistic in colour-redshift space. For this purpose, we use the joint Roman-Rubin simulation v1.1.3. This simulation is an extension of the effort in <cit.>, but with many improvements, including self-consistent, flexible galaxy modelling. The simulation is based on its precursor, CosmoDC2 <cit.>, a synthetic sky catalogue out to z=3 built from the `Outer Rim' N-body cosmological simulation <cit.>. The N-body simulation contains a trillion particles with a box size of (4.225 Gpc)^3.
The galaxies are simulated with Diffsky[<https://github.com/LSSTDESC/lsstdesc-diffsky>], based on two differentiable galaxy models: Diffstar <cit.> and Differentiable Stellar Population Synthesis <cit.>.
Using Diffstar, one can build a parametric model that links galaxy star formation history with physical parameters in halo mass assembly. Then, with DSPS, one can calculate the SED and photometry of a galaxy as a function of its star formation history, metallicity, dust, and other properties.
The advantage of this galaxy model is that the distribution in colour-redshift is smooth and more realistic compared to that in CosmoDC2. This is thanks to the separate modeling for different galaxy components, i.e. bulge, disk, and star-forming regions. The SEDs built from these different components with different stellar populations make the colours more realistic for photo-z estimation.
We randomly subsample the full simulated catalogue to N=10^6 objects complete to i<26.5 as our truth sample. For each object, we obtain its magnitude in the six LSST bands, true redshift, bulge size s_b, disk sizes s_d, bulge-to-total ratio f_b, and ellipticity e.
We obtain the galaxy semi-major and semi-minor axes, a,b via a=s/√(q) and b= s√(q), where s is the weighted size of the galaxy, s= s_bf_b + s_d(1-f_b), and q is the ratio between the major and minor axes, related to ellipticity via q=(1-e)/(1+e).
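A minimal sketch of this conversion is given below; it simply evaluates the relations above, and the argument names are ours.

```python
import numpy as np

def semi_axes(s_bulge, s_disk, f_bulge, ellipticity):
    """Semi-major and semi-minor axes from the weighted galaxy size and
    the ellipticity-based axis ratio, as defined above."""
    s = s_bulge * f_bulge + s_disk * (1.0 - f_bulge)   # weighted size
    q = (1.0 - ellipticity) / (1.0 + ellipticity)      # axis ratio b/a
    return s / np.sqrt(q), s * np.sqrt(q)              # (a, b)
```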
One caveat of the current sample is that, at z>1.5, there is an exaggerated bimodal distribution in the g-r colour and redshifts, which is not found in real galaxy data. As a result, the bluest objects in the sample are almost always found at high redshifts. This could be due to the high-redshift SPS models being less well constrained. One direct consequence of this is that, when training a machine learning algorithm to estimate the photo-z, the high-redshift performance may be too optimistic due to this colour-space clustering.
§ METHODS
This section describes our methodology for generating a mock LSST photometry catalogue for Y1, Y3, and Y5, applying photometric redshift estimation algorithms, and defining metrics to assess the impact of variable depth. Specifically, we describe the degradation process using the LSST error model in Section <ref>, the two photo-z estimators, BPZ and FlexZBoost, in Section <ref>, the tomographic binning strategy in Section <ref>, and the relevant metrics in Section <ref>.
§.§ Degradation of the truth sample
Given a galaxy with true magnitudes m_ t={ugrizy} falling in a HEALPix pixel within the footprint, we `degrade' its magnitude with observing conditions associated with that pixel, and assign a set of `observed' magnitudes m_ o and the associated magnitude error σ_m, o, using the following procedure: (1). Apply galactic extinction. (2). Compute the point-source magnitude error for each object in each filter, using the LSST error model detailed in <cit.>. (3). Compute the correction to obtain the extended-source magnitude errors. (4). Sample from the error and add it to the true magnitudes. Steps (2) - (4) are carried out using the python package [<https://github.com/jfcrenshaw/photerr/tree/main>] <cit.>. We detail each step below.
Firstly, we apply the galactic extinction to each band with the E(B-V) dust map <cit.> via:
m_ dust=m+[A_λ/E(B-V)]E(B-V),
where for each of the six LSST filters we adopt [A_λ/E(B-V)]={4.81,3.64,2.70,2.06,1.58,1.31}.
Then, we utilize the LSST error model <cit.> to compute the expected magnitude error, σ_m, per band.
The magnitude error is related to the noise-to-signal ratio, nsr, via:
σ_m = 2.5log_10(1+ nsr).
The total nsr consists of two components:
nsr^2 = nsr_ sys^2+ nsr_ rand, ext^2,
where nsr_ sys is the systematic error from the instrument read-out and nsr_ rand is the random error arising from observing conditions on the sky, for extended objects.
Notice that in the high signal-to-noise limit where nsr≪ 1, σ_m∼ nsr, and Eq. <ref> recovers the form in <cit.>. Throughout the paper, we set nsr_ sys≈σ_ sys=0.005, which corresponds to the maximum value allowed from the LSST requirement.
For point sources, the random component of nsr is given by
nsr_ rand, pt^2=(0.04-γ)x+γ x^2,
where γ is a parameter that depends on the system throughput. We adopt the default values from <cit.>, γ={0.038, 0.039, 0.039, 0.039, 0.039, 0.039} for ugrizy. x is a parameter that depends on the magnitudes of the object, m, and the corresponding coadd 5σ depth, m_5, in that band:
log_10 x ≡ 0.4 (m-m_5).
For extended sources, we adopt the expression in <cit.>, where the nsr receives an additional factor related to the ratio between the angular size of the object and that of the PSF:
nsr_ rand, ext = nsr_ rand, pt√(A_ ap/A_ psf).
Here,
A_ psf=πσ_ psf^2, σ_ psf = θ_ FWHM^ eff/2.355,
where θ_ FWHM^ eff is the effective FWHM seeing (it is linked to the seeing by θ_ FWHM^ eff = θ_ FWHM X^0.6, where X is the airmass) for a given LSST band. The aperture angular size of the object is given by
A_ ap =π a_ apb_ ap,
a_ ap = √(σ_ psf^2+(2.5a)^2),
b_ ap = √(σ_ psf^2+(2.5b)^2),
where a, b are the galaxy semi-major and minor axis.
We make one modification to Eq. <ref>, where we replace the denominator by the mean PSF area, √(⟨ A_ psf⟩), averaged over pixels in the i-band quantiles which we will elaborate shortly.
In the approximation that nsr_ rand,pt∝ x, the point-source noise is then proportional to θ_ FWHM^ eff (see Eq. <ref>), and so for the extended-source noise, θ_ FWHM^ eff cancels and Eq. <ref> effectively changes the dependence of m_5 on PSF size to that on the extended aperture size. However, in this work, we utilize the median seeing, for which the cancellation may not be exact.
Naively taking Eq. <ref> could lead to unrealistic cases, where, at fixed depth, nsr_ rand,ext increases with a better seeing. We have tested both scenarios, i.e., using individual A_ psf or the mean ⟨ A_ psf⟩ in Eq. <ref>, and find negligible difference for our main conclusion in the i-band quantiles. However, it does make a significant difference if one were to bin the samples by quantiles of seeing, as investigated in Appendix <ref>.
To obtain the observed magnitudes m_ o, we degrade in flux space by adding a random noise component Δ f to the reddened flux f_ dust of the object, drawn from a normal distribution with zero mean and standard deviation set by the noise-to-signal ratio, Δ f∼𝒩(0, ( nsr· f_ dust)^2). Here, nsr is computed by setting m=m_ dust in Eq. <ref>. The flux and magnitude are converted back and forth via
m_k=-2.5log_10f_k, k={ dust, o}.
Negative fluxes are set as `non-detection' in that band. The corresponding magnitude error σ_m, o is computed using Eq. <ref> and setting m=m_ o in Eq. <ref>, such that the reported error is decorrelated from the true magnitude and depends only on the observed magnitude, as is the case for real data.
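As an illustration, the sketch below is a simplified, stand-alone NumPy version of this degradation chain for a single object and band. It follows the equations above rather than the actual photerr implementation: it uses the per-pixel PSF area in Eq. <ref> (not the quantile-averaged one), and it assumes that the Gaussian flux noise has standard deviation nsr× f_ dust.

```python
import numpy as np

# Per-band constants quoted in the text (ugrizy order); treat as assumptions.
GAMMA = dict(zip("ugrizy", [0.038, 0.039, 0.039, 0.039, 0.039, 0.039]))
A_EBV = dict(zip("ugrizy", [4.81, 3.64, 2.70, 2.06, 1.58, 1.31]))
NSR_SYS = 0.005  # maximum systematic noise-to-signal ratio allowed by LSST

def nsr_extended(m, m5, theta_fwhm_eff, a, b, band):
    """Total noise-to-signal ratio for an extended source, following the
    point-source error model plus the aperture-to-PSF area correction."""
    x = 10.0 ** (0.4 * (m - m5))
    gamma = GAMMA[band]
    nsr_rand_pt2 = (0.04 - gamma) * x + gamma * x**2
    sigma_psf = theta_fwhm_eff / 2.355
    A_psf = np.pi * sigma_psf**2
    A_ap = np.pi * (np.sqrt(sigma_psf**2 + (2.5 * a) ** 2)
                    * np.sqrt(sigma_psf**2 + (2.5 * b) ** 2))
    nsr_rand_ext2 = nsr_rand_pt2 * (A_ap / A_psf)
    return np.sqrt(NSR_SYS**2 + nsr_rand_ext2)

def degrade_band(m_true, ebv, m5, theta_fwhm_eff, a, b, band, rng):
    """Degrade one object in one band: redden, perturb the flux, and return
    the observed magnitude and its error (NaN, NaN flags a non-detection)."""
    m_dust = m_true + A_EBV[band] * ebv
    nsr = nsr_extended(m_dust, m5, theta_fwhm_eff, a, b, band)
    f_dust = 10.0 ** (-0.4 * m_dust)
    # Assumed convention: Gaussian flux noise with standard deviation nsr * flux.
    f_obs = f_dust + rng.normal(0.0, nsr * f_dust)
    if f_obs <= 0:
        return np.nan, np.nan
    m_obs = -2.5 * np.log10(f_obs)
    # Recompute the error from the observed magnitude, as described above.
    nsr_obs = nsr_extended(m_obs, m5, theta_fwhm_eff, a, b, band)
    sigma_m_obs = 2.5 * np.log10(1.0 + nsr_obs)
    return m_obs, sigma_m_obs
```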
To focus on the trend in the depth variation in the detection band, we subdivide pixels in the survey footprint into 10 quantiles in i-band m_5^ ex, where the first quantile (qtl=0) contains the shallowest pixels, and the last quantile (qtl=9) contains the deepest.
Table <ref> shows the mean and standard deviation of each i-band depth quantile. We also show in Table <ref> the mean and standard deviation of all other survey condition maps used in the analysis in each of the i-band depth quantiles.
Within each quantile, we randomly assign each galaxy to a HEALPix pixel in that quantile, with its associated observing conditions { E(B-V), m_5^ ex, θ_ FWHM^ eff} on that pixel for each LSST band, from the OpSim MAF maps. Then, we carry out the above degradation process to our truth sample.
On average, each pixel within each quantile is assigned 121 galaxies.
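A possible implementation of the quantile split and the random pixel assignment is sketched below, assuming the extinction-corrected i-band depth map and a boolean footprint mask are given as arrays indexed by HEALPix pixel; the function names are ours.

```python
import numpy as np

def depth_quantile_pixels(m5_map, footprint_mask, n_qtl=10):
    """Split footprint pixels into n_qtl quantiles of the i-band coadd depth;
    return a list of HEALPix pixel-index arrays, shallowest quantile first."""
    pix = np.where(footprint_mask)[0]
    depth = m5_map[pix]
    edges = np.quantile(depth, np.linspace(0.0, 1.0, n_qtl + 1))
    labels = np.clip(np.digitize(depth, edges[1:-1]), 0, n_qtl - 1)
    return [pix[labels == q] for q in range(n_qtl)]

def assign_galaxies(n_gal, quantile_pixels, rng):
    """Within each quantile, randomly assign every galaxy of the truth sample
    to one of the quantile's pixels (whose observing conditions it inherits)."""
    return {q: rng.choice(pix, size=n_gal, replace=True)
            for q, pix in enumerate(quantile_pixels)}
```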
Notice that there are many other parameters that could affect the photometric errors, e.g. sky background, exposure time, and atmospheric extinction. Following <cit.>, because these quantities only contribute towards m_5, we do not include them otherwise in the degradation, and assume that m_5^ ex completely captures their variation.
Additionally, we explore the relation between m_5 and these extended quantities using OpSim in Appendix <ref>, and we explore the galaxy redshift distribution dependence with other survey properties in Appendix <ref>.
Finally, we apply an i-band magnitude cut corresponding to the LSST Gold sample selection on the degraded catalogue. For the full 10-year sample this is defined as i<25.3. For data with an observation period of N_ yr years, we adjust the gold cut to i_ lim=25.3 + 2.5log_10(√(N_ yr/10)).
Thus for Y1, Y3, and Y5, we adopt the following gold cuts respectively: i_ lim=24.0, 24.6, 24.9.
Notice that this is slightly shallower than the definition in the DESC SRD, where the Gold cut is defined as one magnitude shallower than the median coadd m_5. This is due to the fact that OpSim baseline v3.3 has a slightly deeper i-band depth in early years compared to previous expectations. For Y1, the median i-band m_5^ ex is ∼ 25.2, making the DESC SRD Gold cut 0.2 mag deeper than what we adopt here.
Additionally, for our fiducial sample, we also apply a signal-to-noise cut in i-band: SNR=1/ nsr≥10, although we also look at the case with the full sample. This cut is motivated by the selection of the source sample, where shape measurements typically require a high SNR detection in i-band. In this work, we apply this cut to both the weak lensing and clustering samples.
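These two selections amount to the short helper below (an illustrative sketch of the formula above, not code from the analysis pipeline).

```python
import numpy as np

def gold_magnitude_limit(n_yr, i_lim_10yr=25.3):
    """Adjusted Gold-sample i-band limit for an n_yr-long survey."""
    return i_lim_10yr + 2.5 * np.log10(np.sqrt(n_yr / 10.0))

def select_gold(i_obs, nsr_i, n_yr, snr_min=10.0):
    """Boolean mask combining the Gold magnitude cut and the i-band SNR cut."""
    return (i_obs < gold_magnitude_limit(n_yr)) & (1.0 / nsr_i >= snr_min)
```

Evaluating the limit for N_ yr=1, 3, 5 gives 24.05, 24.65, and 24.92, which correspond to the rounded values adopted above.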
§.§ Photo-z estimators
Methods for photometric redshift estimation can be broadly divided into two main categories: template-fitting and machine learning. Template fitting methods assume a set of SED templates for various types of galaxies, and use these to fit the observed magnitudes of the targets. Machine Learning methods, on the other hand, use machine learning algorithms trained on a reference sample, to infer the unknown target redshifts.
See <cit.> for a review and comparison of the performance of various photo-z estimators in the context of Rubin LSST. In this work, we adopt two algorithms with reasonable performance, a template fitting method, BPZ (Bayesian Photometric Redshifts), and a machine learning method, FlexZBoost. In this work, before applying these redshift estimators, all observed magnitudes are de-reddened, by applying the inverse of Eq. <ref>.
§.§.§ BPZ (Bayesian Photometric Redshifts)
BPZ <cit.> is a template-based photometric estimation code. Given a set of input templates 𝐭, BPZ computes the joint likelihood P(z,𝐭) for each galaxy with redshift z. A prior P(z,𝐭|m) is included based on the observed magnitude of the galaxy m. For example, the prior restricts bright, elliptical galaxies to lower redshifts.
For each galaxy, a likelihood P(z,𝐭|c, m) given the galaxy's colour c and magnitude is computed, and by marginalising over the templates, one obtains the per-object redshift probability P(z).
We use the RAIL interface of the BPZ algorithm, with the list of spectral energy distribution (SED) templates adopted in <cit.>: the CWW+SB4 set introduced by <cit.>, the El, Sbc, Scd & Im from <cit.>, the SB2 & SB3 from <cit.>, and the 25Myr & 15Myr `SSP' from <cit.>. We set the primary observing band set to i-band, and adopt the prior from the original BPZ paper <cit.>, which was used to fit data from the Hubble Deep Field North <cit.>. Notice that these set of SEDs may be different from that in the Roman-Rubin simulation, and the prior distributions may not match exactly.
The prior mismatch would mainly affect samples with low signal-to-noise, whose posteriors are prior-dominated. For the Gold sample considered in this paper, the impact of the prior on the mean difference and scatter of the true and photometric redshifts is expected to be small, although galaxies with broad or bimodal posteriors may end up having a different point estimate (e.g. mode), hence the outlier rate could be slightly higher. We do not include extra SED templates here.
Additionally we compute the odds parameter, defined as
odds = ∫_z_ mode-Δ z^z_ mode+Δ zP(z) dz,
where z_ mode is the mode of P(z), and Δ z =ϵ (1+z_ mode) defines an interval around the mode to integrate P(z). The maximum value of odds is 1, which means that the probability density is entirely enclosed within the integration range around the mode, whereas a small odds means that the probability density is diffuse given the range.
Hence, odds quantifies the confidence of the BPZ redshift estimate, and the choice of ϵ essentially sets the criterion. The (1+z_ mode) factor accounts for the fact that larger redshift errors are expected at higher redshifts. We choose ϵ=0.06 as a nominal photo-z scatter, and we use odds as a BPZ `quality control', where a subsample is selected with odds≥0.9 for comparison with the baseline sample.
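For a posterior tabulated on a redshift grid, the odds statistic can be evaluated as in the minimal sketch below (our own illustration, not the BPZ code itself).

```python
import numpy as np

def bpz_odds(z_grid, p_z, eps=0.06):
    """odds = integral of the normalized posterior within +/- eps*(1+z_mode)
    of the posterior mode, for a posterior tabulated on z_grid."""
    p_z = p_z / np.trapz(p_z, z_grid)
    z_mode = z_grid[np.argmax(p_z)]
    dz = eps * (1.0 + z_mode)
    sel = (z_grid >= z_mode - dz) & (z_grid <= z_mode + dz)
    return np.trapz(p_z[sel], z_grid[sel])
```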
§.§.§ FlexZBoost
FlexZBoost <cit.> is a machine-learning photo-z estimator based on FlexCode <cit.>, a conditional density estimator (CDE) that estimates the conditional probability density p(y|𝐱) for the response or parameters, y, given the features 𝐱.
The algorithm uses a basis expansion of the univariate y to turn CDE into a series of univariate regression problems.
Given a set of orthonormal basis functions {ϕ_i(y) }_i, the unknown probability density can be written as an expansion:
p(y|𝐱)=∑_j β_j(𝐱)ϕ_j(y).
The coefficients β_j(𝐱) can be estimated by a training set (𝐱,y) using regression.
The advantage of FlexCode is the flexibility to apply any regression method towards the CDE.
The main hyper-parameters involved in training are the number of expansion coefficients and those associated with the regression.
<cit.> found that FZBoost was among the strongest performing photo-z estimators according to the established performance metrics.
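To illustrate the basis-expansion idea, the sketch below implements a FlexCode-style conditional density estimator with a cosine basis and scikit-learn gradient boosting as the regressor. It is not the FlexZBoost/RAIL implementation, and it assumes the response y (e.g. the redshift) has been rescaled to the interval [0,1].

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def cosine_basis(y, n_basis):
    """Orthonormal cosine basis on [0, 1]: phi_0 = 1, phi_j = sqrt(2) cos(j*pi*y)."""
    j = np.arange(n_basis)
    phi = np.sqrt(2.0) * np.cos(np.pi * np.outer(y, j))
    phi[:, 0] = 1.0
    return phi

def fit_cde(X_train, y_train, n_basis=30):
    """Fit one regression per coefficient, beta_j(x) ~ E[phi_j(y) | x]."""
    phi = cosine_basis(y_train, n_basis)
    return [GradientBoostingRegressor().fit(X_train, phi[:, j])
            for j in range(n_basis)]

def predict_cde(models, X_test, y_grid):
    """Reconstruct p(y|x) = sum_j beta_j(x) phi_j(y) on a grid; negative values
    are clipped (a proper implementation would also renormalize and sharpen)."""
    beta = np.column_stack([m.predict(X_test) for m in models])
    phi_grid = cosine_basis(y_grid, beta.shape[1])
    return np.clip(beta @ phi_grid.T, 0.0, None)
```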
In this paper, we utilise the RAIL interface of the FZBoost algorithm with its default training parameters.
We construct the training sample by randomly drawing 10% of the degraded objects from each of the deciles, and train each year separately.
Notice that this training sample is fully representative of the test data, which is not true in practice. Spectroscopic calibration samples typically have a magnitude distribution that is skewed towards the brighter end, and the selection in colour space can be non-trivial depending on the specific dataset used. Although there are methods to mitigate impacts from this incompleteness, such as re-weighting in redshift or colours <cit.>, and, more recently, using training data augmentation from simulations <cit.>, the photo-z performance is not comparable to having a fully representative sample, and one would expect some level of bias and increased scatter depending on the mitigation method adopted. Here, we are interested in whether our results on the non-uniformity impact changes significantly with an alternative photo-z algorithm. We thus leave the more realistic and sophisticated case with training sample imperfection to future work.
§.§.§ Performance
For both photo-z estimators, we use the mode of the per-object redshift probability, P(z), as the point estimate, z_ phot.
Fig. <ref> shows the scatter in spec-z and photo-z for Y1, Y3, and Y5 with BPZ and FZBoost redshifts, for the shallowest (qtl=0) and the deepest (qtl=9) quantiles in the i-band m_5^ ex respectively.
The scatter is always larger for the shallower sample in the full sample case (faint purple dots). This is expected following Eqs. <ref> and <ref>, given that the coadd depths in each band are strongly correlated. At fixed magnitude, the larger the m_5, the smaller the photometric error, hence also the smaller the scatter in photo-z. The signal-to-noise cut at SNR≥10 removes some extreme scatter as well as objects from the highest redshifts. This is more obvious for the shallowest sample compared to the deepest, due to the better signal-to-noise for the deepest sample at high redshifts.
There is a significant group of outliers in the BPZ case that are at low redshifts but are estimated to be at z>2, highlighted by the blue contours. By examining individual BPZ posteriors for this group, we find that these objects tend to have very broad or bimodal redshift distributions. This could be a result of the Oxygen line confusion between OII and OIII, and notice that the fraction of this population as well as its location can be influenced by the choice of the BPZ priors. Another possible cause is the spurious bimodal distribution in the colour-redshift space in the Roman-Rubin simulation, as mentioned in Section <ref>.
We see that after applying a strict cut with odds≥0.9, shown by the purple dashed lines enclosing 90% of the sample, the outlier populations are significantly reduced, as expected.
This cut retains 20.4% (27.7%), 25.7% (44.4%), 29.5% (44.0%) of the SNR≥10 sample in qtl=0 (9) for Y1, Y3, and Y5, respectively. We see that this cut further reduces the scatter at z_ phot∼ 1.5. FZBoost in general shows a much better performance, given that the training data is fully representative of the test data.
Table <ref> summarizes these findings for each sample via a few statistics of the distribution of the difference between photo-z and true redshifts: Δ z=(z_ phot-z_ true)/(1+z_ true). Namely, the median bias Median(Δ z), the standard deviation, the normalized Median Absolute Deviation (NMAD) σ_ NMAD=1.48 Median(|Δ z|), and the outlier fraction with outliers defined as |Δ z|>0.15.
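These summary statistics can be computed directly from the point estimates; a minimal sketch, using the definitions given above, is:

```python
import numpy as np

def point_estimate_stats(z_phot, z_true, outlier_cut=0.15):
    """Summary statistics of dz = (z_phot - z_true) / (1 + z_true)."""
    dz = (z_phot - z_true) / (1.0 + z_true)
    return {
        "median_bias": np.median(dz),
        "std": np.std(dz),
        "sigma_nmad": 1.48 * np.median(np.abs(dz)),
        "outlier_frac": np.mean(np.abs(dz) > outlier_cut),
    }
```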
§.§ Tomographic bins
In weak lensing analysis, the full galaxy catalogue is sub-divided into a `lens' sample and a `source' sample. The lens sample is often limited at lower redshifts, acting as a tracer of the foreground dark matter field which `lenses' the background galaxies. The source sample contains the background galaxies extending to much higher redshifts, whose shapes are measured precisely to construct the shear catalogue. The two samples together allow measurement of the so-called `3×2pt statistics', including galaxy clustering from the lens sample, galaxy-galaxy lensing from the lens galaxies and source shapes, and cosmic shear from the source shapes alone.
Additionally, both the lens and source samples are divided into several tomographic bins, i.e., sub-samples separated with sufficient distinction in redshifts. This further includes evolution information that improves cosmological constraints.
We adopt the Y1 tomographic bin definitions in the DESC SRD for all of our samples. The lens sample has 5 bins equally spaced in 0.2<z<1.2, with bin width Δ z =0.2, and bin edges defined using z_ phot.
For source samples, the DESC SRD requires five bins with equal number of galaxies. To do so, we first combine the 10 depth quantiles, and then split the sample into five z_ phot quantiles.
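A sketch of this bin assignment (assuming the photo-z point estimates are already computed; the handling of objects outside the lens range is an illustrative implementation choice) is:

```python
import numpy as np

def assign_tomo_bins(z_phot_lens, z_phot_src, n_src_bins=5):
    """Lens: 5 equal-width bins in 0.2 < z_phot < 1.2; source: equal-number quantiles."""
    lens_edges = np.linspace(0.2, 1.2, 6)
    lens_bin = np.digitize(z_phot_lens, lens_edges) - 1
    lens_bin[(z_phot_lens < 0.2) | (z_phot_lens >= 1.2)] = -1   # outside the lens selection

    src_edges = np.quantile(z_phot_src, np.linspace(0, 1, n_src_bins + 1))
    src_bin = np.clip(np.digitize(z_phot_src, src_edges) - 1, 0, n_src_bins - 1)
    return lens_bin, src_bin
```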
Notice that in practice, tomographic binning can be determined in different ways, often with the aim of maximizing the signal-to-noise of the two-point measurements. In some cases, a classification algorithm, e.g. a random forest, rather than a photo-z estimator, is used to separate samples into broad redshift bins. We refer the interested readers to <cit.> for explorations of optimal tomographic binning strategies for LSST.
Notice also that, following the DESC SRD, we do not apply additional magnitude cuts to the lens sample. Such cuts are applied, for example, for the DES Y3 MagLim lens sample, where a selection of i>17.5 and i<4z_ phot+18 is used <cit.>. These cuts reduce the number of faint, low-redshift galaxies in the lens sample, such that the photometric redshift calibration is more robust.
Notice that if the lens samples are selected with a brighter cut, one would expect a different and likely reduced depth variation.
We explore this particular case in Appendix <ref>.
Fig. <ref> shows the normalized true redshift distribution, p(z), of the lens and source tomographic bins for Y3 as an example, split by the BPZ redshifts (with or without odds selection) and the FZBoost redshifts. The dashed lines show the p(z) measured from the shallowest samples, whereas the solid lines show that from the deepest samples. The BPZ case shows more extended tails in each tomographic bin compared to the FZBoost case, and for the source galaxies, a noticeable outlier population at low redshifts in the highest tomographic bin. We see that in most cases, there is a clear difference in p(z) between the shallow and the deep samples: the deep samples seem to shrink the tails, making p(z) more peaky towards the mean redshift (although this is not the case for the odds≥0.9 sample), and their p(z) seems to shift towards higher redshift at the same time. To quantify these changes, we define metrics for the impact of variable depth below.
§.§ Metrics for impact of variable depth
The first metric is the variation in the number of objects in each sample, N_ gal, as a function of the coadd i-band depth. This is the most direct impact of varying depth: deeper depth leads to more detection of objects within the selection cut. The result is that the galaxy density contrast, δ_g(θ)=[N(θ)-N̅]/N̅, where N(θ) is the per-pixel number count at pixel θ, and N̅ is the mean count over the whole footprint, fluctuates according to the depth variation, leading to a spurious clustering signal in the two-point statistics.
To quantify the relative changes, we measure the average number of objects per tomographic bin across all 10 depth quantiles, N̅_ gal=∑_i N_ gal,iw_i, where i=1,..,10 denotes the depth bin, and w_i∼0.1 is the weight proportional to the number of pixels in that quantile. We quote the change of object number in terms of N_ gal/N̅_ gal.
The second metric quantifies the mean redshift of the tomographic bin as a function of depth:
⟨ z ⟩ = ∫ z p(z) dz ,
where p(z) is the true redshift distribution of the galaxy sample in the tomographic bin with normalization ∫ p(z) dz=1. Weak lensing is particularly sensitive to the mean distance to the source sample: the lensing kernel thus differs on patches with different depth.
Here, we look at the difference between the mean redshift ⟨ z ⟩_i of depth quantile i and that of the full sample, ⟨ z ⟩_ tot, i.e., Δ⟨ z ⟩≡⟨ z ⟩_i - ⟨ z ⟩_ tot. More specifically, we look at the quantity Δ⟨ z ⟩ /(1+⟨ z ⟩_ tot), where the weighting accounts for the increase in photo-z error towards higher redshifts. This format also allows us to compare with the DESC SRD requirements.
The third metric quantifies the width of the tomographic bin. This is not a well-defined quantity because the p(z) in many cases deviate strongly from a Gaussian distribution. One could use the variance, or the second moment of the redshift distribution:
σ_z^2 = ∫ (z-⟨ z ⟩)^2 p(z) dz.
However, this quantity is very sensitive to the tails of the distribution: larger tails of p(z) increases σ_z, even if the bulk of the distribution does not change much.
In our case, the width of the tomographic bin is most relevant for galaxy clustering measurements: the smaller the bin width, the larger the clustering signal.
Specifically, in the Limber approximation, the galaxy auto-correlation angular power spectrum is given by
C_ℓ^ gg=∫dχ/χ^2(z)[ H(z)/c p(z)]^2 P_gg(k=(ℓ+1/2)/χ, z),
where ℓ is the degree of the spherical harmonics, χ is the comoving distance, H(z) is the expansion rate at redshift z, c is the speed of light, k is the 3-dimensional wave vector, and P_gg is the 3-dimensional galaxy power spectrum.
Assuming that within the tomographic bin, the redshift evolution of galaxy bias is small, and all other functions can be approximated at the mean value at the centre of the bin, the clustering signal is proportional to the integral of the square of the galaxy redshift distribution, p(z). This assumption breaks down if the tomographic bin width is broad, for instances, the combination of all five lens bins.
Hence, we define the following quantity:
W_z ≡∫ p^2(z) dz
as the LSS diagnostic metric, which corresponds to changes of the two-point angular power spectrum kernel with respect to changes in p(z).
This is a useful complement to the second moment, σ_z, because σ_z can be sensitive to the tails of the p(z) distribution caused by a small population of outliers in photo-z; however, the impact of this population could be small for galaxy clustering, which is characterised by W_z. For both of these quantities, we look at the ratio with the overall sample combining all depth quantiles.
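A minimal sketch of how these per-quantile metrics can be evaluated from the true redshifts of one tomographic bin is shown below; the redshift grid and the histogram-based p(z) estimate are illustrative assumptions (in the paper the quantities are then compared with the average over all depth quantiles).

```python
import numpy as np

def depth_metrics(z_true, quantile_idx, n_quantiles=10, z_edges=np.linspace(0.0, 3.0, 301)):
    """N_gal, <z>, sigma_z and W_z per i-band depth quantile for one tomographic bin."""
    metrics = []
    for q in range(n_quantiles):
        z_q = z_true[quantile_idx == q]
        pz, edges = np.histogram(z_q, bins=z_edges, density=True)  # normalised p(z)
        zc = 0.5 * (edges[1:] + edges[:-1])
        mean_z = np.trapz(zc * pz, zc)
        sigma_z = np.sqrt(np.trapz((zc - mean_z)**2 * pz, zc))
        w_z = np.trapz(pz**2, zc)
        metrics.append({"N_gal": z_q.size, "mean_z": mean_z,
                        "sigma_z": sigma_z, "W_z": w_z})
    return metrics
```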
We show all the mean metric quantities in each tomographic bin and each quantile for Y1, Y3, and Y5 in Table <ref> for BPZ and Table <ref> for FZBoost.
Notice that for the p(z)-related quantities, we have used the true redshifts, but in practice these are not accessible. Rather, unless one uses a Bayesian hierarchical model such as CHIPPR <cit.>, one only has access to the redshift distribution p_c(z) calibrated against some calibration sample via, e.g., a Self-Organizing Map (SOM), which is itself associated with biases and uncertainties that can be impacted by varying depth.
The case we present here thus is idealized, where the calibration produces the perfect true p(z). This allows us to propagate the actual impact of varying depth on p(z) to the 3×2pt data vector, but does not allow us to assess the bias at the level of modelling due to using an `incorrect' p_c(z) that is affected also by the varying depth.
We leave this more sophisticated case to future work.
§ RESULTS
This section presents our results on the impact of variable depth via three metrics: the number of objects (Section <ref>), mean redshift of the tomographic bin (Section <ref>), and the width of the tomographic bin (Section <ref>).
§.§ Number of objects
Figure <ref> shows the change in the number of objects, N_ gal, as a function of the i-band extinction-corrected coadd depth, m_5^ ex, compared to the overall mean, for lens and source tomographic bins in Y1, Y3, and Y5.
In general, we find an approximately linear increase of number of objects as the i-band depth increases, with the higher two redshift bins showing the most extreme variation. For the lower redshift bins, the variation can be ∼ 10% compared to the mean value, whereas for bin 5, the variation can be as large as ∼ 40%. The trend does not seem to change much at different observing years.
This is the result of the i-band gold cut and the high SNR selection. The scatter in magnitudes is larger for the shallower sample, hence given a magnitude cut, the shallower sample will have fewer objects.
At fixed magnitude, objects in deeper regions have larger SNR, resulting in more faint galaxies surviving the SNR cut.
Given that the gold cut and the SNR at a given magnitude evolve with the coadd depth of each observing year, we expect the trend to be similar from Y1 to Y5.
It is interesting to see also that per tomographic bin, the trends for baseline BPZ and FZBoost are similar, despite having quite different features in the photo-z vs spec-z plane. The variation between bins 1 - 4 is slightly larger in the BPZ case.
For the BPZ redshifts, the inclusion of the odds selection increases the variation in object number, especially in the highest redshift bin. The steeper slope might be due to the fact that objects with larger photometric errors from the shallower regions are likely to result in a poorer fit, leading to smaller odds values. Hence, the odds≥0.9 selection removes more objects from the shallower regions than the baseline selection does.
§.§ Mean redshift
Figure <ref> shows the variation in the mean redshift of the tomographic bin, ⟨ z ⟩, as a function of the i-band extinction-corrected coadd depth, m_5^ ex, for lens and source samples in Y1, Y3, and Y5.
In general, ⟨ z ⟩ increases with the i-band coadd depth. This is expected as more faint, high redshift galaxies that are scattered within the magnitude cut are included in the deeper sample, resulting in an increased high redshift population.
In general, the slope of this relation is similar across tomographic bins for both lens and source samples, with a variation of |Δ z /(1+⟨ z ⟩)|∼0.005 - 0.01.
This is not true for bin 5 in the source sample, where the variation with depth is noticeably larger.
This could be explained by this bin containing objects with the highest z_ phot, which are also most susceptible to scatter in the faint end and outliers in the photo-z estimators. This trend becomes more extreme from Y1 to Y5. By reducing outliers with the BPZ odds cut, the variation in source bin 5 is slightly reduced, although still higher than the nominal level.
There are some differences between the BPZ and FZBoost cases: the slope slightly grows from Y1 to Y5 in the BPZ case, whereas it stays consistent in the FZBoost case, and the two cases converge by Y5.
On the same figure, we mark the DESC SRD requirements for photo-z as a dark grey band at Δ z /(1+⟨ z ⟩)=±0.002 and a light grey band at Δ z /(1+⟨ z ⟩)=±0.005.
The shifts in mean redshift reach the limit of the requirements for Y1, and exceed the requirement for Y10.
§.§ Width of the tomographic bin
Figure <ref> shows the change in the tomographic bin width parameters, σ_z and W_z, as defined in Section <ref> for the lens galaxies as a function of the i-band extinction-corrected coadd depth, m_5^ ex, in Y1, Y3, and Y5.
The width of the tomographic bin can change with depth due to the scatter in the photo-z vs spec-z plane. For example, a deeper sample may have a smaller scatter for the bulk of the sample, but include fainter objects that end up as outliers, resulting in a more peaked distribution at the centre with pronounced long tails.
The left two columns of Fig. <ref> show the changes in the second moment, σ_z, for both the BPZ (first column) and FZBoost case (second column).
For BPZ, there is little change in this parameter for Y1 at different depth, but for Y3 and Y5, σ_z increases with depth. Including odds selection reduces the trend, and in some cases reverses it. For FZBoost, the trend is similar to BPZ, but bin 1 shows a particularly large variation by as much as ∼30%.
This is because σ_z is sensitive to the entire distribution, not just the peak, and outliers at high redshift can significantly impact this parameter. Fig. <ref> shows the same p(z) distributions for Y3 on a logarithmic scale, where the high-redshift outliers are visible. Indeed, one can see an enhanced high-redshift population for bin 1 in the FZBoost case. The odds cut removes most of the outliers, so that σ_z reflects the change of the peak width with depth, hence giving the reversed trend.
The right two columns of Fig. <ref> show the changes in W_z. Given a tomographic bin, a larger W_z means a more peaked redshift distribution, hence a larger clustering signal.
One can see that W_z is more sensitive to the bulk of the p(z) distribution, as it increases with depth in most bins. We see that the variation in W_z is within 10% from the mean, with the largest variation coming from bins 2, 3, and 4. The highest and lowest tomographic bins, on the other hand, do not change much, despite their σ_z varying significantly with depth. For the BPZ case, adding the additional cut in the odds parameter reduces such trends in general, and the trend in the highest tomographic bin is reversed.
§ IMPACT ON THE WEAK LENSING 3X2PT MEASUREMENTS
We use the Y3 FZBoost photo-z as an example to showcase the varying depth effects, by propagating the number density and p(z) variation from the previous section into the weak lensing 3×2pt data vector. In Section <ref>, we describe how the mock large-scale structure and weak lensing shear maps are constructed with the inclusion of non-uniformity. In Section <ref>, we present the measured 3×2pt data vector in both the uniform and the variable depth cases.
§.§ Mock maps with varying depth
To construct the mock LSST catalogue, we use one of the publicly available Gower Street Simulations <cit.>. This is a suite of 800 N-body cosmological simulations created using PKDGRAV3 <cit.> with various wCDM cosmological parameters. The simulation outputs are saved as 101 lightcones in HEALPix format with N_ side=2048 between 0<z<49. To fill the full sky, the boxes are repeated 8000 times in a 20×20×20 array. For shells z<1.5, though, only three replications are required.
We use the particular simulation with ΛCDM cosmology: w=-1, h=0.70, Ω_m=0.279, Ω_b=0.046, σ_8=0.82, and n_s=0.97. The dark matter density contrast map, δ_m, is computed using particle counts at N_ side=512 (corresponding to a pixel size of 47.2 arcmin^2), and the corresponding lensing convergence map, κ, is produced with the Born approximation using BornRaytrace[<https://github.com/NiallJeffrey/BornRaytrace>] <cit.>. Finally, the shear map, (γ_1,γ_2), is produced in spherical harmonic space via
γ_E,ℓ m = κ_E, ℓ m√((ℓ + 2)(ℓ - 1)/(ℓ(ℓ + 1))),
and we transform γ_E,ℓ m as a spin-2 field, γ_ℓ m=γ_E,ℓ m + i γ_B,ℓ m, assuming zero B-mode. For more details see <cit.>.
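A healpy-based sketch of this convergence-to-shear step is given below; it assumes a full-sky κ map, the spherical prefactor √[(ℓ+2)(ℓ-1)/(ℓ(ℓ+1))] as in the equation above, and zero B-mode. Sign and polarisation conventions depend on the healpy spin-transform definitions, so this is illustrative rather than a drop-in replacement for BornRaytrace.

```python
import numpy as np
import healpy as hp

def kappa_to_shear(kappa_map, lmax=None):
    """Convert a convergence map to (gamma1, gamma2), assuming zero B-mode."""
    nside = hp.get_nside(kappa_map)
    lmax = lmax if lmax is not None else 3 * nside - 1
    kappa_lm = hp.map2alm(kappa_map, lmax=lmax)

    ell = np.arange(lmax + 1)
    fac = np.zeros(lmax + 1)
    good = ell > 1
    fac[good] = np.sqrt((ell[good] + 2.0) * (ell[good] - 1.0)
                        / (ell[good] * (ell[good] + 1.0)))

    gamma_e_lm = hp.almxfl(kappa_lm, fac)      # E-mode shear harmonics
    gamma_b_lm = np.zeros_like(gamma_e_lm)     # no B-mode by construction
    gamma1, gamma2 = hp.alm2map_spin([gamma_e_lm, gamma_b_lm], nside, spin=2, lmax=lmax)
    return gamma1, gamma2
```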
We construct the lens density and source shear maps as follows. In the noiseless case, given a lens (source) redshift distribution, p_i(z), for a tomographic bin i, we construct the lens density (source shear) map as M_i= ∑_j M_j p_i(z_j)Δ z_j, where j denotes the lightcone shells in the Gower Street simulation, M_j denotes the map in this particular shell, and Δ z_j denotes the shell width.
The noisy maps are generated in the following way.
Lens galaxy counts in tomographic bin i on each pixel θ are drawn from a Poisson distribution. For a shell j, the Poisson mean is μ_j(θ)=n_ gal,j[1+bδ_m,j(θ)], where b is the linear galaxy bias and n_ gal,j=n_ galp_i(z_j)Δ z_j, with n_ gal being the average count per pixel in this tomographic bin. Here, we set b=1 to avoid negative counts in extremely underdense pixels.
However, notice that in a magnitude-limited survey, the galaxy bias is typically b>1 and evolves with redshift, not to mention the scale-dependence of bias on non-linear scales.
One approach to sample b>1 is to simply set negative counts to zero. However, this may introduce spurious behaviour in the two-point function of the field.
Given the main purpose here is to propagate the systematic effects due to depth only, we justify our choice by prioritizing the precision of the measured two-point statistics compared to theory inputs.
We assume the ensemble-averaged per-component shape dispersion to be σ_e=⟨√((e_1^2+e_2^2)/2)⟩=0.35, chosen to roughly match that measured in the Stage III lensing surveys <cit.>.
For a tomographic bin i, we first assign source counts in the same way as above, resulting in n̂_ source (θ) galaxies in pixel θ. We then randomly assign shapes drawn from a Gaussian distribution, 𝒩∼(0,σ_e), for each component n̂_ source (θ) times, and we compute the mean shape noise in each pixel. We end up with a shape noise map, which we then add to the true shear map for each tomographic bin.
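The sketch below illustrates the noisy-map construction described above: Poisson sampling of lens counts per shell, and a per-pixel mean shape noise where one Gaussian draw of width σ_e/√n is statistically equivalent to averaging n draws of width σ_e. The random seed and array layout are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lens_counts(delta_m_shells, p_shell, dz_shell, n_gal_per_pix, b=1.0):
    """Poisson-sample lens counts per pixel, summing over lightcone shells."""
    counts = np.zeros(delta_m_shells.shape[1])
    for delta_j, p_j, dz_j in zip(delta_m_shells, p_shell, dz_shell):
        mu = n_gal_per_pix * p_j * dz_j * np.clip(1.0 + b * delta_j, 0.0, None)
        counts += rng.poisson(mu)
    return counts

def shape_noise_map(n_source_per_pix, sigma_e=0.35):
    """Mean intrinsic ellipticity per pixel for each shear component."""
    n = np.clip(n_source_per_pix, 1, None)                  # guard against empty pixels
    e1 = rng.normal(0.0, sigma_e, size=n.shape) / np.sqrt(n)
    e2 = rng.normal(0.0, sigma_e, size=n.shape) / np.sqrt(n)
    empty = n_source_per_pix == 0
    return np.where(empty, 0.0, e1), np.where(empty, 0.0, e2)
```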
To imprint the varying depth effects, we divide the footprint into 10 sub-regions containing the pixels in each of the i-band m_5^ ex deciles, and repeat the above procedure with distinct number density and p(z) for both the lens and source galaxies, according to the findings in previous sections. We do not assign depth-varying shape noise, following the finding in <cit.> that the shape noise is only a weak function of depth.
We also produce the noiseless cases for varying depth. For density contrast, we produce two versions: one with varying p(z) only, and one with additional amplitude modulation δ_m + Δδ, where Δδ+1=N_ gal/N̅_ gal, as shown in Fig. <ref>. The former is used to isolate the effect of varying p(z) only.
We adopt the cumulative number density of the photometric sample as a function of the i-band limiting magnitude given by the DESC SRD:
N(<i_ lim)=42.9(1-f_ mask)10^0.359(i_ lim-25) arcmin^-2,
where f_ mask accounts for the reduction factor for masks due to image defects and bright stars, and f_ mask=0.12 corresponds to a similar level of reduction in HSC Y1 <cit.>.
Hence, substituting i_ lim=24.6 for LSST Y3, the expected total number density is N(<24.6)=27.1 arcmin^-2.
This is slightly larger but comparable to the HSC Y3 raw number density of N=22.9 arcmin^-2 <cit.> at a similar magnitude cut of i_ lim<24.5 in the cModel magnitude.
We estimate the total lens galaxy number density for our sample by N_ lens=N(<24.6)f_ LS, where f_ LS=0.90 is the ratio between the total number of lens and source samples (averaged over depth bins) from our degraded Roman-Rubin simulation catalogue, hence N_ lens=24.4 arcmin^-2.
For each lens tomographic bin, we obtain the following mean number density: 3.93, 6.08, 5.66, 5.71, 3.03 arcmin^-2.
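A one-line check of these numbers (the SRD scaling above evaluated at the Y3 gold cut, then scaled by f_LS = 0.90) could read:

```python
def cumulative_density(i_lim, f_mask=0.12):
    """DESC SRD cumulative number density of the photometric sample, in arcmin^-2."""
    return 42.9 * (1.0 - f_mask) * 10.0 ** (0.359 * (i_lim - 25.0))

n_total = cumulative_density(24.6)   # ~27.1 arcmin^-2 for the LSST Y3 gold cut
n_lens = 0.90 * n_total              # ~24.4 arcmin^-2 for the lens sample
```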
We also explore the case using a MagLim-like lens sample with a much sparser density in Appendix <ref>.
For the source sample, it is the effective number density
n_ eff, rather than the raw number density, that determines the shear signal-to-noise. n_ eff accounts for the down-weighting of low signal-to-noise shape measurements, as defined in e.g. <cit.>. For LSST, n_ eff is estimated for Y1 and Y10 with different scenarios in Table F1 in the DESC SRD.
In the case adopted for forecasting, where the shapes are measured in the i+r bands and the blending effect is accounted for, n_ eff is ∼60% of the raw number density for both Y1 and Y10.
We follow this estimation for Y3, hence adopting n_ eff=16.3 arcmin^-2 for the full source sample, and 3.26 arcmin^-2 for each tomographic bin. This is comparable, but slightly more sparse compared to HSC Y3, where n_ eff=19.9 arcmin^-2 <cit.>.
Meanwhile, we also generate a uniform sample for comparison, in which the number density and p(z) are given by the mean of the depth quantiles.
We assign uniform weights to lens and source galaxies.
§.§ Weak lensing 3x2pt data vector
We use NaMaster <cit.> to measure the 3×2pt data vector in Fourier space: C_ℓ^ gg, C_ℓ^ gγ, and C_ℓ^γγ for the lens and source tomographic bins. NaMaster computes the mixing matrix to account for the masking effects, and produces decoupled band powers. The HEALPix pixel window function correction is also applied when comparing the data with input theory.
We adopt 14 ℓ-bins in range [20,1000] with log spacing. Notice that the maximum ℓ is a conservative choice for C_ℓ^γγ, but is sufficient for our purpose to demonstrate the impact of variable depth on relatively large scales. For galaxy clustering and galaxy-galaxy lensing, we apply an additional scale cut at ℓ_ max=k_ maxχ(⟨ z ⟩)-0.5 following the DESC SRD, where k_ max=0.3h Mpc^-1, and χ(⟨ z ⟩) is the comoving distance at the mean redshift ⟨ z ⟩ of the lens tomographic bin.
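A sketch of the band-power measurement with pymaster is shown below. The binning and field definitions are simplified relative to the actual analysis (no apodisation or purification is shown), and the scale-cut helper implements the formula above under the assumption that χ is supplied in units matching k_max.

```python
import numpy as np
import pymaster as nmt

def measure_cl_gg(mask, delta_g_a, delta_g_b):
    """Decoupled galaxy clustering band powers between two density-contrast maps."""
    edges = np.unique(np.geomspace(20, 1001, 15).astype(int))   # ~14 log-spaced bins
    bins = nmt.NmtBin.from_edges(edges[:-1], edges[1:])
    f_a = nmt.NmtField(mask, [delta_g_a])
    f_b = nmt.NmtField(mask, [delta_g_b])
    cl = nmt.compute_full_master(f_a, f_b, bins)[0]
    return bins.get_effective_ells(), cl

def srd_scale_cut(chi_mean, k_max=0.3):
    """ell_max = k_max * chi(<z>) - 0.5 for clustering and galaxy-galaxy lensing."""
    return k_max * chi_mean - 0.5
```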
We generate theory angular power spectra assuming spatial uniformity with the Core Cosmology Library[<https://github.com/LSSTDESC/CCL>] <cit.>. CCL uses HALOFIT <cit.> non-linear power spectrum and Limber approximation when computing the angular power spectra.
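The corresponding theory prediction can be obtained with pyccl along the following lines (v2-style call signatures; the cosmological parameters are those of the Gower Street run quoted above, with Ω_c = Ω_m - Ω_b, and the constant bias b = 1 matches the map construction):

```python
import numpy as np
import pyccl as ccl

cosmo = ccl.Cosmology(Omega_c=0.233, Omega_b=0.046, h=0.70, sigma8=0.82, n_s=0.97)

def theory_cls(z, pz_lens, pz_src, ells, bias=1.0):
    lens = ccl.NumberCountsTracer(cosmo, has_rsd=False,
                                  dndz=(z, pz_lens),
                                  bias=(z, bias * np.ones_like(z)))
    src = ccl.WeakLensingTracer(cosmo, dndz=(z, pz_src))
    cl_gg = ccl.angular_cl(cosmo, lens, lens, ells)
    cl_gk = ccl.angular_cl(cosmo, lens, src, ells)
    cl_kk = ccl.angular_cl(cosmo, src, src, ells)
    return cl_gg, cl_gk, cl_kk
```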
We compute the Gaussian covariance matrix using NaMaster with theoretical data vectors. The covariance includes mask effects, shot-noise, and shape noise power spectra. It should be noted that this is done assuming uniformity.
In the varying depth case, the true covariance contains extra variance, due to spatial correlation in the noise with the number count.
Also, the assumption of a purely Gaussian covariance is not completely true. On very large scales, non-Gaussian mode coupling at scales larger than the survey footprint results in a term called super-sample covariance <cit.>. Here we expect it to be relatively small because of the large sky coverage of LSST.
On small scales, non-linear structure formation also introduces non-Gaussian terms <cit.>. With the scale cuts adopted in C_ℓ^gg and C_ℓ^gγ we expect that such non-Gaussian contribution to be small.
The galaxy clustering angular power spectra measurements, C_ℓ^ gg, are shown in Fig. <ref>. The tomographic bin number is indicated in the upper right corner as (i,i) for bin i. The measurements for the uniform case are shown as red dots, and that for the varying depth case are shown in purple. The data points are shot-noise-subtracted.
We see a clear difference between the uniform and the varying depth cases at ℓ<100, and it becomes more significant at higher redshifts.
The impact at large scales is expected, as the i-band coadd depth varies relatively smoothly and the rolling pattern is imposed at relatively large scales.
The trend with redshifts is also expected, due to two main reasons. Firstly, the slope d(N_ gal/N̅_ gal)/dm_5 increases slightly with redshift, and is significantly larger for bin 5, as shown in the right middle panel of Fig. <ref>. This means that non-uniformity is most severe in these bins. Secondly, the clustering amplitude increases towards lower redshifts due to structure growth, hence the non-uniformity imprinted in δ_g is less obvious in lower redshift bins.
In practice, the number density fluctuations are mitigated via the inclusion of selection weights, w(θ), such that the corrected density field is defined as δ_g(θ) = N(θ) / [w(θ) N̅_w] - 1, where N̅_w=∑ N(θ) / ∑ w(θ) (see e.g. <cit.>). In addition, these weights will be used to compute the mode coupling matrix and shot noise, such that the varying number density is taken into account in the likelihood analysis.
A more subtle effect is the difference in redshift distribution at different depths. To isolate its impact, we compare the clustering power spectra from the noiseless sample varying p(z) only with those from the noiseless uniform case. The ratio of the measurements is shown as dashed lines in the first panel of Fig. <ref>. We find that once the non-uniformity in number density is removed, the variation in p(z) does not significantly bias the power spectra, and we recover the uniform case to better than 0.5%.
The galaxy-shear and shear-shear power spectra, C_ℓ^ gγ and C_ℓ^γγ, are shown in Figs. <ref> and <ref>, respectively. The source - lens and source - source combinations are indicated on the upper right as (i, j). In both cases, we only show the non-zero E-modes, and we check that the B-modes are consistent with zero.
For the galaxy-shear case, measurements from combinations i<j are not shown, because we do not include effects such as magnification or intrinsic alignment, hence these measurements are low signal-to-noise or consistent with zero.
We see that, overall, the impact of variable depth is much smaller compared to galaxy clustering.
In the galaxy-galaxy shear measurements, only combination (5,5) shows a significant χ^2 in the variable depth case, and the main deviation is at ℓ<100. This could be a joint effect where non-uniformity is largest in the highest redshift bin for both lens and source.
There is negligible difference in the shear-shear measurements for all other combinations given the measurement error.
To look at this further, we take the noiseless case and compute the ratio between measurements from the varying depth sample and the uniform sample. We show some examples along the diagonal, i.e., the (i,i) combinations, in the middle and right panels of Fig. <ref>. The off-diagonal measurements lie mostly within the variation range of the ones shown here. In the case of C_ℓ^ gγ, we see that deviations are large at low ℓ when both the density and p(z) are non-uniform (shown as solid lines); when the density non-uniformity is removed (shown as dashed lines), the results are consistent to within 5%. For C_ℓ^γγ, we see that the largest impact is from the highest tomographic bin, reaching up to 0.5%.
These results are consistent with the analytical approach in <cit.>, where, in general, the varying depth effect in the redshift distributions is sub-percent and the weak lensing probes are less susceptible to these variations.
Our results are quite different from those of <cit.> (hereafter H20) for the KiDS cosmic shear analysis in several aspects. H20 found that the largest impact comes from sub-pointing, small scales, and for a KiDS-like set-up, the difference between the uniform and variable depth cases is 3% - 5% at an angular scale of θ=10 arcmin. Furthermore, the variable depth effect is stronger in lower redshift bins than in higher redshift bins. Several differences in the analysis may contribute to these different results. Firstly, the non-uniformity in KiDS is rather different from that considered here: the KiDS footprint consists of many 1 deg^2 pointings, each having distinctive observing conditions because each field received only a single visit. This means that survey properties such as depth are weakly correlated between different pointings. One can write down a scale-dependent function, E(θ), to specify the probability of a pair of galaxies falling in the same pointing at each θ, and this essentially gives rise to the scale dependence of the variable depth effect in H20. For LSST, the above assumptions are not true, and E(θ) (if one can write it down) would take a very different form compared with that in KiDS.
Secondly, due to the single visit, there is a much larger variation in depth, number density, and Δ z in KiDS compared to this work (the tomographic bin centre can shift by up to Δ z∼ 0.2 in redshift, as shown in Fig. 2 of H20). This means that the variable depth effects in KiDS as explored by H20 are significantly larger than in this work.
This also explains their redshift dependence, because for KiDS, the average redshift between pointings varies the most in the lowest redshift bins. Lastly, although our ℓ_ max here corresponds to θ∼ 10 arcmin, the results are not directly comparable, as H20 conducted their analysis in real space, i.e. in ξ_±(θ).
To sum up, the largest impact of varying depth comes from galaxy clustering, whereas the impact on weak lensing probes is much smaller. Higher redshift bins are more susceptible due to a higher sensitivity in number density and redshifts with depth. Given the mock LSST Y3 uncertainty, one can clearly detect bias in the power spectrum in galaxy clustering and the galaxy-galaxy shear bin (4,4), while all other combinations do not seem to have detectable impacts.
Furthermore, once the density non-uniformity is removed, the impact of varying depth is further reduced. There are several ways to mitigate number density variation, such as mode projection <cit.>, template subtraction <cit.>, iterative regression <cit.>, and machine learning methods using neural networks <cit.> and a Self-Organizing Map (SOM) <cit.>.
See <cit.> for a thorough review.
Notice that, despite these methods, it is difficult to guarantee a complete removal of non-uniformity, and in some cases the clustering signal can also be reduced as a result.
Additional sky cuts to exclude problematic regions can also effectively reduce density variation, at the cost of losing sky coverage.
Finally, for the lens sample, a brighter magnitude cut can also greatly reduce the variable depth effect (see Appendix <ref> for a MagLim-like lens selection), at the cost of sample sparsity.
Nevertheless, non-uniformity in p(z) alone seems to be safely averaged out in the two-point statistics measurements.
§.§.§ Impact on spectroscopic calibration
Here, we consider another potential source of systematics arising from small spectroscopic calibration fields. Redshift calibration for photometric surveys such as LSST is usually done using small but deep spectroscopic surveys, e.g. the C3R2 survey <cit.>. Each field in these surveys has a coverage of a few deg^2.
If a calibration field overlaps with a particularly shallow or deep region, the calibration (e.g. a trained SOM) could bias the overall redshift distribution when it is generalised to the whole footprint. For example, a SOM trained in a shallow region will contain larger noise, which may increase the scatter for the overall sample. The lack of high-redshift, fainter objects in the shallow region could also cause bias when the SOM is applied to objects in deeper regions.
The specific impact will depend on the calibration method and details of the calibration, which is beyond the scope of this paper.
Here, we qualitatively assess the impact via the difference in the 3×2pt theory vectors computed using the p(z) from a particular quantile and those computed using the mean p(z), as shown in Fig. <ref>. The solid lines show cases from the shallowest quantile, qtl=0, and the dashed lines show cases from the deepest quantile, where qtl=9, highlighting the worst case scenarios. For C_ℓ^gγ and C_ℓ^γγ, only cases where the tracers are in the same bin are shown, but the other lens - source combinations have a comparable variation.
We see that naively taking the p(z) from a single quantile and assuming it to be the p(z) of the full sample can give rise to as much as a 10% bias compared to the uniform case.
This effect is reduced by having multiple calibration fields across the LSST footprint. Currently, many of the calibration fields overlap with the LSST Deep Drilling Fields (DDFs), which will be much deeper than the WFD. The impact of variable depth can then be mitigated via a two-tiered SOM calibration, mapping from the deep to the wide field <cit.>, and synthetic source injection <cit.>, mimicking the degradation of the deep field objects across the LSST footprint, as done in the DES Y3 analysis.
§ CONCLUSIONS
In this paper, we investigated and quantified the impact of spatial non-uniformity due to survey conditions on redshift distributions in the context of early LSST data. We used the Roman-Rubin simulation as the truth catalogue, and degraded the photometry using the LSST error model implemented in the RAIL package. The degradation utilizes the survey condition maps from the OpSim baseline v3.3 for the 1-year, 3-year, and 5-year LSST data. We run BPZ and FZBoost photometric redshift estimators on the degraded sample and use the photo-z mode to separate the samples into five lens and five source tomographic bins. Finally, we apply the LSST Gold selection and a signal-to-noise cut. Taking the extinction-corrected 5σ coadd depth of the detection band, i-band, as the primary source of non-uniformity, we quantify the impact in terms of three measures: the number of objects, the mean redshift of the tomographic bin, and the tomographic bin width.
We find that:
* The number of objects increases with the i-band depth in general, and at extreme depth values, the number of objects can vary by a factor of two. The trend is relatively consistent between cases using BPZ and FZBoost, although selecting odds≥ 0.9 for BPZ amplifies the trend. The largest correlation comes from the highest tomographic bin.
* The mean redshift in each bin increases with the i-band depth, with a variation of |Δ z /(1+⟨ z ⟩)|∼0.005 - 0.01. The lens samples show a relatively consistent trend across different tomographic bins, whereas for the source sample, the highest tomographic bin shows the largest variation. This reaches the limit of the requirements of 0.005 for Y1 as listed in the DESC SRD, and exceeds the requirement of 0.003 for Y10. At extreme depth variations, however, deviation in ⟨ z ⟩ could exceed Y1 requirements.
* The width of the lens tomographic bin is measured in terms of σ_z, which is sensitive to the entire redshift distribution, p(z), and W_z, which is sensitive to the peak of p(z); both vary at the level of 10%, with the variation slightly increasing with year. We find that in general, σ_z increases with the i-band depth due to fainter objects included in the deeper sample. W_z also increases with the i-band depth, due to a more peaked bulk p(z) as a result of higher SNR in deeper samples, although the trend can be reversed in some cases.
As emphasized before, the results derived for Y3 and Y5 correspond to particularly large rolling non-uniformity. Hence, the variations shown should be interpreted as an upper limit for early Rubin LSST static science. As shown in Appendix <ref>, if the final LSST lens selection is similar to the DES Y3 MagLim sample with a bright magnitude cut, then the expected variable depth impact will be milder than shown in our baseline cases.
We took the Y3 FZBoost photo-z as an example to propagate the impact of varying depth to the weak lensing 3×2pt measurements. To do this, we used one realization of the Gower Street N-body simulation, and generated lens galaxy maps and source shear maps with spatially varying number density and p(z). We measure the data vector in harmonic space using NaMaster, and also compare them with the theory expectation generated from the Core Cosmology Library. We find that the largest impact is on C_ℓ^ gg with the higher redshift bin measurements significantly biased. C_ℓ^ gγ is less sensitive to varying depth effects, although in the source - lens combination (4,4), there is a visible difference at low ℓ. C_ℓ^γγ shows no significant impact in all source - source combinations from varying depth, given the uncertainties in LSST Y3.
Finally, we also investigate cases where we do not include noise in the lens and source maps.
The difference between uniform and varying depth cases can be up to a few percent for C_ℓ^ gγ, and less than 0.5% for C_ℓ^γγ. Furthermore, by removing the density non-uniformity, and varying p(z) only with depth, one can reduce the bias in C_ℓ^ gg and C_ℓ^ gγ to sub-percent level.
Therefore, for early LSST analysis, it is crucial to account for the galaxy density variation, but the impact of varying p(z) seems to be negligible. We leave the investigation of an accurate mitigation strategy of the number density variation to future work.
Our current approach has some caveats.
Firstly, the fidelity of the colour-redshift relation in the Roman-Rubin simulation at z>1.5 is questionable. As already mentioned, the strong bifurcation of the blue objects at this high redshift may lead to worse (in the case of BPZ) or overly optimistic (in the case of FZBoost) performance when estimating the photo-z.
Secondly, we have adopted an analytic model to obtain the observed magnitudes in each band based on survey conditions. However, in reality, the observed magnitudes and colours also depend on the way they are measured. For example, for extended objects, cModel <cit.> and GAaP <cit.> methods are often applied. Although the photometry will be calibrated, the magnitude error may not be the same for different methods. This could introduce extra scatter in photo-z.
Thirdly, we have only tested two major photo-z estimators, observing some level of difference in the results. For example, compared to BPZ, the FZBoost samples show more consistency between different tomographic bins regarding the trend with i-band depth. Therefore, one should take the result as an order-of-magnitude estimate of the impact, but the specific trends are likely to differ for different photo-z methods.
Moreover, when propagating the effects to the data vector, we have made some simplifications. We considered a galaxy bias of b=1, and did not include systematics such as magnification bias or intrinsic alignments. This choice is to isolate the effect of varying depth on the pure lensing and clustering contribution, but it would be more realistic to include these effects.
Finally, we have not folded in the effects of blending, i.e. spatially nearby galaxies are detected as one object. This occurs when the surface density is high and the image is crowded, and could be significant for deep photometric surveys such as LSST.
The level of blending depends on both seeing and depth of the survey, hence, it could correlate with the variable depth effects discussed here.
The impact of blending on photo-z is the inclusion of a small fraction of ill-defined redshifts in the sample, increasing the photo-z scatter. Clustering redshift calibration, which measures galaxy clustering on small scales, can also be affected as these scales are most susceptible to blending.
Moreover, blending can affect shear measurements via e.g. lensing weights, hence introduce impact on galaxy-galaxy lensing and cosmic shear.
For example, <cit.> showed that approximately 12% of the galaxy sample in LSST consists of unrecognized blends, which can bias the S_8 measurement from cosmic shear by 2σ.
Furthermore, so far our results are based on the p(z) of the true redshifts of the sample. In reality, we do not have access to this, and our theory curve will be based on the calibrated redshift distribution p_c(z), which itself can be impacted by non-uniformity, depending on the calibration method.
For example, in many weak lensing surveys, a Self-Organizing Map (SOM) is used to calibrate redshifts by training on a photometric sub-sample with spectroscopic counterparts <cit.>. Taking sub-samples from a small calibration field (typically a few square degrees) located in a particularly shallow region could result in a trained SOM that captures different magnitudes, redshifts, and SNR than one trained in a deep region, as qualitatively shown in Section <ref>.
One remedy may come from calibration using clustering redshifts, which takes advantage of the galaxy clustering of the target sample with a spectroscopic sample sliced into thin redshift bins <cit.>. The nonphysical variation with depth will drop out in this method, giving an unbiased estimate of p(z).
We have only explored the impact of variable depth on two-point statistics here, but there could be potential impact on statistics beyond two-point. For example, for weak lensing shear, a similar effect in manifestation is source clustering, where the number density of source galaxies n(θ̂, z) is correlated with the measured shear γ(θ̂) for a given direction θ̂ on the sky, because source galaxies are themselves clustered. Impact of source clustering is negligible in two-point statistics for Stage III surveys, but is detected significantly in several higher order statistics in the DES Y3 data <cit.>. Given that the variable depth effect also modulates n(θ̂, z) (hence imprinting a fake `source clustering'), there may be non-negligible impact on higher order statistics with LSST.
We leave these explorations to future work.
§ AUTHOR CONTRIBUTION STATEMENTS
QH: Contributed to the conceptualization, data curation, formal analysis, and writing of the draft. Contributed to the development of RAIL.
BJ: Contributed to the conceptualization, funding acquisition, project administration, and revisions of the text.
EC: Contributed to RAIL software core functionalities used in the analysis. Minor contributions to revisions of text.
JFC: Contributed to the development of RAIL and the model.
PL: Contributed to the development of the Roman-Rubin Diffsky simulation.
AIM: Contributed to RAIL software core functionalities used in the analysis. Minor contributions to revisions of text.
SS: Contributed to RAIL software used in the analysis, namely the BPZ and FlexZBoost algorithms used in estimation, along with general software development. Minor contributions to revisions of text.
ZY: Contributed to the development of RAIL and the model; provided reviewing comments for the manuscript.
TZ: Contributed to the development of RAIL, including the LSST error model, and RAIL's core API; provided reviewing comments for the manuscript.
§ ACKNOWLEDGEMENTS
QH and BJ are supported by STFC grant ST/W001721/1 and the UCL Cosmoparticle Initiative.
This paper has undergone internal review in the LSST Dark Energy Science Collaboration.
The authors thank the internal reviewers, Boris Leistedt and Markus Rau, for their thorough and insightful comments.
This work also benefited from helpful comments by Pat Burchat, Ofer Lahav, Rachel Mandelbaum, and Andrina Nicola.
The DESC acknowledges ongoing support from the Institut National de
Physique Nucléaire et de Physique des Particules in France; the
Science & Technology Facilities Council in the United Kingdom; and the
Department of Energy, the National Science Foundation, and the LSST
Corporation in the United States. DESC uses resources of the IN2P3
Computing Center (CC-IN2P3–Lyon/Villeurbanne - France) funded by the
Centre National de la Recherche Scientifique; the National Energy
Research Scientific Computing Center, a DOE Office of Science User
Facility supported by the Office of Science of the U.S. Department of
Energy under Contract No. DE-AC02-05CH11231; STFC DiRAC HPC Facilities,
funded by UK BEIS National E-infrastructure capital grants; and the UK
particle physics grid, supported by the GridPP Collaboration. This
work was performed in part under DOE Contract DE-AC02-76SF00515.
JFC acknowledges support from the U.S. Department of Energy, Office of Science, Office of High Energy Physics Cosmic Frontier Research program under Award Number DE-SC0011665.
AIM acknowledges the support of Schmidt Sciences.
We acknowledge the use of arXiv and ADS for references, the use of the Python libraries and software mentioned in the main text for data analysis and plotting, and Overleaf for the writing of this paper.
§ DATA AVAILABILITY
The methodology of generating mock LSST photometry with observing conditions is included in the RAIL pipeline[<https://github.com/LSSTDESC/rail_pipelines/tree/main/src/rail/pipelines/examples/survey_nonuniformity>].
The mock galaxy catalogues with varying depth effect are available upon reasonable request.
§ COMPARISON OF LSST ERROR MODEL ON DC2
The 5σ depth per visit, m_5, depends on a set of observing conditions in the following way <cit.>:
m_5 = C_m + 0.50 (m_ sky-21) + 2.5log_10(0.7/θ_ eff) +
1.25log_10(t_ vis/30)-k_m(X-1),
where C_m is a constant that depends on the overall throughput of the system, m_ sky is the sky brightness in AB mag arcsec^-2, θ_ eff is the seeing in arcsec, t_ vis is the exposure time in seconds,
k_m is the atmospheric extinction coefficient, and X is the airmass.
The default values of the parameters in the above equation per band are given in Table 2 in <cit.>. The magnitude error for an N-year observation is computed as σ/√(N n_ vis), where the mean number of visits per year, n_ vis, can be derived from Table 1 in <cit.>.
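A direct transcription of these scalings (assuming, as in the text, that the coadd depth gains 2.5 log10(√N_vis) and that the magnitude error scales with the inverse square root of the total number of visits) is:

```python
import numpy as np

def m5_single_visit(C_m, m_sky, theta_eff, t_vis, k_m, airmass):
    """Single-visit 5-sigma point-source depth from the observing conditions."""
    return (C_m + 0.50 * (m_sky - 21.0) + 2.5 * np.log10(0.7 / theta_eff)
            + 1.25 * np.log10(t_vis / 30.0) - k_m * (airmass - 1.0))

def coadd_m5(m5_visit, n_visits):
    """Coadded depth after stacking n_visits exposures."""
    return m5_visit + 2.5 * np.log10(np.sqrt(n_visits))

def mag_err_coadd(sigma_single, n_year, n_vis_per_year):
    """Scale the single-visit magnitude error by 1/sqrt(total number of visits)."""
    return sigma_single / np.sqrt(n_year * n_vis_per_year)
```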
In this Appendix, we compare the LSST error model with the Rubin OpSim output as well as the Data Challenge 2 <cit.> dr6 magnitude errors. We perform our tests on the same OpSim version as in the main analysis (baseline v3.3), and we use the 5-year observing conditions, including the coadd 5σ point source depth, the single-visit 5σ point source depth, the sky brightness, and the number of visits.
We begin by checking Eq. <ref> using OpSim MAF maps over the DC2 footprint. The various survey conditions m_ sky, θ_ eff, and X are taken as the median values over the 5-year period, and other parameters C_m, t_ vis, and k_m are taken as the default values from <cit.>. The results from Eq. <ref> are the 5σ PSF magnitude limit in each band per visit, and we compare it with two quantities: the median 5σ depth map, and the equivalent per-visit depth from the coadded map: m_5=m_5^ coadd-2.5log(√(N_ vis)), where N_ vis is the number of visits at each pixel. The results are shown in Fig. <ref>. We see that in general, m_5 predicted by Eq. <ref> tends to be brighter than that from OpSim, and the difference is larger considering the coadd depth than the median depth. It seems that except for i-band which has a slightly different slope from unity, the difference in all other bands can be fixed by introducing a correction to C_m. For example, for the median m_5 case, the shifts needed are δ C_m={ -0.053, 0.032, -0.063, 0.070, 0.057, 0.027 } for ugrizy, respectively.
We also explicitly check whether the dependences on airmass, seeing, and sky brightness are as expected in Eq. <ref> with the default parameters. This is shown in Fig. <ref>. In all these exercises, we test the dependence of m_5 on the particular survey condition using the ensemble of pixels, fixing all other dependences to a constant C which we fit to the ensemble. We see that the airmass and seeing are well captured by Eq. <ref>, although the dependence of m_5 on airmass is weak. The sky brightness relation is less well captured by Eq. <ref>, especially for u and g. In general, however, we conclude that in the absence of a depth map, one can estimate an unbiased m_5 for Rubin observing conditions using Eq. <ref> with a modification of the C_m parameters for each band.
We then check Eqs. <ref> and <ref> with the DC2 DM catalogue, where the magnitude errors are obtained through the detection pipeline and are thus expected to be more realistic. In this case, we directly adopt the coadded depth as m_5. We also compute the error in the low-SNR limit (Eq. <ref>), which allows us to check the fainter magnitudes. For the extended magnitude errors, we compare with the CModel magnitudes in DC2.
This is shown in Fig. <ref>. We see that there is reasonable agreement for the PSF magnitude errors in most bands, except for the u-band, where the LSST error model predicts larger errors compared to those measured in DC2. However, it is also noticeable that the DC2 errors seem to be underestimated when comparing the observed magnitudes to the truth. The LSST error model also predicts consistently larger errors at the bright end.
When we add the extended error from the size of the galaxy (Eq. <ref>), we find that the scatter of the magnitude error at fixed magnitude is quite a bit larger than that measured by the cModel in DC2.
Both the PSF magnitude error and the scatter for the extended error in DC2 can be matched by the LSST error model by simple scaling of the PSF magnitude error by a constant for each band, as well as scaling the galaxy size a_ gal, b_ gal. We emphasise that due to the known issues in the DC2 catalogue, we do not calibrate the LSST error model to DC2 in our analysis. However, it is worth bearing in mind what the differences are, and that one needs to calibrate the model with the real data.
§ LENS AND SOURCE TOMOGRAPHIC BIN DETAILS
This section includes some supplementary information for the lens and source tomographic bins for the mock photometry sample, as discussed in Section <ref>.
Figure <ref> shows a similar plot as Fig. <ref>, but with the y-axis in logarithmic scale, and extended to z=3. Only tomographic bins 1, 3, and 5 are shown for visual clarity. This scaling enhances the small, high-redshift population for both lens and source galaxies.
Table <ref> shows the summary statistics on photo-z performance for BPZ and FZBoost at the 10% shallowest i-band coadd depth (qtl=0) and deepest depth (qtl=9) for the 1-year, 3-years, and 5-years mock LSST data. The summary statistics are: median bias, standard deviation (STD), normalized Median Absolute Deviation (NMAD), and outlier fraction.
Tables <ref> and <ref> show the mean values of the various metrics over the depth quantiles, given the gold cut adjusted for each year. The metrics include mean galaxy number N̅_ gal and mean redshift of the tomographic bin ⟨ z ⟩ for both lens and source samples, and additionally the width metrics σ_z and W_z for lens samples. In the BPZ case, we include an additional case where we select objects with odds≥0.9.
§ VARIATION WITH OTHER SURVEY PROPERTIES
In the main analysis, we investigated the trend of galaxy number and redshift distribution as a function of the i-band coadd depth. We considered the i-band depth to be most impactful because it is the detection band, and fluxes in all other bands are measured with forced photometry based on the i-band detection. However, other survey properties can also be important. For example, the u-band is important for the quality of photo-z estimation, so extreme variation in the u-band depth could cause additional scatter. The effective seeing could be another important factor, which directly impacts the signal-to-noise for extended objects.
We investigate the variation of galaxy number density and photo-z properties with these other survey properties in this section.
Table <ref> summarizes the mean and standard deviation of the coadd depth in the other five LSST bands and the median effective seeing for Y3 survey properties from Rubin OpSim baseline v3.3. The other years show a similar trend, although Y1 has a larger scatter. We see that there is a strong correlation between the i-band depth and these other survey properties. On average, a deeper i-band quantile also contains deeper coadd depth in all other five bands, as well as a smaller median effective seeing, with more scatter in the latter.
To check the dependences of other survey properties, we sub-divide each of the i-band deciles into five sub-quantiles of another survey property (such as depth in another band), and check the variation of the metrics, i.e. number of objects N_ gal, mean redshift ⟨ z ⟩, and width of the redshift bin σ_z, with these properties. As a reference, we also compute and compare the variation with sub-quantiles of the i-band depth itself.
In this section, we show two representative examples for source tomographic bins determined by the FZBoost photo-z: the sub-quantiles in coadd u-band depth and in i-band seeing, for the shallowest, median, and deepest i-band deciles: qtl=0, 5, 9.
In the results presented here, we over-plot the variation from the i-band depth sub-quantiles (as faint, dashed lines) on top of that from the other survey properties (as solid lines), for visual comparison. That is, one can read off the level of fluctuation from the deepest and shallowest u-band depth sub-bin, for example, and compare it with that from the deepest and shallowest i-band depth sub-bin.
It should be noted, however, that these reference i-band split cases have different actual x-axis values from those shown in the plots.
The results are shown in Fig. <ref> and <ref> respectively. We see that in general, these trends are consistent with the i-band depth fluctuation for all three metrics: the deeper (smaller) the depth (seeing), the more objects included in the sample, the higher the mean redshift of the tomographic bin, and the larger σ_z.
Also, qtl=0 has a significantly larger variation compared to qtl=9 in most cases.
Compared to the trends in the i-band depth sub-bins, we see that the N_ gal variations are always less strong for other properties. This is understood as selections are primarily made in the i-band. The ⟨ z ⟩ variations for the u-band tightly follow those for the i-band, although the first bin can have slightly larger fluctuations. For seeing, on the other hand, the trend is quite different for qtl=0, where the smallest seeing does not always correspond to a higher mean redshift. This could happen because the seeing is not as well correlated with depth: there is more scatter in the coadd depth and seeing at the faint end. Finally, the variation in σ_z seems to be relatively minor in most cases.
From these exercises, we see that within each i-band decile, the number of objects and p(z) properties can still change significantly with other survey properties such as u-band depth and i-band seeing. Meanwhile, given that these quantities are also quite tightly correlated, we expect that a lot of these variations are also due to the co-variation of the i-band depth. Hence, our main analysis, by splitting into the i-band quantiles, should capture the level of variations of the metrics. However, if one wishes to apply this method in e.g. forward modeling, then the co-variation of all bands needs to be taken into account.
§ MAGLIM-LIKE LENS SAMPLE
In this section, we explore the impact of variable depth on a lens sample selected with the DES Y3 MagLim cuts <cit.>. Because this sample has a brighter cut, we relax the i-band signal-to-noise limit to SNR≥ 5. The sample is selected with
17 < i < 4 z_phot + 18,
where we use the FZBoost mode redshift as z_phot. This cut reduces the size of the lens sample significantly compared to our fiducial case, resulting in a total sample size of 3.67% of the baseline (Gold cut) lens sample. The true redshift distribution of each tomographic bin is shown in Fig. <ref>, where the dashed lines show those from the shallowest quantile, and the solid lines show those from the deepest. Notice that the distribution is less smooth due to the sparsity of the sample. Overall, thanks to the bright cut, the redshift distribution for each bin has a smaller tail compared to the baseline case, especially for the highest redshift bin.
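A minimal sketch of this selection, assuming arrays of i-band magnitude, FZBoost mode photo-z, and i-band SNR (the array names are illustrative):

import numpy as np

def maglim_like_selection(i_mag, z_phot, snr_i):
    # Bright/faint MagLim-like cut with the relaxed SNR >= 5 requirement.
    return (snr_i >= 5) & (i_mag > 17) & (i_mag < 4 * z_phot + 18)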
Figure <ref> shows the metrics for the variable depth, namely, the galaxy number, mean redshift, and width of the tomographic bin, as a function of the i-band depth. Panels (a) - (d) have the same style as, and should be compared to, Figs. <ref> - <ref>. Again, we see a significantly milder, but visible, trend of these metrics with depth, owing to the bright magnitude cut. This shows that the variable depth effect can be greatly reduced, but not completely removed, by introducing a bright cut at the cost of sample size.
Figure <ref> shows the effect propagated to the galaxy clustering two-point data vector, C_ℓ^gg. We followed the same procedure as in Section <ref>, and set the number density in each bin to be 0.135, 0.117, 0.156, 0.219, 0.267 arcmin^-2 to account for the reduction in the overall number density compared to the fiducial case. The impact of variable depth on C_ℓ^gg is also significantly reduced, especially for (4,4) and (5,5). However, the impact is still not negligible at ℓ<100.
|
http://arxiv.org/abs/2409.02636v1 | 20240904115953 | Mamba as a motion encoder for robotic imitation learning | [
"Toshiaki Tsuji"
] | cs.RO | [
"cs.RO",
"cs.SY",
"eess.SY"
] |
Mamba as a motion encoder
for robotic imitation learning
Toshiaki Tsuji^1
^1Toshiaki Tsuji is with the
Dept. Electrical Engineering, Electronics and Applied Physics, Saitama University, 255 Shimo-okubo, Saitama, 338-8570
E-mail: [email protected]
September 9, 2024
=============================================================================================================================================================================================================
§ ABSTRACT
Recent advancements in imitation learning, particularly with the integration of LLM techniques,
are set to significantly improve robots' dexterity and adaptability.
In this study, we propose using Mamba, a state-of-the-art architecture with potential applications
in LLMs, for robotic imitation learning, highlighting its ability to function as an encoder that
effectively captures contextual information.
By reducing the dimensionality of the state space, Mamba operates similarly to an autoencoder.
It effectively compresses the sequential information into state variables while preserving the essential
temporal dynamics necessary for accurate motion prediction.
Experimental results in tasks such as Cup Placing and Case Loading demonstrate that
despite exhibiting higher estimation errors, Mamba achieves superior success rates compared to
Transformers in practical task execution.
This performance is attributed to Mamba’s structure, which encompasses the state space model.
Additionally, the study investigates Mamba’s capacity to serve as a real-time motion generator
with a limited amount of training data.
§ INTRODUCTION
In the field of robotic manipulation, the application of imitation learning has been rapidly advancing in
recent years <cit.>.
While conventional robot control methods relying on manual coding necessitate complex motion planning
as well as accurate modeling of the environment and objects,
imitation learning offers the advantage of intuitively and efficiently teaching tasks to robots.
Furthermore, with the recent advancements in deep learning techniques, it is becoming possible to acquire
generalized control policies adaptable to various environments in robotics.
So far, various architectures have been proposed, such as behavior cloning <cit.>,
inverse reinforcement learning <cit.>, and GAIL <cit.>.
Multimodal imitation learning that extends the available modalities <cit.> is on the rise,
and models representing complex tasks using hierarchical architectures <cit.> have the potential to extend imitation learning to complicated tasks.
Integration of force control and imitation learning is essential for enhancing the performance of contact-rich
tasks <cit.>.
As these imitation learning studies develop,
robots have the potential to attain human-level dexterity
and adaptability even in tasks that have been considered difficult to automate.
This research aims to verify the significance of using deep state-space
models (SSMs) <cit.>, particularly Mamba <cit.>,
as a model for robotic imitation learning.
The rapid development of LLMs has not only impacted language tasks but also influenced fields
such as image and speech processing, drawing attention to their potential applications in robotics <cit.>.
In robotic imitation learning, LLMs are expected to contribute to mapping language instructions to physical
actions <cit.>.
Not only that, but they have also been applied to action generation and path planning due to their versatile knowledge
representation and multimodal integration capabilities <cit.>.
While many of these studies are based on Transformers with attention mechanisms,
Mamba is also a good candidate as a model that achieves equivalent long-term memory retention.
As Mamba is based on a continuous SSM, it could exhibit better characteristics than Transformers
as a model for robotic imitation learning, which deals with sequences of continuous sensor-actuator responses.
Addressing the challenges of limited training data and the need for quick responses,
this research develops a model with a simple configuration suitable for real-time motion generation in robots.
We then examine how much capability this simplified model alone possesses as an IL motion generation model.
Previous studies have been designed on the premise of incorporating long sequences of states entirely into the model,
but to achieve both real-time responsiveness and adaptability to diverse environments, it is desirable for designers to be
able to handle and process past context and current state information separately.
Additionally, for data storage and real-time processing, it is preferable that each piece of information is appropriately compressed.
Therefore, focusing on Mamba's ability to be applied as an encoder that stores context as low-dimensional information,
we extend it into a model that assigns past contextual information to state variables.
§ RELATED STUDIES
§.§.§ Imitation Learning
Early imitation learning methods primarily focused on learning one-to-one mappings between state-action pairs.
Despite showing promising results in tasks such as autonomous driving <cit.> and robot control <cit.>,
these methods overlooked the rich temporal information contained in histories.
Therefore, numerous studies have successfully incorporated action sequences using RNNs <cit.>.
LSTMs <cit.> have been widely applied in many tasks where long-term dependencies need to be
considered.
Seq2Seq models adopting LSTM as both encoder and decoder allow for optimization of generated trajectories in latent space,
enabling dimensionality reduction of optimization problems, as their latent variables compress sequence information <cit.>.
However, limitations in memory retention have been pointed out as potentially problematic when applied to very long sequences <cit.>.
Transformers are widely applied in recent imitation learning due to their ability to model longer sequences
than LSTM while maintaining training efficiency through parallelization of sequence processing.
Having developed in language and image processing, multimodal imitation learning using Transformers to encode
both image and language sequences is being researched <cit.>, and there are also
studies in robotic applications <cit.>.
As mentioned above, the Transformer is a powerful tool for robot imitation learning due to its high memory retention performance.
We propose a model that has the same level of memory retention as the Transformer and is suitable for improving robot task performance.
§.§.§ SSM
SSMs are also promising candidates to solve the issue in robotic imitation learning.
SSMs have long been applied to time series information prediction <cit.>.
Recently, there have been many attempts to combine SSMs with deep learning.
Watter et al. proposed an SSM combining locally linear state transition models
with nonlinear observation models using VAE <cit.>.
Gu et al. proposed HIPPO, a model applying SSMs to discrete neural networks <cit.>,
and extended it to S4 <cit.> and Mamba <cit.>.
These can achieve efficiency and performance surpassing Transformers in processing long-range dependencies.
It has long been known that CNNs are effective in extracting features necessary for motion generation from environmental information,
and as Mamba is reported to possess these characteristics, it is promising as an IL policy for robots,
with examples of application to robot imitation learning already emerging <cit.>.
Jia et al. demonstrated a method to extend Mamba to an encoder-decoder variant by augmenting input with learnable action,
state, and time embedding variables, enabling efficient learning and applying it to diffusion processes <cit.>.
Previous research applying Mamba to robots has all been designed assuming complex contexts with large amounts of data,
resulting in large network structures with stacked layers of residual mechanisms.
This paper focuses on the fact that the context of robot manipulation tasks can often be represented by a smaller number of words
compared to general sentences, and reduces the dimensionality of the state space. Considering learning with a small amount of
training data, we reduce the number of Mamba layers. We construct a model with such a low-dimensional simplified configuration
and examine how much capability it possesses as an IL motion generation model on its own.
Additionally, the dimensionality reduction of the state space means that the amount of contextual information can be reduced.
Utilizing this feature, we improve the model to generate actions using only the compressed contextual information and recent
state sequences.
§.§.§ Autoencoder
The model adopted in this research can be interpreted as utilizing Mamba as an autoencoder, given its low-dimensional SSM configuration.
This approach bears relevance to previous works employing autoencoders in robotics, such as <cit.>.
Some of these studies also have architectures that provide inputs to latent variables <cit.>.
The most significant difference of our study to these is the CNN and the selection mechanism.
These features are expected to enable selective extraction of crucial information while mitigating performance degradation in long sequences.
§ PROPOSED METHOD
§.§ Architecture of State Space Model
The machine learning model implemented in this research is shown in Fig.<ref>(a).
Here, x_t and y_t are the input and output vectors at time t, respectively, and T is the length of the sequence.
Mamba traditionally inputs sequence information vectorized by a tokenizer into multiple overlapping Mamba layers, and then converts it back to words for output. In this research, the tokenizer is replaced with a linear layer because continuous
data is given for input.
Additionally, the input dimension to the Mamba block is reduced by this linear layer. The computation results of the low-dimensional Mamba block are then increased to the same dimension as the input through a linear layer for output.
To predict future trajectories, the output time is advanced compared to the input. Also, to avoid performance degradation due to padding in the Convolution layer, the input layer is made longer by the kernel length. While not strictly identical, the input and output are nearly equivalent to an autoencoder structure.
The configuration of the Mamba block is shown in Fig. <ref>.
It features a residual network with skip connections that perform identity mapping
(see the bottom arrow in Fig.<ref>), similar to the model in <cit.>,
but modified for application in robot trajectory generation.
Previous studies applying Mamba to robots <cit.> have made Mamba blocks
handle features enhanced by encoders or signal processing processes on input and output data.
In contrast, this research adopts a simpler model configuration consisting solely of Mamba blocks.
The input consists of 16-axis time series data:
8-axis angles and 8-axis torques corresponding to joint motors of the robot.
To simplify the model, the 3 degrees of freedom of posture are assumed to be fixed in this study.
The internal structure of the residual network consists of convolutional layers, SSM, and FNN layers.
The state variable h_t and output y_t of SSM at time t are derived using the
following equations.
h_t = A̅ h_{t-1} + B̅ x_t,
y_t = C h_t,
A̅ = exp(Δ A),
B̅ = (Δ A)^{-1} (exp(Δ A) - I) · Δ B.
In these equations, A, B, and C represent the state transition matrix, input matrix,
and output matrix, respectively, with the superscript bars indicating the parameters of the discretized equations.
x is the input to SSM, and the subscript t denotes the time step.
The parameters B, C, and Δ are variable and adjusted by an FNN through a Selection Mechanism.
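The discretization and recurrence above can be sketched in a few lines of NumPy; this schematic omits the selection mechanism (the input-dependence of B, C, and Δ) as well as the convolutional and gating layers, and the matrix shapes are assumptions.

import numpy as np
from scipy.linalg import expm

def discretize(A, B, delta):
    # Zero-order-hold discretization: A_bar = exp(dA), B_bar = (dA)^{-1}(exp(dA) - I) dB
    dA = delta * A
    A_bar = expm(dA)
    B_bar = np.linalg.solve(dA, A_bar - np.eye(A.shape[0])) @ (delta * B)
    return A_bar, B_bar

def ssm_step(h_prev, x_t, A_bar, B_bar, C):
    h_t = A_bar @ h_prev + B_bar @ x_t   # low-dimensional state update
    y_t = C @ h_t                        # output at time t
    return h_t, y_t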
§.§ Design of the proposed method
The configuration of SSM is based on <cit.>, while the dimensionality of the SSM is reduced
based on the assumption that the variation in robot movements is limited compared to the highly diverse patterns in natural language.
In our study, the SSM dimensionality was set to 4, achieving characteristics similar to an autoencoder.
In addition, this study shows that the performance of motion generation is sufficient even in an extremely simplified configuration.
The number of input and output linear layers n_o, n_i and the number of Mamba blocks n are 1, 3 and 1, respectively.
While Mamba implements functionality akin to the Attention mechanism
by incorporating a gated MLP (gMLP) within its block,
this study excludes gMLP due to its tendency to cause overfitting and unstable learning.
Most of the continuous-time theories in control and signal processing are based on the assumption that all past responses influence
the present, so this modification is not unreasonable for imitation learning in robotics.
The residual network outputs the estimated values of 16-axis position and force data N samples ahead.
In this study, the angles and torques of each joint are divided into 8 axes each.
During training, the time series data of position and force are inputted for each sample,
and the estimated position and force data are compared with the post-sample training data to obtain the error.
The loss is calculated from this error, and the model is trained through backpropagation.
Mamba has convolutional layers close to the input, and SSM recursively uses the low-dimensional information of
past histories as input for the next step, thus exhibiting features of both CNN and RNN.
Also, previous studies have provided the entire input sequence at once, while the model in this paper,
as shown in Fig. <ref>(b), is structured to provide the input sequence only for a certain period.
Here, w is the length of the short period input.
State variables generated in past steps are input to reflect the context before the input period in motion generation.
Since state variables are low-dimensional compared to input, the amount of information to be preserved is reduced.
It is often a problem in RNNs that their performance degrades in tasks that require consideration of long-term dependencies.
This challenge can be overcome by setting the matrix A appropriately.
As shown in Eq. (<ref>), matrix A is the main factor to determine the time constant of memory retention.
When the matrix A is learned as a neural network parameter, the time constant of memory retention
is learned to minimize the error function.
If designers want to assign the memory time constant arbitrarily, matrix A can be given as a fixed value.
Although there is a trade-off between the length of memory retention and
the frequency bands of extractable features,
it is possible to balance both by comprehensively setting high and low time constants
since the eigenvalues of matrix A can be set multiple times.
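As a hedged illustration of fixing the memory time constants, the sketch below builds a diagonal A whose eigenvalue magnitudes are spaced by a constant ratio (0.4 in the experiments reported below); the maximum magnitude and the negative-sign convention (so that exp(ΔA) decays) are assumptions rather than values taken from the implementation.

import numpy as np

def fixed_diag_A(n_state=4, a_max=2.5, ratio=0.4):
    # Eigenvalue magnitudes a_max, 0.4*a_max, 0.4^2*a_max, ... span several
    # memory time constants; larger magnitudes forget faster.
    magnitudes = a_max * ratio ** np.arange(n_state)
    return np.diag(-magnitudes)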
§ EVALUATION
§.§ Robot Configuration
Training data was collected through bilateral control of an 8-degree-of-freedom robot, as shown in Fig. <ref>.
The performance of imitation learning was evaluated using the follower robot.
The robot employed in this study was the CRANE-X7 by RT Corporation, which features a 7-axis manipulator
with a 1-axis gripper at the end. A single control PC was used to calculate inputs for both robots, performing
bilateral control for each axis in joint space.
The control PC communicated with a machine learning PC equipped with a GPU via UDP, with operating cycles
of 100 ms and 2 ms, respectively.
The kernel size of the convolutional layers was set to 4, the loss function used was MSE, and the learning rate was set to 0.001. To prevent overfitting, training was halted when the loss function appeared to start increasing.
§.§ Evaluation Method
This paper presents offline and online evaluations of motion imitation learning, respectively.
The offline test aimed to quantitatively evaluate the model's trajectory prediction performance considering long-term dependencies.
Motions were generated using a model trained with training data obtained from the experimental system, and errors in the predicted
output were evaluated when test data from the experimental system was given as input.
We evaluated the model on three tasks of increasing complexity: simple up-and-down movements, cup placement after grasping, and case loading.
The up-and-down motion was first repeated for 10 seconds.
Next, the task was repeated twice and then stopped.
In the former trial, the next trajectory can be predicted based only on the short-term memory, while in the latter trial, long-term memory
is required to predict the stopping after repeated movements.
In addition, we tested cup placing and case loading as tasks that simulate actual work.
In the former task, the robot grasped an empty 7-cm-diameter, 9-cm-high cup placed in a fixed position, and we evaluated whether
the cup was placed in a specified 12-cm-square area without tipping it over.
The latter evaluated whether the robot could grasp a paper box 21 cm long, 3 cm wide, and 8 cm high, and place it in an upright position in a 22-cm-wide case.
Since the clearance between the case and the box is 1 cm, the box often contacts the case during training, requiring the instructor to adjust the position accordingly.
Therefore, the task has training data with a higher variability of motion.
The robot in this study was not equipped with a vision system, so we reproduced a situation in which the position of the object was almost
exactly known by placing the object in position with an error of 5 mm or less.
Hyperparameters of LSTM and Transformer were determined with reference to the studies of
<cit.>, respectively, which have the closest control system configuration.
For all the tasks, the number of trials for training and test data were 18 and 6, respectively.
§.§ Offline Evaluation
The errors of each model are compared in Table <ref>.
Here, in order to use the same evaluation as the error function, the error combines information in different units, angle (rad) and torque (Nm).
The Transformer model with full period input consistently demonstrated superior performance across all tasks,
whereas the Transformer with short period input showed performance degradation due to the unavailability of past history information.
This is supported by the fact that the degradation of the short period input transformer is relatively low for repetitive up-and-down movements, which do not require historical information.
Although the RMSE of Mamba was not as small as that of the Transformer with full period input, it was confirmed that a similar performance can be obtained with only short period input and contextual state variables.
Next, we examined the differences when the diagonal elements of matrix A were learned by a neural network versus when they were fixed.
Fig. <ref> shows the time responses of the state variable h_t for Mamba with w=20 under different A matrices.
To confirm whether feature values can be obtained from state variables as work progresses, arrows were added to indicate timings of work progression such as posture alignment, approaching the cup, grasping, and releasing.
In all examples, the diagonal elements were given values multiplied by 0.4 from the maximum value to ensure uniform variation between axes of A.
For the autoencoder to perform sufficiently well, it is desirable for the SSM, which corresponds to latent variables, to be dispersed throughout the space. Also, to extract task features with high performance, it is desirable for state variables to fluctuate in conjunction with characteristic input signals.
When the elements of A were small, it was confirmed that the SSM response values varied greatly over time. However, there was significant bias in the spatial distribution, and it was confirmed that the slope of the state variables was almost uniform, with changes dependent on input being small compared to the overall response values.
When the elements of A were large, the SSM values did not change significantly depending on time. In the results where the maximum values of A elements were 0.5 and 2.5,
the state variables were widely distributed in space and changed as the work progressed.
In other words, to extract low-dimensional features from input information into the state space, the elements of matrix
A need to be given appropriately to achieve suitable time constants.
Fig. <ref> compares the RMSE when the diagonal elements of matrix A are given as fixed values.
When matrix A is too large or too small, the prediction accuracy of robot motion deteriorates, but when the elements of
A are set within an appropriate range, high prediction accuracy is obtained.
The red dotted line represents the average prediction accuracy of 0.451±0.216 when the diagonal elements of matrix
A were learned as parameters of the neural network. When appropriate fixed values were given, the prediction accuracy
was better compared to when adjusted by machine learning.
This result suggests the advantage of giving matrix A as fixed values rather than learning it with a neural network,
especially when the amount of training data is limited, because it results in a model with lower degrees of freedom.
§.§ On-line evaluation
Table 2 shows the success rates when each task was performed 20 times on the actual machine.
To mitigate the substantial oscillations observed in the Transformer's output,
a first-order LPF (low-pass filter) with a time constant of 0.3 seconds
was applied to the output to suppress vibrations, and the filtered values were given to the robot as command values to improve
the task success rate. As a result of this adjustment, the Transformer with full period input showed the smallest output RMSE and demonstrated
high success rates in actual machine tasks. However, a certain percentage of failures occurred in both Cup Placing and Case Loading tasks.
The main factors for failure were dropping of grasped objects due to vibration and unexpected contact with the environment
that was not present during training. These issues were more pronounced in the Transformer with short period input, resulting
in further decreased success rates.
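For reference, a minimal sketch of the first-order low-pass filtering applied to the Transformer command values is given below (time constant 0.3 s); the command update period dt is an assumption, and the filter is applied independently to each command axis.

def lowpass(commands, dt, tau=0.3):
    # Discrete first-order LPF: y_k = y_{k-1} + (dt / (tau + dt)) * (u_k - y_{k-1})
    alpha = dt / (tau + dt)
    y = commands[0]
    out = [y]
    for u in commands[1:]:
        y = y + alpha * (u - y)
        out.append(y)
    return out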
On the other hand, when performing tasks using Mamba, there was almost no vibration, resulting in a 100% success rate for
Cup Placing, which has a large tolerance for position deviations. In Case Loading, there were two instances where improper contact
with the case prevented accurate placement in the desired location, but the overall success rate was higher than that of the Transformer.
This is thought to be due to the minimal vibration in Mamba's output, despite its RMSE being larger than the Transformer's.
Transformers are known to generally have weak inductive bias, which often brings positive effects.
However, in robot motion generation, this characteristic does not guarantee output continuity and tends to cause vibrations.
The reason why Mamba does not generate vibrations is that its SSM has a structure that integrates state variables,
incorporating the regularity of continuous output changes into the model.
This functions as an inductive bias, enabling efficient learning from data.
In robot trajectory generation, continuity based on dynamic constraints is crucial, so the inductive bias provided by SSMs
like Mamba can have a positive effect on robot imitation learning.
Furthermore, to evaluate real-time motion generation capabilities, Mamba's computation time was reduced to 25 ms for this experiment.
As a result of the sampling time being reduced to one-fourth of the original, motions were generated four times faster.
Although this acceleration increased vibrations due to dynamics inconsistencies and raised the failure rate,
the overall decrease in success rate was minimal, confirming Mamba's real-time adaptability and stability.
§ LIMITATION
The model proposed in this research generates motion based on state variables and
short-term sequences rather than using all sequences as input.
While this approach effectively reduces the network structure and the scale of data
that needs to be retained, it may impact the Selection Mechanism.
When all sequences are inputted, the Selection Mechanism captures long-term dependencies
across all inputs and filters out unnecessary information.
In the proposed model, information is filtered based on the values of state variables given
as past context and the input from a limited period.
The experimental results in this paper suggest that this does not hinder task performance;
however, the impact may vary depending on the task being applied, necessitating further investigation.
This study presents a modified version of Mamba, excluding gMLP, which has demonstrated high performance in experiments.
However, it remains unclear to what extent the exclusion
of gMLP offers advantages. The tasks addressed in this research are relatively simple and can be realized
within a context similar to System 1 of dual-process theory, which requires less complicated reasoning.
For tasks requiring more advanced reasoning, akin to System 2, it is likely that gMLP could be necessary.
Therefore, it is essential to investigate the relationship between the amount of training data, the anticipated
modalities, the tasks to be imitated, and the appropriate model design, for developing design principles.
§ CONCLUSION
This study demonstrates that Mamba offers significant
advantages for robotic imitation learning.
By reducing the dimensionality of the state space, Mamba functions effectively as an autoencoder,
compressing past contextual information into low-dimensional latent variables.
This characteristic enables efficient learning and robust performance in motion generation.
The experiments revealed that, despite a higher RMSE compared to Transformers,
Mamba consistently achieved higher success rates in robotic tasks, owing to its ability to suppress
vibrations and maintain continuity in output trajectories. This highlights the importance of inductive
bias in robotic systems, where the integration of state variables within a low-dimensional SSM
framework contributes to more reliable and stable task execution.
Furthermore, Mamba's ability to generate real-time motions with reduced computation time underscores
its potential for deployment in dynamic environments where rapid and adaptive responses are critical.
These findings suggest that SSMs like Mamba, with their built-in structural advantages,
are well-suited for advancing the capabilities of robot imitation learning, providing a strong foundation for future developments in this field.
|
http://arxiv.org/abs/2409.02347v1 | 20240904002457 | Understanding the Role of Functional Diversity in Weight-Ensembling with Ingredient Selection and Multidimensional Scaling | [
"Alex Rojas",
"David Alvarez-Melis"
] | cs.LG | [
"cs.LG"
] |
Understanding the Role of Functional Diversity in Weight-Ensembling with Ingredient Selection and Multidimensional Scaling
Alex Rojas^1
David Alvarez-Melis^1,2
^1Harvard University, Cambridge, MA, USA
^2Microsoft Research, Cambridge, MA, USA
Correspondence to: Alex Rojas ([email protected])
Machine Learning, ICML
§ ABSTRACT
Weight-ensembles are formed when the parameters of multiple neural networks are directly averaged into a single model. They have demonstrated generalization capability in-distribution (ID) and out-of-distribution (OOD) which is not completely understood, though they are thought to successfully exploit functional diversity allotted by each distinct model. Given a collection of models, it is also unclear which combination leads to the optimal weight-ensemble; the SOTA is a linear-time "greedy" method. We introduce two novel weight-ensembling approaches to study the link between performance dynamics and the nature of how each method decides to apply the functionally diverse components, akin to diversity-encouragement in the prediction-ensemble literature. We develop a visualization tool to explain how each algorithm explores various domains defined via pairwise distances to further investigate selection and the algorithms' convergence. Empirical analyses offer perspectives that reinforce how high diversity enhances weight-ensembling while qualifying the extent to which diversity alone improves accuracy. We also demonstrate that sampling positionally distinct models can contribute just as meaningfully to improvements in a weight-ensemble.
§ INTRODUCTION
Model ensembling plays a crucial role in enhancing the performance and robustness of machine learning models. Combining the information learned by models pre-trained (or fine-tuned) with different configurations or on different tasks can reduce overfitting to any particular hyperparameter or dataset choice, leading to better generalization <cit.>. This traditional approach to model ensembling relies on averaging the predictions of the various models, which suffers from the high memory footprint and inference-time computational cost of running many modern neural networks.
Recent work by <cit.> demonstrates a promising alternative. The “model-souping" approach directly averages the parameters of a host of models in weight-space. This weight-ensembling operation results in a single weight-average model (the WA), addressing many of the computational limitations of prediction-based ensembling. The highly non-convex neural network loss landscape suggests that this approach should fail; yet, the linear mode connectivity (LMC) property of such landscapes <cit.> demonstrates that the interpolation between models sharing a stable region of a weight-space training trajectory remains in a low-error region. An example includes when minima have been fine-tuned from a shared foundation model; minima in this setting reside in convex low-loss basins where weight-ensembling does not incur significant loss barriers <cit.>.
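For concreteness, the weight-ensembling operation itself amounts to a uniform average of parameters; a minimal PyTorch-style sketch (assuming the models share an identical architecture, and ignoring special handling of non-float buffers) is:

import torch

def weight_average(models):
    state_dicts = [m.state_dict() for m in models]
    avg = {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
           for k in state_dicts[0]}
    return avg  # load into a fresh model via model.load_state_dict(avg)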
Weight-ensembling literature typically assumes access to multiple models of identical architecture fine-tuned from a shared initialization, though fine-tuning hyperparameters may vary. Prior work relies on a “greedy" approach to construct the ensemble, whereby models (the “ingredients”) are considered only once to be sequentially added to the “soup” based on validation set accuracy of a candidate WA; ingredients are individually thrown out if candidate performance declines <cit.>. These greedy WA models have shown remarkable performance across
complex tasks such as ImageNet <cit.> and DomainBed <cit.>, as shown by <cit.> and <cit.> respectively.
While the empirical success of weight-ensembling is linked to its discovery of flatter regions of the loss landscape <cit.> and its incorporation of functionally diverse ingredients <cit.>, the mechanics of how the greedy algorithm elicits these phenomena are not clear. Additionally, it is unknown if the greedy algorithm is particularly well-suited to find diverse ingredient-sets, or whether there exist other methods better suited to this goal.
In this work, we study the link between functional diversity and the performance of various weight-ensembling methods to further understand the effectiveness of weight-ensembling. Grounded by discussion of existing bias-variance-diversity decompositions of prediction-ensemble error and subsequent analysis of weight-ensemble error, we can renegotiate how this bias-variance-diversity tradeoff is understood for weight-ensembling. To that end, we can reason about the utility of promoting several quantitative diversity measures by characterizing how three weight-ensembling algorithms leverage these relationships and perform. Two are novel: “greedier" serves as a costly, optimal benchmark for comparison to others, while “ranked" plays the role of a diversity-encouraging mechanism; “greedy" is adopted from the literature <cit.>, <cit.>. In summary, our contributions are:
* We develop the “greedier” algorithm for weight-ensembling, which has the flexibility at each iteration to add any model to the set of ingredients. Although computationally costly, greedier serves as a benchmark due to its optimality.
* We develop the “ranked” algorithm for weight-ensembling, which considers ingredients in order of decreasing diversity from the current WA and greedily selects the first candidate which improves performance
* We empirically show how two notions of diversity robustly explain differences in the selection mechanism of the “greedier” and “greedy” algorithms. The advantageous performance of greedier implies that the presence of such distances helps improve WA accuracy efficiently. However, the most diverse selections made by the diversity-encouraging “ranked” algorithm perform less optimally than “greedier,” limiting the extent to which selecting for maximal diversity is useful.
* We introduce a form of qualitative visualization that provide additional insights on the connection between these weight-ensembling algorithms and loss landscapes.
§ RELATED WORK
Traditional prediction-ensembling theory depends on the harmonious combination of distinctive predictive mechanisms to reduce generalization error. Ensemble error was first shown to decompose into the average error of the ensemble members minus the ensemble ambiguity; the ambiguity is a positive value reflecting the variance of ensemble-member predictions around the ensemble prediction, serving as an early quantification of diversity <cit.>. By maximizing this notion of diversity, practitioners hoped to reduce generalization error, motivating the next stage of deep ensembling approaches.
Empirical navigations seeking to maximize diversity within this bias-variance-diversity tradeoff are abundant. <cit.> prediction-ensemble independently initialized and trained neural networks, implicitly leveraging the diversity derived from the stochastics in those steps to reduce generalization error. <cit.> later describe how well-trained samples from two distinct such modes exhibit more diversity than do distinct samples from a single trajectory of training. Resulting from the strong performance of deep ensembles, approaches to further explicitly encourage diversity between ensemble members have ensued. For example, incremental improvement was demonstrated through ensembling over members trained using carefully selected hyperparameter ranges instead of shared hyperparameters <cit.>. Additionally, diversity-encouraging regularization of the loss during the joint training of ensemble members was explored in <cit.> and <cit.>.
Bias-variance-diversity decompositions were generalized for arbitrary loss functions which admit bias-variance decompositions <cit.>. While previously, such a decomposition had motivated the ensembling of diverse low-bias neural networks, <cit.> underscore that much like the traditional bias-variance decomposition, the bias-variance-diversity decomposition must be examined as a tradeoff that must be carefully managed; simply maximizing diversity may have the negative externality of degrading individual-model performance and the ensemble itself by extension.
Recently, <cit.> provides a more thorough examination for shallow and deep networks. Deeper architectures trained on a loss that regularizes the joint member task-loss by rewarding diversity have run up against a member-performance member-diversity frontier. In this case, increasing diversity degrades individual member performance by increasing bias and thus adversely impacts the bias-variance-diversity tradeoff both in-distribution and out-of-distribution.
Given that the weight-ensembling results in one fixed mode for inference, how might error be reduced through the machinations of diversity? <cit.> prove a bias-variance-diversity tradeoff for weight-ensembles by applying a first-order approximation of the prediction-ensemble with the weight-ensemble – although this approximation decays under a quadratic locality asymptote. The authors then encourage diversity by fine-tuning models from a shared initialization, varying hyperparameters; the resulting weight-ensembles achieve state-of-the-art at the time on OOD tasks. Maximizing the average pairwise diversity between ingredients is credited for the success of the approach. But, the notion of the average pairwise diversity between members is disjoint from diversity of members from the ensemble prediction; the latter idea characterizes the ambiguity terms consistent with ensemble theory. Taken together, existing weight-ensemble analysis lacks a careful navigation of the bias-variance-diversity error decomposition as a tradeoff. With similar motivation to <cit.> 's work in prediction-ensembling, our work thus seeks to intricately analyze the balance of the bias-variance-diversity tradeoff in the case of weight-ensembles.
§ METHODOLOGY
§.§ Distance Measures Between Models
Functional diversity, as measured by the ratio-error <cit.> was claimed in <cit.> to be the driving force behind the improvements of weight-ensembling over standard (SGD-found) models. This stemmed from analysis claiming that functional diversity decorrelates model predictions, with the latter being a term residing in a bias-variance-covariance decomposition of generalization error. They also demonstrate a positive correlation of the average pairwise diversity of a set of models with the accuracy gain of the corresponding WA over the mean accuracy over the individual ingredients, a statistic which does not directly imply that there is a lack of functional redundancy in the collection, only in the average case of a pair. For this reason, when analyzing weight-ensembling algorithms iteration-by-iteration in this work, we also pay heed to the diversity between a selected candidate ingredient and the current WA (in the context of unselected ingredients' distances). Less lossily than average pairwise diversity, we analyze these pairwise relationships jointly in the visualization method.
Definition 2.1. We use the convention of ratio-error <cit.> to measure diversity following <cit.>. For models θ_A and θ_B, where N_uns and N_sha refer to the number of unshared and shared errors on a labelled dataset, we refer to the diversity distance as d_D (θ_A, θ_B) = N_uns/N_sha
Motivated by the finding of loss basins in fine-tuning <cit.>, convex regions in which models sharing initialization remain essentially linearly mode connected, we also use a Euclidean geometry-inspired measure to determine the extent to which specific weight-space geometry can explain weight-ensembling approaches. The Euclidean metric allows us to explore how different weight-ensembling traverse the basin and evaluate whether sampling candidates different parts of a loss basin sufficiently improves the WA, recasting the pursuit diversity as a weight-space traversal question.
Definition 2.2. As such, we define the Euclidean distance between two neural network parameters θ_A, θ_B ∈Θ to be d_E(θ_A, θ_B) = || vec(θ_A) - vec(θ_B) ||_2^2
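A small sketch of both measures, assuming access to per-example predictions on a labelled set (for d_D) and flattened parameter vectors (for d_E); array names are illustrative.

import numpy as np

def diversity_distance(pred_a, pred_b, labels):
    # Ratio-error d_D = N_uns / N_sha (assumes at least one shared error).
    err_a, err_b = pred_a != labels, pred_b != labels
    return np.sum(err_a ^ err_b) / np.sum(err_a & err_b)

def euclidean_distance(theta_a, theta_b):
    # d_E: squared L2 distance between flattened parameter vectors.
    diff = theta_a - theta_b
    return float(diff @ diff)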
§.§ The Greedy Weight-Ensembling Technique
The greedy souping method <cit.> sorts the individual models by decreasing validation accuracy. Starting from the single highest-accuracy model, we sequentially consider adding the remaining models, only adding to the ingredients list when a candidate WA improves training-domain validation accuracy and otherwise throwing out the failed candidate in a linear pass.
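A minimal sketch of this linear pass; average (uniform weight-averaging of the ingredients) and val_acc (held-out ID validation accuracy) are assumed helper functions standing in for the actual implementation.

def greedy_soup(models, val_acc, average):
    models = sorted(models, key=val_acc, reverse=True)
    ingredients = [models[0]]
    best = val_acc(average(ingredients))
    for m in models[1:]:                       # one pass, in accuracy order
        cand = val_acc(average(ingredients + [m]))
        if cand > best:                        # keep only if the candidate WA improves
            ingredients.append(m)
            best = cand
    return average(ingredients)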
§.§ A Greedier Weight-Ensembling Technique
New to this work, the greedier weight-ensembling algorithm also initializes to the top validation-accuracy ingredient. At each step, we consider the inclusion of every remaining ingredient to the set, adding to the set the candidate whose WA performed best, provided it outperforms the current set's WA accuracy. If no candidate set's WA has outperformed the current set WA's accuracy, the algorithm terminates. See Algorithm <ref> for granular details. The core difference between the algorithms is that instead of the greedy algorithm's one single linear pass through the models sorted by individual performance, the greedier algorithm can add models in any order if they still contribute positively to the soup. The similarity is that both algorithms initialize the ingredients list to the maximal performing model.
While this algorithm has a costly runtime, it serves to illuminate the measures which maximally explain the selection mechanism of the algorithm. This will help diagnose what drives the selection of new ingredients to understand what relationships between ingredients and the current WA contribute to maximal improvements in the WA. Treating greedier as a “gold-standard" benchmark also allows us to correlate other algorithms' selection mechanisms with their performance characteristics.
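A sketch of the greedier loop under the same assumed helpers as above; note the quadratic number of candidate evaluations relative to greedy's single pass.

def greedier_soup(models, val_acc, average):
    models = sorted(models, key=val_acc, reverse=True)
    ingredients, remaining = [models[0]], list(models[1:])
    best = val_acc(average(ingredients))
    while remaining:
        scores = [val_acc(average(ingredients + [m])) for m in remaining]
        j = max(range(len(scores)), key=scores.__getitem__)
        if scores[j] <= best:                  # no candidate improves the current WA
            break
        ingredients.append(remaining.pop(j))   # add the best-performing candidate
        best = scores[j]
    return average(ingredients)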
§.§ Ranked Weight-Ensembling
The next weight-ensembling algorithm introduced here, the ranked algorithm, initializes identically to previous algorithms. At each step, we sort remaining ingredients by decreasing distance from the current set's WA and proceed through these rankings considering the addition of each ingredient to the set individually. The first ingredient whose candidate set improves training domain accuracy is accepted, and the rejected models are stashed for the next iteration. See Algorithm <ref>. The intent of this method is to make salient the effect of biasing diversity into the selection mechanism, which allows us to tease out benefits which stem from the inclusion of diverse candidates and less of the confounding factors which may affect the greedier and greedy algorithms. Furthermore, considering the diversity between the current weight-ensemble and new ingredients, instead of between the ingredients as in <cit.>, is a more principled parallel to bias-variance-diversity decompositions in which the ambiguity terms reflect variance around ensemble predictions, not between members <cit.>.
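A sketch of the ranked variant, where distance is either d_D or d_E evaluated between the current WA and a candidate, and the other helpers are assumed as above.

def ranked_soup(models, val_acc, average, distance):
    models = sorted(models, key=val_acc, reverse=True)
    ingredients, remaining = [models[0]], list(models[1:])
    best, improved = val_acc(average(ingredients)), True
    while remaining and improved:
        improved = False
        wa = average(ingredients)
        remaining.sort(key=lambda m: distance(wa, m), reverse=True)  # most distant first
        for i, m in enumerate(remaining):
            cand = val_acc(average(ingredients + [m]))
            if cand > best:                    # accept the first improving candidate
                ingredients.append(remaining.pop(i))
                best, improved = cand, True
                break
    return average(ingredients)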
§ EXPERIMENTS
§.§ Experimental Setting
We adopt the DomainBed setting <cit.> used by <cit.>, honing the focus of our experiments to the OfficeHome dataset <cit.>, a domain generalization dataset containing four test environments. We refer the reader to Section 3.2 of <cit.> which established the setting adopted here, including the random initialization fine-tuning setup for ResNet50
<cit.> trained on ImageNet <cit.> as a foundation model, varying hyperparameters and randomness to obtain distinct fine-tuned models. Holding out one test environment at a time as the OOD set, we run ten trials of fine-tuning on the three ID environments, producing 40 models per trial. After the fine-tuning, we run the greedier, greedy, and ranked weight-ensembling algorithms. For the ranked method, we run two versions, one which ranks ingredients by ratio-error which we call “diversity-ranked” and the other using Euclidean distance called “Euclidean-ranked.”
§.§ Weight-Ensemble Algorithm Performance Results
To compare the performance of the methods, for each trial in all test environments we calculate the difference in accuracy at each iteration from the other weight-ensembling algorithms to the greedier method, visualizing the average difference in Figure <ref> beginning when each WA has 2 ingredients. This amounts to benchmarking the performance of the other methods relative to the greedier method. As some algorithms terminate before the ultimate time step t=39, we propagate the terminal accuracy value forward through time in order to still be able to run our aforementioned calculation for subsequent time steps. Initially, the increasingly negative values show that the greedier algorithm gains accuracy faster than the ranked and greedy methods at the outset. As more ingredients are included past t=4, ranked and greedy methods start to recover, with ranked methods suffering fewer losses and rebounding faster. This rebound occurs after many greedier runs have flatlined due to termination. Both ranked performances are similar, initially falling behind greedier but slowly recovering later on; greedy follows a similar trajectory, thus demonstrating that the greedier algorithm utilizes fewer ingredients effectively. ID validation (“training") accuracy for the non-greedier methods finishes below the accuracy of greedier at a statistically significant level, while for OOD accuracy (“testing"), greedy closes below greedier at a statistically significant level, with the ranked algorithms' intervals just barely enveloping the performance of greedier in the upper bound.
§ EXPLAINING THE ROLE OF DIVERSITY
§.§ Distributions of Quantiles over Selected Models
We next probe the extent to which distance measures (between the current WA and the candidate, calculated ID when prediction is necessary) were associated with each algorithm's selection mechanism by binning the quantiles of distances of selected candidates to the current WA at each iteration time t. Since both greedier and ranked have a discrete host of models from which they can add a candidate at each step (but only select one), we first bin the quantile of the distance instead of the distance itself to make a well-posed statement about whether the more or less distant ingredients at some step were selected. For example, if θ_j had been the second least diverse from the current WA and had gotten selected for inclusion at time t, we would have binned 2/Num Remaining at time t. We can thus view each addition of a candidate as the discrete choice which was most useful to the WA. A similar procedure may be repeated for the greedy algorithm, where we only advance t in the figure when candidates are actually accepted by greedy; if a candidate is skipped (getting thrown out for the remainder of the procedure), we do not advance t because in this visualization we are interested in the distances of selected ingredients from the current WA.
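Concretely, the binned quantity at each step is the rank of the selected candidate's distance among all remaining candidates, divided by the number of remaining candidates; a small sketch (array names are illustrative):

import numpy as np

def selected_quantile(dists_to_wa, selected_idx):
    ranks = np.argsort(np.argsort(dists_to_wa)) + 1  # 1 = least distant candidate
    return ranks[selected_idx] / len(dists_to_wa)    # e.g. 2 / Num Remaining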
The distributions of the diversity-quantiles of selected models over time for each method is visualized in Figure <ref>, with mean trends compared side-by-side in Figure <ref> of the Appendix. Over the greedier algorithm's first few iterations t=1 to t=4, we see that the distribution of quantiles is skewed to select ingredients which are more-diverse than random chance, reinforced by the confidence interval in Figure <ref>. This comes in direct opposition to the greedy algorithm, which specifically selects less-diverse candidates from its first iteration t=1 up to iteration t=7 as seen in Figure <ref>. As expected, the ranked-diversity algorithm tends to select ingredients which are most diverse from the current WA.
Contextualizing early-stage diversity-quantile results with the ID and OOD accuracy gains that greedier and ranked make relative to greedy between t=2 and t=4 in Figure <ref>, this shows a clear association between selecting the most diverse points and rapid accuracy improvement. Yet, the algorithm designed to select the most diverse candidates, ranked-diversity, still underperforms the greedier method. This comes in spite of ranked-diversity having ingredient models which have the highest average pairwise diversity, the proxy of diversity used in <cit.>, as we see in Figure <ref>. Although ranked-diversity's selection of more diverse ingredients seen in Figure <ref> led to ranked-diversity WAs having the highest average pairwise diversity as seen in Figure <ref>, the fact that the greedier method outperforms the ranked-diversity method while having a less diversity-inducing selection mechanism indicates that the greedier method benefits from some force beyond what is provided for by diversity. Diversity correlates with the benefits realized by the greedier algorithm, but it does not quite encapsulate the full power of the greedier algorithm because building it into the selection mechanism with ranked-diversity does not achieve competitive accuracy. That in the latter stages past t=5 of the greedier routine, both the diversity-quantiles of the ingredients that we select and the average pairwise diversity of the accepted WA seem to have saturated is further evidence that the benefits of diversity are capped.
The quantiles of selected Euclidean distances are given by Figure <ref> in the Appendix. In the figure, we observe an even stronger association between high-quantile Euclidean distance and selection by our greedier algorithm, with each boxplot lying well above random chance up to t=4, when greedier is making gains on the other algorithms' accuracies. The result is correlated with diversity selection, although clear differences in the ranked-diversity and ranked-Euclidean distributions in Figures <ref>, <ref> indicate some decoupling between selecting for diversity and selecting for Euclidean distance. The strong association of Euclidean distance and greedier decision-making demonstrates that sampling far-apart ingredients in a loss basin can be just as powerful as selecting for diversity when it is quantified by ratio-error.
§.§ Dynamics of Errors
In Section <ref>, we concluded that diverse ingredients are a beneficial component for a WA, although WAs may improve through other means and the benefit stemming from diversity is likely to saturate relatively quickly. In this section, we analyze the dynamics of how the errors that WAs make evolve as ingredients are added to the WA. The comparison of such dynamics across algorithms will make conspicuous the direct benefits and limitations of diversity for ID and OOD prediction, and how the greedier approach may exploit new ingredients better than ranked-diversity.
To this end, at each time step t (starting with t=1 with 2 ingredients) we split the ID and OOD sets disjointly into four sets of data points: 1) points which the WA at time t had classified incorrectly but the time-t selected ingredient (not yet included in the latter WA) classified correctly, denoted “t-incorrect ingredient-correct", and similarly the disjoint sets 2) “t-correct ingredient-incorrect", 3) “t-correct ingredient-correct", and 4) “t-incorrect ingredient-incorrect". For each of these sets we are interested in the probability for some data point that the ingredient's outcome takes hold in the t+1 WA: such as for 1) the probability that the time t+1 WA is correct given that the time t WA was incorrect and the ingredient was correct.
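A sketch of this bookkeeping for the first set, given boolean correctness arrays over a dataset (the array names are assumptions):

import numpy as np

def flip_probability(wa_t_correct, ing_correct, wa_t1_correct):
    # P(WA_{t+1} correct | WA_t incorrect, ingredient correct)
    mask = (~wa_t_correct) & ing_correct
    return float(np.mean(wa_t1_correct[mask])) if mask.any() else float("nan")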
As in previous analysis, given a weight-ensembling method and experimental trial, we benchmark each series with respect to the greedier method's series to contrast the methods. We carry forward the terminal values of each series before averaging. In Figure <ref>, we plot the first 10 iterations of the differenced series corresponding to t-incorrect ingredient-correct. We plot the first ten only, as the greedier method typically terminates within 10 time steps. For these early time periods, it is clear that the greedier algorithm makes relatively better use of its new ingredients in both the ID and OOD setting. Noticeably, we observe the importance of diversity for generalization in the OOD setting (right), where for the first several iterations (t=2, t=3, when the impact of adding a new ingredient is greatest due to weighting), selecting for diversity induces approximately a 10% greater probability that the current-step WA's mistake is corrected by the ingredient. The decreasing trends of the non-greedier methods beyond t=5 correspond to many cases in which greedier has terminated, and other methods (which may still be running) already include more ingredients and thus it is more difficult for any newly-added individual to strongly impact the result. Figures <ref>, <ref> in the appendix demonstrate that the selection of diverse candidates makes scenarios 2) and 4), in which new errors appear or existing ones are retained, less likely.
§ VISUALIZATION OF WEIGHT-ENSEMBLING
§.§ MDS Visualization of the Greedier Algorithm
We develop a visualization method to capture all pairs of distance relationships jointly without reducing the diversity to a single number, as does average pairwise diversity. We do so with Multidimensional Scaling (MDS), a dimensionality reduction algorithm designed to preserve pairwise distances.
After running a weight-ensembling algorithm on the set of k neural networks, we calculate all pairwise distances between the models evaluated in the experiment. We store these distances in a symmetric distance matrix, then use this matrix as a plug-in to MDS. We use metric MDS for Euclidean distance, and for diversity we use non-metric MDS. At each iteration t, we reveal in the decomposed space the candidate WAs that we considered at this time using our current set of ingredients and the remaining ingredients. Due to the structure of MDS, we thus visualize the pairwise distances between the candidate WAs, the current and past WAs, and the individual candidates which shed qualitative insight on the selection mechanism of the greedier algorithm.
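A minimal sketch of the embedding step with scikit-learn, given the precomputed symmetric distance matrix; metric MDS for the Euclidean measure and non-metric MDS for the diversity measure.

from sklearn.manifold import MDS

def embed_models(dist_matrix, metric=True, seed=0):
    mds = MDS(n_components=2, dissimilarity="precomputed",
              metric=metric, random_state=seed)
    return mds.fit_transform(dist_matrix)  # one 2-D point per model / WA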
We demonstrate the progression of the greedier algorithm through Euclidean distance in one example in Figure <ref> (in the Appendix due to figure size). Consistent with the results of quantile-ranks in Euclidean space, we see for iterations t = 2, 3, 4 that we have selected points in space which were on the larger end of Euclidean distance from the current WA; at t=5 we have saturated accuracy and the algorithm has terminated. We can see convergence in decomposed weight space, reflecting the diminishing returns from adding more candidates after location in Euclidean space saturates. Turning to the diversity distance in Figure <ref>, we observe that the WA points do not converge as nicely as time passes through the algorithm, although we still manage to select candidates which tend to lie on the farther side from our current WA through 4 iterations. Finally, we observe the intuition allotted by the visualization technique: we have selected truly distinct points in weight space, as the selected candidates exhibit separation in both decomposed spaces, as opposed to the comparatively reductive average pairwise diversity or Euclidean distance in the literature.
§ DISCUSSION
We have introduced the greedier algorithm which at each iteration selects the ingredient which maximally improved the WA's ID validation performance. When treated as a “gold-standard," greedier's decision-making helps to uncover how relationships between ingredients and a WA are leveraged to best improve the WA's performance. We also propose the ranked algorithm, which at each iteration sorts ingredients by their distance from the WA and selects the first to improve ID validation accuracy. We can contrast greedier results with the ranked and greedy algorithms, using different behaviors to reason about the role that diversity plays in performance dynamics. Leveraging this structure, we identify that both high diversity distance and high Euclidean distance explain the selection method when performance improves the fastest, implying that selecting diverse or spaced-out candidates contributes rapidly towards improving the WA. Yet, that the ranked-diversity algorithm does not match the greedier method ID or OOD limits the extent to which diversity plays a role in this performance. We finally introduce a method by which we can examine how our algorithms' selection traverses a loss basin and whether our candidates are truly diverse, by not reducing our distance relationships but rather leveraging pairwise structure in a decomposition for qualitative analysis.
§ IMPACT STATEMENT
This work provides a new weight-ensembling algorithm and explanations for why a WA procedure improves accuracy when it is able to choose which model to add to the list of candidates using the greedier algorithm. By shedding light on what may cause WA to improve, we provide new support for an existing avenue by which WAs can improve inference efficiently. Such work has the potential to improve deep learning algorithms in all facets of society, for purposes that range from the clearly positive (such as medicine) to the more nuanced or negative (such as surveillance).
§ QUANTILES AND DISTANCE DISTRIBUTIONS FROM ITERATIONS OF WEIGHT-ENSEMBLE ALGORITHMS TO SELECTED INGREDIENTS
In this section, we provide the boxplots of distances and quantiles of the selected models at each iteration using our weight-space geometric distance measures. We also plot their mean trends together for a closer comparison.
§.§ Distance Distributions
In Section <ref>, the quantiles of the distances of selected ingredients from a current soup have the advantage of elucidating, within a set of ingredients, whether the more or the less diverse ingredients were useful. In contrast, we examine here the raw trends (without taking quantiles) to ensure the robustness of the analysis.
For example, we can explain the first iteration of this process through the lens of diversity distance. Following notation in Algorithm <ref>, at t=1 we know the current WA is equal to the model average(ingredients) = θ_1. Then if we selected θ_j to add to the ingredients using the greedier method, we store the result d_D(θ_1, θ_j) in our bin for t=1. We proceed with this binning from t=1 to t=T_max, where T_max is the largest number of iterations any greedier algorithm instance ran for. We then boxplot over each t. As in the quantile example, we have an analogous implementation for the greedy algorithm. These results are similarly in favor of the higher diversity and Euclidean distances being selected by the greedier algorithm in earlier iterations. In both, but especially in the Euclidean result of Figure <ref>, we see a steep drop-off in selected distances after t = 4. Such a drop-off is intuitive because as we roughly move towards the center of the points in weight space, our distance to unincorporated but likely related points will also decrease.
§.§ Diversity Distance
The distributions over selected-ingredient diversity distances are visualized in Figure <ref>. We also plot the mean trends in selected diversity distance with confidence intervals in one plot in Figure <ref>. Elaborating on Figure <ref>, we plot the mean trend in the quantiles of the distances of selected ingredients in one plot in Figure <ref>.
§.§ Euclidean Distance
The quantiles over selected-ingredient Euclidean distances are visualized in Figure <ref>. The distributions over selected-ingredient Euclidean distances are visualized in Figure <ref>. We also plot the mean trends in selected Euclidean distance with confidence intervals in one plot in Figure <ref>. Elaborating on Figure <ref>, we plot the mean trend in the quantiles of the distances of selected ingredients in one plot in Figure <ref>.
§ MDS EXTENDED
We demonstrate an example of the MDS visualization in Euclidean space (Figure <ref>) and diversity space (Figure <ref>).
Remaining experiments will be attached as supplementary material.
§ ANALYZING DIVERSITY AND ERRORS
§.§ Average Pairwise Diversity
Given a collection of models {θ_1, …, θ_k } it is unclear how to measure the total diversity of the collection because the ratio-error diversity is defined via pairwise relationships. As such, <cit.> choose to represent the diversity of the collection as the average pairwise diversity between any two distinct models in the collection. In these algorithms at any time step, we can use the ingredients selected so far to calculate the average pairwise diversity. In Figure <ref> we plot the average pairwise diversity up to time step 20.
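A minimal helper for this summary statistic might look as follows; the `diversity` callable (for instance, an estimate of the ratio-error diversity computed on held-out data) is a placeholder, not a specific implementation from the cited work.

```python
# Sketch: summarize a collection of models by its average pairwise diversity.
from itertools import combinations

def average_pairwise_diversity(models, diversity):
    """models: list of model objects; diversity: callable(m_i, m_j) -> float."""
    pairs = list(combinations(range(len(models)), 2))
    if not pairs:
        return 0.0
    return sum(diversity(models[i], models[j]) for i, j in pairs) / len(pairs)
```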
Only in the greedier algorithm do we have access, at each time step, to the result of including each remaining ingredient with the current set. As such, we may calculate the average pairwise diversity of every candidate WA from that time step and bin the quantile of the average pairwise diversity of the selected model. This allows us to see whether WAs with a higher or lower average pairwise diversity are selected. Figure <ref> visualizes the mean trend in the quantile of the new WA at each time step.
§.§ Dynamics of Errors
Similar to the benchmarking of the probability trend in Figure <ref>, we plot the results for 2) t-correct ingredient-incorrect in Figure <ref>, and 4) t-incorrect and ingredient-correct in Figure <ref>. We do not plot 3) t-correct ingredient-correct due to a lack of observable trend.
From Figure <ref>, we observe in both the ID and OOD cases that, for early iterations up to t = 4, the greedy method is significantly more likely than greedier to make a “new" error when the ingredient made the error as well. Most of the distribution of values for the diversity-ranked algorithm also lies below that of the greedy algorithm, although there is some intersection of the confidence intervals. This suggests that diversity between a WA and an error-prone ingredient may help make the next-step WA more robust to repeating the same error.
In Figure <ref>, we observe that the diversity-selecting methods and the greedier algorithm have lower probabilities of retaining a current WA's errors when the ingredient was correct. This demonstrates that when the current WA misclassifies a point that a (well-trained) ingredient classifies correctly, the WA's performance on such previously-misclassified points benefits more from the ingredient's inclusion if that ingredient is diverse from the current WA.
§ ALGORITHMS
We formalize our greedier algorithm below as Algorithm <ref>. Notation for Algorithm <ref> is drawn from <cit.>.
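For readers without access to the referenced algorithm listing, the following sketch restates the greedier loop as we describe it in the text; the `evaluate` callable (ID validation accuracy of a uniform weight average) and the exact stopping rule are our reading of the procedure, not a verbatim transcription of Algorithm <ref>.

```python
# Sketch of the greedier loop: at each iteration, add the candidate whose
# inclusion maximally improves the weight average's ID validation accuracy.
import numpy as np

def weight_average(ingredients):
    return np.mean(np.stack(ingredients, axis=0), axis=0)

def greedier(candidates, evaluate, init_index=0):
    """candidates: list of flattened weight vectors; evaluate: callable(avg_weights) -> float."""
    ingredients = [candidates[init_index]]
    remaining = [c for i, c in enumerate(candidates) if i != init_index]
    best_acc = evaluate(weight_average(ingredients))
    while remaining:
        trials = [evaluate(weight_average(ingredients + [c])) for c in remaining]
        j = int(np.argmax(trials))
        if trials[j] <= best_acc:          # no remaining candidate improves the WA: stop
            break
        best_acc = trials[j]
        ingredients.append(remaining.pop(j))
    return ingredients, best_acc
```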
|
http://arxiv.org/abs/2409.03290v1 | 20240905065509 | Gaussian Process Phase Interpolation for estimating the asymptotic phase of a limit cycle oscillator from time series data | [
"Taichi Yamamoto",
"Hiroya Nakao",
"Ryota Kobayashi"
] | nlin.AO | [
"nlin.AO",
"physics.data-an"
] |
Tokyo1]Taichi Yamamoto
[Tokyo1]organization=Graduate School of Frontier Sciences, The University of Tokyo,
city=Kashiwa, postcode=277-8561,
country=Japan
TokyoTec1,TokyoTec2]Hiroya Nakao
[TokyoTec1]organization=Department of Systems and Control Engineering, Tokyo Institute of Technology,
city=Tokyo, postcode=152-8552,
country=Japan
[TokyoTec2]organization=Research Center for Autonomous Systems Materialogy, Tokyo Institute of Technology,
city=Yokohama, postcode=226-8501,
country=Japan
Tokyo1,Tokyo2]Ryota Kobayashi
[email protected]
[Tokyo2]organization=Mathematics and Informatics Center, The University of Tokyo,
city=Tokyo, postcode=113-8656,
country=Japan
§ ABSTRACT
Rhythmic activity commonly observed in biological systems, occurring from the cellular level to the organismic level, is typically modeled as limit cycle oscillators.
The phase reduction theory serves as a useful analytical framework for elucidating the synchronization mechanism of these oscillators. Essentially, this theory describes the dynamics of a multi-dimensional nonlinear oscillator using a single variable phase model. In order to understand and control the rhythmic phenomena in the real world, it is crucial to estimate the asymptotic phase from the observed data.
In this study, we propose a new method, Gaussian Process Phase Interpolation (GPPI), for estimating the asymptotic phase from time series data. The GPPI method first evaluates the asymptotic phase on the limit cycle and subsequently estimates the asymptotic phase outside the limit cycle employing Gaussian process regression. Thanks to the high expressive power of Gaussian processes, the GPPI is capable of capturing a variety of functions. Notably, the GPPI is easily applicable even when the dimension of the system increases. The performance of the GPPI is tested by using simulation data from the Stuart-Landau oscillator and the Hodgkin-Huxley oscillator. The results demonstrate that the GPPI can accurately estimate the asymptotic phase even in the presence of high observation noise and strong nonlinearity. Additionally, the GPPI is demonstrated as an effective tool for data-driven phase control of a Hodgkin-Huxley oscillator.
Thus, the proposed GPPI will facilitate the data-driven modeling of the limit cycle oscillators.
Synchronization Limit cycle oscillators Phase reduction Machine learning Gaussian process regression
§ INTRODUCTION
Rhythmic activity is a ubiquitous phenomenon observed in a wide range of biological systems.
Examples include cortical networks in the brain <cit.>, circadian rhythms in mammals <cit.>, the human heart and respiratory system <cit.>, and animal gait <cit.>. Most mathematical models of rhythmic activity are based on limit cycle oscillators <cit.>.
Phase reduction theory <cit.> is a valuable tool for analyzing the synchronization of limit cycle oscillators. This theory represents the state of a multi-dimensional limit cycle oscillator using a single variable, called the phase, and describes their dynamics in a reduced phase model. Theoretical studies based on the phase model have elucidated the key factors underlying synchronization phenomena, including the periodic external force, the coupling between oscillators, and the common inputs <cit.>.
A fundamental challenge in the field of complex systems is to identify the mechanisms by which a real-world system achieves synchronization <cit.>.
While these theoretical studies offer explanations for synchronization phenomena, they require the precise knowledge of the mathematical models.
Consequently, previous studies have developed methods for identifying the phase model from real data (for a review, see <cit.>).
The most common approach for identifying the phase model from data is to estimate the phase sensitivity function (also known as the infinitesimal phase response curve), which characterizes the linear response property of the oscillator phase <cit.>.
However, when a limit cycle oscillator is subjected to strong perturbations, such as a strong impulse input, the approximation using the phase sensitivity function may be inaccurate.
Therefore, it is essential to obtain the asymptotic phase distant from the limit cycle in order to understand the synchronization property and to control the phase of the limit cycle oscillator subjected to strong perturbations. Recently, <cit.> proposed a method to obtain the asymptotic phase in a data-driven manner. However, it is still challenging to estimate the asymptotic phase of high-dimensional systems.
In this study, we propose the Gaussian Process Phase Interpolation (GPPI) method, which estimates the asymptotic phase from time series obtained from a limit cycle oscillator without assuming the mathematical model of the system. The GPPI method can be readily applied to systems with more than three dimensions, as the same algorithm is used regardless of the dimensionality of the dynamical system.
To validate the GPPI, we apply it to two limit cycle oscillators: the Stuart-Landau oscillator and the Hodgkin-Huxley oscillator.
Furthermore, we apply the GPPI to control the phase of the Hodgkin-Huxley oscillator by impulse inputs, which demonstrates the effectiveness of the GPPI for data-driven control.
This paper is organized as follows: Section 2 outlines the phase reduction theory and the asymptotic phase. Section 3 describes the proposed method, the GPPI, for estimating the asymptotic phase. In Section 4, the GPPI is validated using synthetic data obtained from the two limit cycle oscillators. In Section 5, we demonstrate data-driven control of a Hodgkin-Huxley oscillator by impulse input using the estimated asymptotic phase. Finally, Section 6 summarizes the results and discusses the conclusions.
§ PHASE REDUCTION OF LIMIT-CYCLE OSCILLATORS
We consider a limit-cycle oscillator described by
d/dt𝐗(t) = 𝐅(𝐗(t)),
where 𝐗(t)∈ℝ^d is the state vector of the system at time t and 𝐅:ℝ^d→ℝ^d is a smooth vector field representing the dynamics.
We assume that the system has a stable limit cycle orbit 𝒳_0(t) with natural period T and natural frequency ω= 2π/T, which satisfies the condition 𝒳_0(t+T) = 𝒳_0(t).
For any point in the basin of the limit cycle, an asymptotic phase can be determined <cit.>. The basin ℬ is defined as the set of initial conditions that converge to the limit cycle.
In the following, the asymptotic phase is simply written as the `phase'.
First, we assign the phase θ∈ [0,2π) on the limit cycle, where the phase θ= 0 and 2 π are considered to be identical.
We choose a point on the limit cycle, which is set to be the phase origin, θ=0, and we define the phase to increase at a constant rate ω, i.e., θ(t) = ω t (mod 2π).
As a result, the phase of the state 𝒳_0(t) on the limit cycle follows
d/dtΘ(𝒳_0(t)) = ω,
where the phase function Θ(𝐗) gives the phase θ of a state 𝐗 on the limit cycle.
Moreover, the phase function can be extended so that Eq. (<ref>) holds for any orbit 𝐗(t) in the basin of the limit cycle ℬ:
d/dtΘ(𝐗(t)) = ω.
Here, we extended the domain of the phase function as Θ: ℬ→ [0,2π).
The above procedure reduces the d-dimensional differential equation (<ref>) to a simple one-dimensional equation (<ref>) for any 𝐗(t) ∈ℬ. By integrating Eq. (<ref>), we obtain
Θ(𝐗(t+τ)) - Θ(𝐗(t)) = ωτ
( mod 2π).
We introduce the phase response function (PRF, also known as the phase response or resetting curve, PRC) <cit.>, which is essential for the analysis and control of synchronization phenomena.
The PRF is a function that describes the effect of an impulse perturbation applied to an oscillator at phase θ and is defined as
g(θ, ς ) = Θ(𝒳_0(θ)+ ς ) - Θ(𝒳_0(θ)) = Θ(𝒳_0(θ)+ ς ) - θ,
where ς∈ℝ^d denotes the intensity and direction of the impulse applied to the system (<ref>) <cit.>, and
𝒳_0(θ) denotes the point with phase θ on the limit cycle.
If the impulse intensity ς is sufficiently small, the PRF can be approximated as
g(θ, ς ) = 𝐙(θ)·ς +O(ς^2)
where
𝐙(θ) = ∇Θ(𝒳_0(θ))
is called the phase sensitivity function (PSF, also known as the infinitesimal phase resetting curve, iPRC). The PSF characterizes the linear response to a given weak perturbation for the state of phase θ.
§ PROPOSED METHOD
We propose a method, Gaussian Process Phase Interpolation (GPPI), for estimating the phase function Θ(𝐗) from time series data of multiple orbits converging to the limit cycle.
The main idea is to obtain the phase values on the data points using the phase equation (<ref>) and to interpolate them by Gaussian process regression.
The procedure can be divided into the following three steps (Fig. <ref>):
1. Determine the phase function on the limit cycle.
2. Obtain the phase function outside the limit cycle.
3. Interpolate the phase function using Gaussian process regression.
§.§ Step 1: Determine the phase function on the limit cycle
In the first step, we obtain the value of the phase function on the limit cycle.
Let us assume that a sufficiently long time series, 𝐗_0(t), is obtained from the limit cycle oscillator (<ref>) and that at time t=0, the orbit has converged sufficiently to the limit cycle.
We first estimate the natural frequency ω.
Let us consider a Poincaré section that intersects the limit cycle once.
The period and frequency of the limit cycle can be estimated using the formulae T̂=(s_n+1-s_1)/n and ω̂ = 2π / T̂, where s_1 and s_n+1 are the times of the first and (n+1)-th passage through the Poincaré section, respectively.
In order to determine the phase, a point 𝐗_0(t_0) (t_0 ≥ 0) on the time series is taken as the origin of the phase, i.e., Θ(𝐗_0(t_0))=0. Subsequently, the phase on the limit cycle can be obtained using the phase equation (<ref>): Θ(𝐗_0(t)) = ω̂(t-t_0) for any t ≥ 0.
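A minimal sketch of Step 1 is given below, assuming a uniformly sampled, already-converged trajectory; the choice of Poincaré section (a rising zero-crossing of one coordinate) and the linear interpolation of crossing times are illustrative assumptions, not prescriptions.

```python
# Sketch of Step 1: estimate the natural frequency from Poincare-section crossings
# and assign phases on the converged limit-cycle time series.
import numpy as np

def estimate_frequency(X0, dt, coord=0, level=0.0):
    """X0: (T, d) converged trajectory; rising crossings of X0[:, coord] through
    `level` define the Poincare section; returns (omega_hat, crossing_times)."""
    x = X0[:, coord] - level
    idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    # linearly interpolate the crossing time inside each step
    s = (idx - x[idx] / (x[idx + 1] - x[idx])) * dt
    T_hat = (s[-1] - s[0]) / (len(s) - 1)
    return 2.0 * np.pi / T_hat, s

def phases_on_cycle(n_samples, dt, omega_hat, t0=0.0):
    """Phase of X0(t) for t = 0, dt, 2*dt, ..., with Theta(X0(t0)) = 0."""
    t = np.arange(n_samples) * dt
    return np.mod(omega_hat * (t - t0), 2.0 * np.pi)
```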
§.§ Step 2: Obtain the phase function outside the limit cycle
In the second step, we obtain the phase function on each data point outside the limit cycle.
Let us assume that we have collected n time series, represented by {𝐗_1(t), ⋯, 𝐗_n(t) }, which converge to the limit cycle from its basin.
It should be noted that such time series can be collected, for example, by applying perturbations to a limit cycle oscillator and observing the convergence to the limit cycle.
We describe the procedure for obtaining the phase function on a data point that is outside the limit cycle. Let us consider a time series 𝐗_i(t) whose last point 𝐗_i(t_end) is close enough to the limit cycle.
We can calculate the phase of the last point, Θ(𝐗_i(t_end)), by applying
linear interpolation to the data points on the limit cycle, Θ(𝐗_0(t)), obtained in Step 1.
Then, the phase at any time t on the time series can be calculated from the last point using the phase equation (<ref>) as
Θ(𝐗_i(t)) = Θ(𝐗_i(t_end)) + ω̂( t- t_end ).
In this way, the phases of all points on the time series 𝐗_i(t) are obtained. Similarly, the phases of the data points on the other time series are obtained by repeating this procedure for all i= 1, 2, ⋯, n.
In this study, the entire time series is discarded if the last point is not sufficiently close to the limit cycle.
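The following sketch illustrates Step 2 under the simplifying assumption that the phase of the last point is taken from the nearest point on the limit cycle rather than by full linear interpolation; the convergence tolerance `tol` is an arbitrary placeholder.

```python
# Sketch of Step 2: back-propagate the phase along each transient trajectory from
# its last point using Theta(X_i(t)) = Theta(X_i(t_end)) + omega_hat * (t - t_end).
import numpy as np

def phase_of_trajectory(Xi, dt, X0, theta0, omega_hat, tol=1e-2):
    """Xi: (T, d) trajectory converging to the cycle; X0: (M, d) points on the cycle;
    theta0: (M,) their phases from Step 1. Returns (T,) phases, or None if not converged."""
    d_end = np.linalg.norm(X0 - Xi[-1], axis=1)
    if d_end.min() > tol:                  # last point not close enough: discard the series
        return None
    theta_end = theta0[np.argmin(d_end)]   # nearest-neighbour stand-in for interpolation
    t = np.arange(len(Xi)) * dt
    return np.mod(theta_end + omega_hat * (t - t[-1]), 2.0 * np.pi)
```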
§.§ Step 3: Interpolate the phase function using Gaussian process regression
In the third step, we estimate the global phase function Θ(𝐗) by employing Gaussian process regression.
The proposed method does not directly estimate the phase function Θ(𝐗), but rather estimates two real-valued functions
s(𝐗) := sinΘ(𝐗),
c(𝐗) := cosΘ(𝐗),
and obtains the phase function from these functions.
First, we generate two datasets {(𝐗_i,s(𝐗_i))} and {(𝐗_i,c(𝐗_i))} (i=1, 2, ⋯, N) from the data {(𝐗_i,Θ(𝐗_i))} obtained in Step 2 (Sec. <ref>).
Next, we estimate the two functions (s(𝐗), c(𝐗)) from the datasets using Gaussian process regression.
Finally, the phase function is estimated as
Θ̂(𝐗)=arctan(ŝ(𝐗)/ĉ(𝐗)),
where ŝ(𝐗) and ĉ(𝐗) are the estimates obtained by Gaussian process regression.
Gaussian process regression is a non-parametric Bayesian approach for estimating a real-valued function f:ℝ^d→ℝ <cit.>.
Gaussian process regression has two main advantages:
1) it captures a wide variety of functions by leveraging a theoretically infinite number of parameters, and
2) it evaluates the uncertainty of the function by predicting the distribution of the function from the data.
From the data { (𝐗_1,y_1), ⋯, (𝐗_N,y_N) }, Gaussian process regression predicts the mean m(𝐗_*) and variance v(𝐗_*) of the function f(𝐗_*) for an unknown input 𝐗_* as follows:
m(𝐗_*) = 𝐤_*^⊤(K+σ^2I)^-1𝐲,
v(𝐗_*) = k(𝐗_*,𝐗_*) - 𝐤_*^⊤(K+σ^2I)^-1𝐤_*,
where
K ∈ℝ^N× N,
K_ij = k(𝐗_i,𝐗_j),
𝐤_* ∈ℝ^N, 𝐤_* = ( k(𝐗_1,𝐗_*), k(𝐗_2,𝐗_*), ⋯ , k(𝐗_N,𝐗_*) )^⊤,
𝐲∈ℝ^N, 𝐲 = (y_1, y_2, ⋯ , y_N)^⊤,
I ∈ℝ^N× N is the identity matrix,
and σ^2 is the variance of the observation noise, respectively.
The kernel function k(𝐗_i,𝐗_j) represents the correlation between the outputs f(𝐗_i) and f(𝐗_j).
Here, we adopt the Matern-5/2 kernel for the kernel function:
k(𝐗_i,𝐗_j) = σ_f^2
(1+√(3)r/σ_l)
exp(-√(3)r/σ_l),
where r= √((𝐗_i-𝐗_j)^⊤(𝐗_i-𝐗_j)) is the distance between the data points, and σ_f and σ_l are hyperparameters that determine the height and width of the kernel function, respectively.
The hyperparameters are determined by maximizing the log-likelihood function given by
log p(𝐲|𝐗_1,...,𝐗_N)
= -1/2𝐲^⊤(K+σ^2I)^-1𝐲
-1/2log|K+σ^2I| - N/2log 2π.
It should be noted that the noise variance is set to σ^2=0.01.
In this study, we used the Gaussian process regression algorithm implemented in the MATLAB Statistics and Machine Learning Toolbox <cit.>.
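As a stand-in for the MATLAB implementation, the sketch below fits ŝ and ĉ with scikit-learn's GaussianProcessRegressor using a Matérn kernel and a fixed noise variance of 0.01, and recovers the phase with atan2 (equivalent to the arctan formula above up to the branch choice); the kernel settings are our reading of the text and need not match the toolbox defaults.

```python
# Sketch of Step 3 with scikit-learn: fit sin(Theta) and cos(Theta) by GP regression
# and recover the phase estimate via atan2.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel

def fit_phase_function(X, theta, noise_var=0.01):
    """X: (N, d) training states; theta: (N,) phases obtained in Steps 1-2."""
    kernel = ConstantKernel(1.0) * Matern(length_scale=1.0, nu=2.5)
    gp_s = GaussianProcessRegressor(kernel=kernel, alpha=noise_var)
    gp_c = GaussianProcessRegressor(kernel=kernel, alpha=noise_var)
    gp_s.fit(X, np.sin(theta))   # hyperparameters optimized by the marginal likelihood
    gp_c.fit(X, np.cos(theta))
    def theta_hat(X_star):
        s, c = gp_s.predict(X_star), gp_c.predict(X_star)
        return np.mod(np.arctan2(s, c), 2.0 * np.pi)
    return theta_hat
```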
§ TEST OF THE PROPOSED METHOD
In this section, we test the validity of the proposed method, GPPI, by applying it to two limit cycle oscillators, the Stuart-Landau oscillator and the Hodgkin-Huxley oscillator.
§.§ Evaluation methods
We test the proposed method by calculating the true phase function through exhaustive numerical simulation based on the mathematical model (1) and comparing it with the estimation results.
The true phase function is calculated as follows: First, we obtain the phase function on the limit cycle in the same way as in the proposed method (Step 1, Sec. <ref>). Then, for each state 𝐗 in a region of the basin, we simulate the dynamical system (<ref>) from 𝐗 until the orbit converges to the limit cycle. Finally, the phase function Θ(𝐗) is obtained using the phase equation (<ref>) in the same way as in the proposed method (Step 2, Sec. <ref>).
In this study, we use a two-dimensional grid in the state space, and calculate the phase function at all grid points. It is noteworthy that we use noiseless simulation to calculate the true phase function.
Moreover, we test the proposed method by comparing the normalized phase response function (nPRF) with its estimate.
The nPRF for the x direction is defined as G_x, κ(θ) := g(θ, κ𝐞_x) / κ, where 𝐞_x is the unit vector in the x direction and κ is the intensity of the impulse.
The nPRF can be calculated from the estimate of the phase function Θ̂ as follows:
Ĝ_x,κ(θ) :=
( Θ̂(𝒳_0(θ)+ κ𝐞_x) - Θ(𝒳_0(θ)) )/κ,
where 𝒳_0(θ) denotes the state with phase θ on the limit cycle.
The estimation accuracy of the nPRFs is evaluated using the coefficient of determination defined as follows:
R_Y^2 = 1 - ∑_i(Y(θ_i)-Ŷ(θ_i))^2/∑_i(Y(θ_i)-Y̅)^2,
where Y and Ŷ are the true and estimated values of the nPRF, respectively, and Y̅ is the average of Y over all phases θ_i. The closer the coefficient of determination is to 1, the higher the estimation accuracy.
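The two evaluation quantities can be computed along the following lines; wrapping the phase difference to (-π, π] is an implementation choice rather than something specified in the text.

```python
# Sketch: normalized PRF from an estimated phase function, and the R^2 score used
# to compare it against the ground truth.
import numpy as np

def normalized_prf(theta_hat, cycle_points, phases, kappa, direction):
    """cycle_points: (M, d) states on the cycle; phases: (M,) their phases;
    direction: unit vector e_x; kappa: impulse intensity."""
    perturbed = cycle_points + kappa * direction
    dtheta = np.angle(np.exp(1j * (theta_hat(perturbed) - phases)))  # wrap to (-pi, pi]
    return dtheta / kappa

def r_squared(Y_true, Y_hat):
    Y_true, Y_hat = np.asarray(Y_true), np.asarray(Y_hat)
    ss_res = np.sum((Y_true - Y_hat) ** 2)
    ss_tot = np.sum((Y_true - Y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```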
§.§ Stuart-Landau oscillator
We evaluate the proposed method using the Stuart-Landau (SL) oscillator and compare it with an existing method, which we refer to as the Derivative Phase Regression method (DPR method) <cit.>. We also examine the robustness of these methods against observation noise.
The SL oscillator is described as follows:
ẋ = x - α y - (x - β y)(x^2+y^2),
ẏ = α x + y - (β x + y)(x^2+y^2),
where x, y are the state variables and α=2, β=1 are parameters. The numerical simulation was conducted using the fourth-order Runge-Kutta method with a time step of Δ t=0.005.
We generate synthetic data for validation as follows: an orbit is simulated based on Eqs. (<ref>) and (<ref>) for the time length of 2.5 from an initial state randomly sampled from the uniform distribution of [-1.6, 1.6]^2. It should be noted that the period of the limit cycle is 2 π. The time series data { x(t), y(t) } is collected if the final state is sufficiently close to the limit cycle. This procedure is repeated until we obtain n= 100 time series.
Finally, observation noise is added to the time series for testing the robustness against noise.
The observation noise is given by independent Gaussian random variables of mean 0 and standard deviation of η=0.005.
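A sketch of this data-generation procedure is given below; it uses the analytically known unit radius of the SL limit cycle as the convergence test, and the tolerance 1e-2 is an illustrative choice rather than the one used in our experiments.

```python
# Sketch: generate noisy Stuart-Landau trajectories with RK4 (dt = 0.005).
import numpy as np

ALPHA, BETA = 2.0, 1.0

def sl_field(z):
    x, y = z
    r2 = x * x + y * y
    return np.array([x - ALPHA * y - (x - BETA * y) * r2,
                     ALPHA * x + y - (BETA * x + y) * r2])

def rk4_traj(z0, dt, n_steps, field):
    traj = np.empty((n_steps + 1, len(z0)))
    traj[0] = z0
    for k in range(n_steps):
        z = traj[k]
        k1 = field(z); k2 = field(z + 0.5 * dt * k1)
        k3 = field(z + 0.5 * dt * k2); k4 = field(z + dt * k3)
        traj[k + 1] = z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

def make_dataset(n_series=100, t_len=2.5, dt=0.005, eta=0.005, seed=0):
    rng = np.random.default_rng(seed)
    data = []
    while len(data) < n_series:
        z0 = rng.uniform(-1.6, 1.6, size=2)
        traj = rk4_traj(z0, dt, int(t_len / dt), sl_field)
        # keep only trajectories whose final state is close to the unit-radius limit cycle
        if abs(np.linalg.norm(traj[-1]) - 1.0) < 1e-2:
            data.append(traj + rng.normal(0.0, eta, size=traj.shape))
    return data
```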
We first estimate the natural frequency (Step1, Sec. <ref>). To reduce the impact of observation noise, a moving average with a window width of 0.07 was calculated from the time series data.
The natural frequency is estimated as ω̂ = 1.000, which is in agreement with the theoretical value of ω=1.0 derived from the model. The estimated frequency ω̂ = 1.000 is also used in the DPR method.
Next, we estimate the phase function Θ from the time series using the proposed method, the GPPI, as well as the DPR.
To evaluate the performance of the proposed method for a small dataset, 1,000 data points are sampled with a coarse interval Δ t = 0.25, and are used as a training data set for Gaussian process regression.
In contrast, the DPR uses all simulation data, 50,000 points with sampling interval of Δ t = 0.005. The other parameters of the DPR were the same as those used in <cit.> (see Appendix A for details).
Figure <ref> compares the true phase function with its estimate by the proposed GPPI method and the DPR method from the time series.
While the both methods accurately estimate the phase function near the limit cycle, they cannot estimate the phase function around the origin (x,y)=(0,0) due to the lack of data.
In particular, in the boundary region, the GPPI provides a more accurate estimate of the phase function than the DPR (see Fig. <ref>a in Appendix B).
One potential explanation for the decline in the estimation performance of the DPR is its use of a polynomial function, whose absolute value tends to diverge in the boundary region. Instead of a polynomial function, the GPPI employs Gaussian process regression for estimating the phase function.
Overall, the proposed GPPI method provides an accurate estimation of the phase function in the global region with smaller data compared to the DPR method (1,000 vs 50,000 data points).
We also compared the normalized phase response function (nPRF) calculated from the estimated phase functions with the true values. Figure <ref>a shows the true and estimated nPRFs obtained by applying impulses in four directions (+x,-x,+y,-y) at an intensity of 0.2.
Both methods yield accurate estimates of the phase response. However, the GPPI method outperforms the DPR method, with coefficients of determination averaged over the four directions of 0.998 and 0.952, respectively.
It should be noted that the accuracy of the DPR is worse than the results in <cit.> (Figure 3 and Table 1). This discrepancy may be attributed to the small number of time series used for the estimation.
Furthermore, we examine the robustness of the methods against observation noise by generating new data sets with increasing noise levels, specifically, η=0.01, 0.05. The estimation procedure was repeated for both methods using the new data, including the estimation of the natural frequency.
Figures <ref>b and c show the estimation results for nPRFs. As the noise strength increases, the estimation error of the DPR method also increases. The DPR is unable to estimate the phase response from noisy time series (η=0.05).
In contrast, the GPPI is capable of estimating the phase response even in the presence of the substantial observation noise (η=0.05).
Table <ref> shows a comparison of the performance of the GPPI with that of the DPR based on the average of the coefficient of determination R_G^2.
These results demonstrate that the proposed GPPI method can estimate the phase function with less data than the DPR method and it is also robust against noise.
§.§ Hodgkin-Huxley oscillator
As a second example, we test the proposed method using the Hodgkin-Huxley model (HH model), which is a four-dimensional dynamical system. The HH model has a higher dimension and exhibits stronger nonlinearity than the SL oscillator.
The HH model is one of the most important neuron models in neuroscience, which has been used to describe neural systems mathematically <cit.>. This model is written as a differential equation of four variables, namely, V, h, m, and n, as follows <cit.>:
V̇ = 1/C( -ḡ_Na(V-E_Na)m^3h-ḡ_K(V-E_K)n^4
-ḡ_L(V-E_L) + I_b ),
ṁ = α_m(V)(1-m) - β_m(V)m,
ḣ = α_h(V)(1-h) - β_h(V)h,
ṅ = α_n(V)(1-n) - β_n(V)n,
where the functions α_m,h,n, β_m,h,n are written as
α_m(V) = 0.1(V+40)/1-e^-(V+40)/10,
β_m(V) = 4e^-(V+65)/18,
α_h(V) = 0.07e^-(V+65)/20,
β_h(V) = 1/1+e^-(V+35)/10,
α_n(V) = 0.01(V+55)/1-e^-(V+55)/10,
β_n(V) = 0.125e^-(V+65)/80,
and the parameters are set as follows: C= 1 [μ F/cm^2], ḡ_Na = 120 [mS/cm^2], ḡ_K= 36 [mS/cm^2], ḡ_L= 0.3 [mS/cm^2], E_Na = 50 [mV], E_K = -77 [mV], E_L = -54.4 [mV], and I_b= 10 [μ A/cm^2].
The numerical simulation was conducted using the fourth-order Runge-Kutta (RK4) method with a time step Δ t=0.01 [ms].
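For reference, a direct transcription of the HH equations above with the stated parameter values and RK4 integration might look as follows; variable and function names are ours, and the rate expressions have removable singularities at V = -40 and V = -55 mV that generic trajectories do not hit.

```python
# Sketch: RK4 simulation of the Hodgkin-Huxley oscillator (dt = 0.01 ms, I_b = 10).
import numpy as np

C, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3
E_NA, E_K, E_L, I_B = 50.0, -77.0, -54.4, 10.0

def rates(V):
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def hh_field(state):
    V, m, h, n = state
    a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
    I_ion = G_NA * (V - E_NA) * m ** 3 * h + G_K * (V - E_K) * n ** 4 + G_L * (V - E_L)
    return np.array([(-I_ion + I_B) / C,
                     a_m * (1.0 - m) - b_m * m,
                     a_h * (1.0 - h) - b_h * h,
                     a_n * (1.0 - n) - b_n * n])

def simulate(state0, t_len, dt=0.01):
    n_steps = int(t_len / dt)
    traj = np.empty((n_steps + 1, 4))
    traj[0] = state0
    for k in range(n_steps):
        s = traj[k]
        k1 = hh_field(s); k2 = hh_field(s + 0.5 * dt * k1)
        k3 = hh_field(s + 0.5 * dt * k2); k4 = hh_field(s + dt * k3)
        traj[k + 1] = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj
```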
A limit cycle that corresponds to periodic firing can be obtained from the HH model by setting the input current I_b above a certain threshold <cit.>. In this study, we focus on the oscillatory regime, where the HH model has a limit cycle.
Figure <ref> illustrates a single period of the limit cycle obtained from the HH model.
Following the literature <cit.>, the origin of the phase is defined as the peak of the voltage V.
The period and the natural frequency are estimated from a time series of 1,000 periods, which results in T̂=14.6 [ms] and ω̂=0.429 [rad/ms]. It should be noted that these values are also used in the calculation of the true phase function, as we do not consider observation noise in this example.
Here, we consider two strategies for collecting the time series data used to estimate the phase function.
The first strategy is to simulate the system from the initial state, which is randomly sampled from a uniform distribution of a hyperrectangle, as in the case of the SL oscillator. For the HH oscillator, we employ a hyperrectangle formed by V ∈ [-100, 50], m ∈ [0,1], h ∈ [0, 0.6], n ∈ [0.3, 0.8].
We refer to this strategy as uniform sampling. The uniform sampling strategy is able to collect data over a global region of the state space. However, as the dimension of the state space increases, the data points become increasingly sparse, thereby rendering the estimation of the phase function more difficult.
The second strategy is to simulate the system from the initial state generated by adding a small perturbation to the state on the limit cycle. For the perturbation of the HH oscillator, we employ a uniform random number in the range [-σ_x/10, σ_x/10], where σ_x is the standard deviation of the variable x ∈{V, m, h, n } along the limit cycle.
We refer to this strategy as vicinity sampling. While the vicinity sampling is an effective strategy for accurately estimating the phase function of regions in proximity to the limit cycle, it is difficult to apply this strategy for estimating the phase function of regions distant from the limit cycle.
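The two initial-condition strategies can be sketched as below; the bounds and the perturbation rule follow the text, while the random-number handling is an implementation detail of ours.

```python
# Sketch of the two initial-condition strategies for the HH oscillator.
import numpy as np

def uniform_sampling(n, rng):
    lo = np.array([-100.0, 0.0, 0.0, 0.3])   # lower bounds for (V, m, h, n)
    hi = np.array([50.0, 1.0, 0.6, 0.8])     # upper bounds for (V, m, h, n)
    return rng.uniform(lo, hi, size=(n, 4))

def vicinity_sampling(n, cycle_points, rng):
    """cycle_points: (M, 4) states on the limit cycle; the perturbation is uniform in
    [-sigma_x/10, sigma_x/10] per variable, with sigma_x the std along the cycle."""
    sigma = cycle_points.std(axis=0)
    base = cycle_points[rng.integers(0, len(cycle_points), size=n)]
    return base + rng.uniform(-sigma / 10.0, sigma / 10.0, size=(n, 4))
```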
We generate the orbits of time length 100 [ms] from 100 initial states obtained for each sampling strategy by simulating the HH model. Subsequently, the time series of each state variable is collected. It should be noted that, in uniform sampling, several time series that do not converge to the limit cycle are excluded from the analysis.
The number of data points thus prepared is approximately 10^6.
Given the high computational cost associated with Gaussian process regression, it is necessary to reduce the number of data points to a range of 10^3 to 10^4. Accordingly, the training data used for Gaussian process regression is extracted from the data set in the following manner.
First, we extract the initial part of the time series, which is relatively distant from the limit cycle.
Specifically, the initial 20 [ms] of the time series with a sampling interval of 1 [ms] are extracted in the uniform sampling, and the initial 6 [ms] of the time series with a sampling interval of 0.3 [ms] are extracted in the vicinity sampling.
Furthermore, due to the rapid change in the state variables before and after a spike, the data density in the region corresponding to the spike tends to be relatively low. To ensure sufficient data collection in the spike region, we employ a smaller sampling interval. Specifically, the sampling interval is decreased to 1/10 during the spike period, t_f-4 ≤ t ≤ t_f+3 [ms], where t_f is the spike time defined as the time when the voltage V exceeds 0 [mV] <cit.>.
Finally, 100 points from the limit cycle are added to the training data to supplement the phase information.
Figure <ref> compares the true phase function obtained from the HH model with its estimate by the proposed GPPI method using the two sampling strategies: uniform and vicinity sampling. The limit cycle of the HH model approximately lies on a surface that can be written as follows <cit.>:
m = m_∞(V) := α_m(V) / {α_m(V) + β_m(V)},
h = 0.8 - n.
We therefore compare the phase function in the region V∈[-100,50], n∈[0.3, 0.8] on this surface.
Although the data obtained from the vicinity sampling is distributed over a small region in proximity to the limit cycle, this strategy still worked well for estimating the phase functions of a more global region (see Fig. <ref>b in Appendix B).
It should be noted that neither method is able to estimate the phase function in the vicinity of the equilibrium (V∼-60 [mV], n∼0.5), where the phase varies in a complex way <cit.>.
Next, we examine the estimation performance of normalized phase response functions (nPRFs). Figure <ref> compares the estimate of the nPRF from the HH oscillators, which shows that the vicinity sampling provides better results than the uniform sampling.
Additionally, the difference between the nPRF for the positive and negative impulses reflects the nonlinearity of the HH oscillator. While the HH oscillator has a strong nonlinearity in the n direction, this nonlinearity can be captured by the vicinity sampling strategy. It should be noted that the intensity of the impulse used to calculate the nPRF is sufficiently low to fall within the range of the neighborhood sampling.
Furthermore, we evaluate the estimation accuracy of the nPRF using the average of the coefficients of determination (Table <ref>).
As can be seen in Fig. <ref>, the coefficients of determination of the vicinity sampling are much higher than those of the uniform sampling, except for the direction in h; both sampling methods provide accurate estimates of the nPRF in the h direction.
§ DATA-DRIVEN PHASE CONTROL USING IMPULSE INPUT
As an application of the estimated phase function, we examine the control of the phase of a limit cycle oscillator using an impulse input.
As in the previous section, we consider the Hodgkin-Huxley (HH) oscillator, but we consider a scenario that is more feasible in the experiments.
Let us suppose that a time series (V(t),m(t),h(t),n(t)) can be measured from the HH oscillator that is stimulated by an impulse input with the direction of +V.
Our objective is to control the phase of the oscillator from the time series without assuming any knowledge of the mathematical model (Eqs. <ref>, <ref>, <ref>, and <ref>).
While the kinetic variables (m(t),h(t),n(t)) cannot be recorded in experiments, we can estimate them from the experimental data V(t) <cit.>.
A total of 90 time series are generated by applying impulses of intensity Δ V =1, 2, and 3 [mV], respectively, to 30 points on the limit cycle of the HH oscillator. It should be noted that comparable time series data can be obtained in experiments where neurons are stimulated with repeated impulses.
We first estimate the phase function Θ() using the proposed method (GPPI) from the time series data. The training data for the Gaussian process regression was prepared by sampling at 0.1 [ms] intervals from the initial 3 [ms] of the time series. We then calculate the normalized phase response functions (nPRFs) in the +V direction from the estimated phase function.
Figure <ref>a compares the true and estimated nPRFs for impulse intensities of Δ V = 1.5 and 2.5 [mV], as well as the phase sensitivity function Z_V:= ∂Θ/∂ V((θ)).
It should be noted that Figure <ref>a shows the prediction performance of the proposed method for the impulses that were not included in the training data set.
In addition, the phase sensitivity function Z_V was not estimated from the data, and it was obtained from the numerical simulation of the HH model with an impulse intensity of Δ V=0.1 [mV].
For a strong impulse (Δ V =2.5 [mV]), the true nPRF deviates from Z_V due to nonlinearity of the oscillator, whereas the GPPI is able to estimate the phase response accurately from data.
Next, we synchronize the phases of two uncoupled HH oscillators (oscillator A and B) by using impulse inputs in the +V direction to the oscillator A.
The phases of the two oscillators (θ_A(t) and θ_B(t), respectively) are obtained from the observed data. The phase information is used to determine the intensity and timing of the impulse in order to align the phases of the two oscillators.
Due to the limitation of the impulse intensity (0 ≤Δ V ≤ 3 [mV]), multiple impulses are required if the phase difference between the oscillators is too large. In such cases, the next impulse was added to the oscillator after sufficient time has elapsed to converge to the limit cycle.
We examine a strategy for controlling the phase of the oscillator based on the phase response function (PRF): g(θ, Δ V 𝐞_V). The strategy is as follows:
First, if the PRF indicates that it is possible to align the phases between the oscillators with a single impulse, we adopt the minimum strength of the impulse that achieves the complete synchronization.
Second, if the PRF indicates that the phases cannot be aligned with a single impulse, we adopt the impulse that brings the phase difference closest to zero. In other words, if the phase of oscillator A is too far ahead (behind), the impulse that delays (advances) the phase the most is selected.
This control strategy, based on the PRF obtained from the estimated phase function, ĝ(θ,𝐞_VΔ V):= Θ̂(𝒳_0(θ)+𝐞_VΔ V) - θ, was applied to the HH oscillator.
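A simplified sketch of this selection rule is given below; it chooses only the impulse intensity at a given phase of oscillator A (the search over impulse timing described above is omitted), and the intensity grid and tolerance are arbitrary illustrative choices.

```python
# Sketch of the impulse-selection rule based on an estimated PRF.
import numpy as np

def wrap(x):
    """Wrap a phase difference to (-pi, pi]."""
    return np.angle(np.exp(1j * np.asarray(x)))

def choose_impulse(prf, theta_A, theta_B, dv_grid=np.linspace(0.0, 3.0, 301)):
    """prf: callable(theta, dv) -> phase shift g(theta, dv*e_V) of oscillator A.
    Returns the impulse intensity to apply to oscillator A at its current phase theta_A."""
    target = wrap(theta_B - theta_A)                 # shift needed to align A with B
    shifts = np.array([prf(theta_A, dv) for dv in dv_grid])
    residual = np.abs(wrap(target - shifts))
    feasible = np.where(residual < 1e-3)[0]
    if feasible.size:                                # a single impulse suffices:
        return dv_grid[feasible[0]]                  # take the smallest such intensity
    return dv_grid[int(np.argmin(residual))]         # otherwise get as close as possible
```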
Figures <ref>b and c compare the performance of the phase control based on the estimated phase function with that based on the phase sensitivity function, where the PRF is approximated with the phase sensitivity function: ĝ(θ,𝐞_VΔ V) ≈ Z_V(θ)Δ V. Again, the phase sensitivity function was not estimated from the data, and it was obtained from the HH model.
Given that the maximum phase change due to an impulse is approximately 0.5, it is expected that a single impulse is sufficient when the initial phase difference is 0.3 (Fig. <ref>b and c: left). It can also be expected that two impulse inputs are required when the initial phase difference is 0.8 (Fig. <ref>b and c: right).
Figures <ref>b and c confirm that the phase of the HH oscillator can be controlled as expected from observation data alone, without the knowledge of the HH model.
While the phase can also be controlled using the phase sensitivity function Z_V, an additional impulse is required to achieve complete synchronization.
In particular, the method based on the phase sensitivity function becomes inefficient for delaying the phase (Fig. <ref>b).
This is because the phase sensitivity function Z_V overestimates the PRF when it takes negative values (Fig. <ref>a), yielding too strong input for the control.
This result implies that the data-driven control can be achieved by using the proposed method (GPPI), and
the GPPI method achieves superior performance to the method based on the phase sensitivity function obtained from the HH model.
§ DISCUSSION
In this study, we have proposed a method, Gaussian Process Phase Interpolation (GPPI), for estimating the asymptotic phase (i.e., the phase function) using time series data obtained from the limit cycle oscillator. This method is based on Gaussian process regression and is applicable even when the dimension of the dynamical system is increased.
We have applied the method to the Stuart-Landau (SL) oscillator and the Hodgkin-Huxley (HH) oscillator. Our results demonstrate that the GPPI method can accurately estimate the phase function even in the presence of substantial observation noise and strong nonlinearity from relatively small dataset.
Furthermore, we have demonstrated that the phase function estimated by the GPPI method is effective for data-driven phase control with impulse input.
As previously stated in the Introduction, the DPR method proposed in <cit.> is an existing method with the same objective as the proposed method. We have shown that the GPPI method can estimate the phase function with fewer data points (Fig. <ref>) and is more robust to observation noise than the DPR (Fig. <ref> and Table <ref>).
The GPPI differs from the DPR mainly in two respects, which contribute to the improvement. The first difference is in the regression method. The GPPI employs Gaussian process regression, whereas the DPR is based on the polynomial regression. Gaussian processes have a higher expressive power than polynomial functions.
The second difference is in the object of the regression. The GPPI fits the functions of the asymptotic phase, i.e., sinΘ(𝐗) and cosΘ(𝐗), whereas the DPR fits a differential equation that the phase satisfies (see <ref>).
One advantage of the GPPI is that it can be applied to time series with a large sampling interval, because it does not require the derivative of the state variables.
The Phase AutoEncoder (PAE), recently proposed in <cit.>, is comparable to the GPPI method in that it estimates the phase function from the time series. Moreover, it can be applied to limit cycle oscillators from high-dimensional dynamical systems.
One advantage of the PAE is its capacity to learn the phase function from a large amount of data. However, it requires the tuning of a large number of hyperparameters, including the number of neurons and layers, as well as the weights of individual loss functions, which is typically done through a process of trial and error.
In contrast, the proposed method has only two hyperparameters (σ_f and σ_l), which are optimized by maximizing the likelihood function.
It has been shown that the asymptotic phase can also be interpreted as the argument of the Koopman eigenfunction <cit.>.
The existing methods for estimating Koopman eigenfunctions, including Dynamic Mode Decomposition (DMD) and its extensions <cit.> are applicable for estimating the asymptotic phase.
However, there remain challenges for estimating the asymptotic phase,
as an accurate estimate of the Koopman eigenvalue (i.e., the natural frequency) is required.
In contrast, we separately estimated the accurate natural frequency by using the Poincaré section in the present method.
As an application of the phase function estimation, we have considered the problem of controlling the phase of an oscillator by impulse input.
Specifically, we consider the HH oscillator and control the phase by using the pulse in the +V direction, assuming that the time series {V(t), h(t), m(t), n(t) } is available.
While it is possible to stimulate the neuron using such an impulse in an experiment <cit.>, the variables (m, h, and n), referred to as kinetic variables, cannot be measured directly.
Nevertheless, it is possible to infer the kinetic variables from the measured data <cit.>.
Consequently, the control of limit cycle oscillators in the real world, such as neurons and the human heart and respiratory system, using the proposed method would be one of the important directions of future research.
A limitation of this study is the computational cost required for Gaussian process regression. A potential way to improve the estimation accuracy is to increase the amount of data.
However, the computational complexity of Gaussian process regression is O(N^3), where N is the number of data points, rendering it an impractical method when data size increases.
Indeed, the HH oscillator required approximately 10 times as many data points as the SL oscillator for the phase function estimation, which resulted in a computation time more than 100 times longer.
A promising avenue for future research would be to improve the Gaussian process regression algorithm for more efficient phase function estimation. For example, we could use fast approximation algorithms for Gaussian process regression <cit.>, including the FIC method <cit.>.
For high-dimensional limit cycle oscillators, the data distribution in the state space may be non-uniform for the following reasons: 1) The dynamical system may possess multiple time constants, resulting in fast and slow parts of the limit cycle orbit, 2) The shape of the limit cycle may become complex in state space.
It is essential to obtain the data points so that their distribution is close to a uniform distribution when using the GPPI method. This is because Gaussian process regression implicitly assumes a uniform distribution of the data.
While the GPPI can accurately estimate the phase function of the HH oscillator (Fig. <ref> and <ref>), it requires proper data sampling.
When we collect all the time series, the data tends to be concentrated in the vicinity of the limit cycle. Additionally, the data density is relatively low in regions where the state variable changes rapidly, such as the spike in the HH oscillator.
It would be an interesting topic of future research to extend the GPPI so that it can be easily applied to a variety of limit cycle oscillators.
In this study, we have presented the Gaussian Process Phase Interpolation, a method for estimating the phase function of the limit cycle oscillator from time series data. We have demonstrated the utility of the proposed GPPI method by applying it to simulated limit cycle oscillators. Our future goal is to apply the GPPI to limit cycle oscillators in the real world and develop a methodology for controlling their synchronization dynamics. To this end, we will refine the phase estimation method so that it can handle more complex oscillatory systems.
§ ACKNOWLEDGEMENTS
We thank Norihisa Namura for sharing us the code used in <cit.>. We thank Hiroshi Kori and Massimiliano Tamborrino for fruitful discussion.
This study was supported by
the World-leading Innovative Graduate Study Program in Proactive Environmental Studies (WINGS-PES), the University of Tokyo, to T.Y.,
JSPS KAKENHI (Nos. JP22K11919 and JP22H00516) and Japan Science and Technology Agency CREST (No. JPMJCR1913) to H.N., and
JSPS KAKENHI (Nos. JP18K11560, JP21H03559, JP21H04571, JP22H03695, and JP23K24950) and AMED (Nos. JP21wm0525004 and JP223fa627001) to R.K.
§ DERIVATIVE PHASE REGRESSION METHOD
We briefly describe the derivative phase regression method (DPR method) proposed by <cit.>.
The DPR method estimates two functions, s(𝐗):=sinΘ(𝐗) and c(𝐗):=cosΘ(𝐗), as does the proposed method (the GPPI method).
In contrast to the GPPI, which regresses the functions s(𝐗) and c(𝐗) directly using Gaussian processes,
the DPR regresses the time derivatives of these functions using polynomial functions.
Here, we describe an overview of the DPR method (see <cit.> for details).
Taking the time derivatives of the functions s(𝐗) and c(𝐗) using Eq. (<ref>), we obtain
∇ s(𝐗(t)) ·𝐗̇(t) = ω c(𝐗(t)),
∇ c(𝐗(t)) ·𝐗̇(t) = -ω s(𝐗(t)),
where ∇ denotes the gradient of a function, and · denotes the inner product.
Let us approximate these functions with a polynomial function,
s(𝐗) = ∑_k a_kf_k(𝐗),
c(𝐗) = ∑_k b_kf_k(𝐗),
where f_k(𝐗) is a polynomial function of the state variables (x_1, x_2, ⋯ ,x_d).
The parameters { a_k, b_k} are determined to satisfy Eqs. (<ref>) and (<ref>), which is achieved by minimizing the error function E( {a_k}, { b_k } ),
E:= ∑_i(∑_ka_k∇ f_k(𝐗_i)·𝐗̇_i - ω∑_kb_kf_k(𝐗_i))^2
+ ∑_i(∑_kb_k∇ f_k(𝐗_i)·𝐗̇_i + ω∑_ka_kf_k(𝐗_i))^2.
We also require a constraint to fix the origin of the estimated phase function ( Θ(𝐗_0)= 0 ), i.e.,
s(𝐗_0) = ∑_k a_kf_k(𝐗_0) = 0,
c(𝐗_0) = ∑_k b_kf_k(𝐗_0) = 1.
The minimization problem of the error function E( {a_k}, { b_k } ) with the constraint (Eqs. <ref> and <ref>) is a quadratic programming. Thus, we can determine the parameters { a_k, b_k} using an efficient solver, such as quadprog in MATLAB.
Finally, the phase function Θ(𝐗) is obtained from the estimates of the functions (s(𝐗), c(𝐗)) using Eq. (<ref>).
§ ESTIMATION ERROR OF THE PHASE FUNCTION
We evaluate the estimation performance of the asymptotic phase function by calculating the error E(𝐗)= |Θ̂(𝐗) - Θ(𝐗)|, where Θ̂(𝐗) and Θ(𝐗) represent the estimated and true phase functions, respectively.
Figure <ref>a shows the data used for estimation and the estimation error in the phase function for the Stuart-Landau oscillator, which corresponds to Fig. <ref> (Sec.<ref>). While the DPR method used all the 50,000 data points for estimating the phase function, the proposed method (GPPI) used only 1000 data points sampled from the data. The strength of the observation noise was η=0.005.
Figure <ref>b shows the data used for estimation and the estimation error in the phase function for the Hodgkin-Huxley oscillator, which corresponds to Fig. <ref> (Sec.<ref>).
|
http://arxiv.org/abs/2409.02967v1 | 20240904040513 | A Statistical Derivation of Bekenstein-Hawking Entropy for Schwarzschild Black Holes | [
"Naman Kumar"
] | gr-qc | [
"gr-qc"
] |
School of Basic Sciences, Indian Institute of Technology, Bhubaneswar, 752050
[email protected]
§ ABSTRACT
A microscopic derivation of the Bekenstein-Hawking entropy for the Schwarzschild black hole was presented earlier by using a non-trivial phase space. It was argued that the Schwarzschild black hole behaves like a 1D quantum mechanical system. In this paper, we show that if we assume the phase space to obey the holographic principle and take the microscopic particles inside the quantum gravitational system to be an ideal bosonic gas, we can derive the Bekenstein-Hawking entropy. The assumption that the phase space follows the holographic principle, such that the Schwarzschild black hole behaves as a 2D system, is much more in the spirit of our understanding of black holes than their behavior as a 1D system. The argument further suggests that the black hole be treated as a system with the equation of state P=ρ.
*Keywords:
Bekenstein-Hawking Entropy, Statistics of Black Holes, Holographic principle, 2D phase space.
A Statistical Derivation of Bekenstein-Hawking Entropy for Schwarzschild Black Holes
Naman Kumar
====================================================================================
§ INTRODUCTION
It is well known that black holes have thermodynamic behaviour. Hawking<cit.> first showed that the area of black holes is a non-decreasing function of time
dA/dt≥0
Bekenstein<cit.> argued that this similarity can be attached to the thermodynamic concept of entropy and proposed the equation
S=η k_BA/l^2_p
where η was an undetermined dimensionless constant. When Hawking<cit.> derived the temperature of the black hole, the speculations turned robust, and the entropy was obtained as
S=k_BA/4l^2_p
thereby fixing the dimensional constant η. This is the famous Bekenstein-Hawking entropy. This result is very interesting as it suggests that the entropy scales as the area rather than the conventional volume. This equation further led to the idea of the holographic principle<cit.>, which suggests that the information about the volume of space is stored on its boundary. The best realization of the holographic principle is the AdS/CFT correspondence<cit.>. In<cit.>, a microscopic derivation of the Bekenstein-Hawking entropy for Schwarzschild black hole was presented by assuming a non-trivial phase space. The independent quantized modes were calculated by gc^3Vdp/2π Għ^2 rather than the conventional Vd^3p⃗/(2πħ)^3, thereby effectively treating the 3+1 Schwarzschild black hole as a 1D quantum mechanical system. For example, in the conventional case, the logarithm of the partition function of a photon gas system is given by<cit.>
lnΞ=-2V/(2πħ)^3∫ln(1-e^-β cp)d^3p=π^2/45c^3ħ^3V/β^3
It was further argued that the black hole be viewed as a system with the equation of state P=ρ. This equation of state has also been attached with black hole thermodynamics earlier (see <cit.> and references therein). Each point in the phase space represents a state of the system. In the spirit of the holographic principle, we take the phase space to be 2D such that the state of the system is essentially contained completely on the 2D phase space. The paper is organized as follows. We first review the earlier derivation presented in<cit.>. Then we derive the Bekenstein-Hawking entropy formula using a statistical approach. Finally, we discuss the motivation behind statistical derivation based on the emergent nature of gravity and information limit due to the holographic principle.
§ A BRIEF REVIEW OF EARLIER DERIVATION
The following assumptions were made: First, the particles inside the quantum gravitational system are bosonic and massless. Second, they obey the energy-momentum relation ϵ=pc with p=|p⃗| and third, the calculation of quantized modes is done by gc^3Vdp/2π Għ^2. Next, the Schwarzschild black hole of radius R in 3+1 dimensions is modelled as a system consisting of these particles. The logarithm of the partition function, in this case, is given by
ln Z=-gc^3V/π Għ^2∫_0^∞ln(1-e^-β cp)dp=gπ c^2V/6Għ^2β
Then we obtain the expressions for entropy and energy as
E=-∂/∂βln Z=gπ k_B^2c^2/6Għ^2VT^2
S=k_B(ln Z+β E)=gπ k_B^2c^2/3Għ^2VT
The pressure is given by
P=k_BT∂ln Z/∂ V=gπ k_B^2c^2/6Għ^2T^2
Comparing with ρ=E/V gives the equation of state as P=ρ. The Komar mass as the gravitational source corresponds to (ρ+3P)V, which gives
Mc^2=4E=2gπ k_B^2c^2/3Għ^2VT^2=2gπ k_B^2c^2/3Għ^2·4/3π R^3T^2
Taking M to be the mass of the black hole and choosing g=9 along with R=2GM/c^2 for a Schwarzschild black hole gives the Hawking temperature, T=ħ c^3/8π Gk_BM. Substituting into Eq.(<ref>), we get the Bekenstein-Hawking entropy.
§ A STATISTICAL DERIVATION BASED ON
HOLOGRAPHIC PRINCIPLE
We make the following assumptions about the microscopic particles inside a quantum gravitational system. First, the particles are ideal bosonic gas with mass m such that their partition function follows Bose-Einstein statistics and they obey the energy-momentum relation of an ideal gas which is given by ϵ=p^2/2m where p=|p⃗|. Second, the calculation of independent quantized modes follows the holographic principle such that the states of this system are completely specified by a 2D phase space and is evaluated by gV/l_pd^2p⃗/(2π)^2ħ^2=gV√(c^3/Għ)d^2p/(2πħ)^2, where g a dimensionless constant is introduced to include any other degrees of freedom and the gravitational system is effectively 2D with area A=V/l_p. We model the Schwarzschild black hole of radius R in 3+1 dimensions as a system consisting of these particles.
The logarithm of the partition function(Z) is
ln Z=-√(c^3/Għ)2gV/(2πħ)^2∫_0^∞ln(1-e^-p^2/2mk_BT)d^2p⃗
It is important to remark that if we consider the black hole to consist of massive particles (an ideal bosonic gas in this case), then the 2D phase space follows inevitably, as can be verified from Eq. (<ref>); otherwise, we cannot recover the Bekenstein-Hawking entropy.
ln Z =-√(c^3/Għ)2gV/(2πħ)^2∫_0^∞ 2πln(1-e^-p^2/2mk_BT)pdp=√(c^3/Għ)2gV/(2πħ)^2π^3m/3β
where β=1/k_BT and V=4/3π R^3. We, therefore, get the expressions for the energy and entropy as
E=-∂/∂βln Z=√(c^3/Għ)2gπ^3k_B^2/3(2πħ)^2mVT^2
S=k_B(ln Z+β E)=√(c^3/Għ)4gπ^3k_B^2/3(2πħ)^2mVT
The pressure is given by
P=k_BT∂ln Z/∂ V=√(c^3/Għ)2gπ^3m/3(2πħ)^2k_B^2T^2
Comparing with ρ=E/V gives the equation of state as P=ρ.
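The momentum integral behind the expression for ln Z above can be checked numerically; the sketch below works in units with m = k_B = T = 1 and truncates the upper limit, which is harmless because the integrand decays like a Gaussian.

```python
# Sketch: numerical check of the 2-D phase-space integral used for ln Z,
#   -int_0^inf ln(1 - exp(-p^2/(2 m k_B T))) * 2*pi*p dp = pi^3 * m * k_B * T / 3.
import numpy as np
from scipy.integrate import quad

m = kB = T = 1.0
integrand = lambda p: -np.log1p(-np.exp(-p**2 / (2 * m * kB * T))) * 2 * np.pi * p
value, _ = quad(integrand, 0.0, 50.0)        # the integrand is negligible beyond p ~ 10
print(value, np.pi**3 * m * kB * T / 3)      # the two numbers should agree closely
```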
The Komar mass as the source then gives Mc^2=4E. Thus, we get
Mc^2=4E=√(c^3/Għ)8gπ^3k_B^2/3(2πħ)^2mVT^2
Taking gm=9ħ/cl_p=9m_p where m_p is Planck mass, and V=4/3π R^3 such that R=2GM/c^2 for a Schwarzschild black hole, we finally obtain using Eq.(<ref>)
T=ħ c^3/8π Gk_BM
which is the Hawking temperature. Putting this value in Eq.(<ref>) we get
S=k_Bπ R^2c^3/Għ=k_BA/4l_p^2
where l_p^2=Għ/c^3. Thus, we obtained the Bekenstein-Hawking entropy by assuming that the phase space follows the holographic principle. On comparing Eq.(<ref>) and Eq.(<ref>) we get the relation
2TS=Mc^2
This is the Smarr formula for the 3+1 dimension Schwarzschild black hole. It is important to emphasize here the importance of Eq.(<ref>). Deducing this equation in this microscopic derivation is necessary to derive the correct form of the Bekenstein-Hawking Entropy; otherwise, we will end up with the wrong coefficient even if we obtain the correct Hawking temperature. Another thing to note is that the equation of state P=ρ is critical to get the correct Smarr formula, and as discussed earlier, the failure to obtain the correct Smarr formula leads to the wrong coefficient. This suggests that the black hole is to be viewed as a system with the equation of state w≡P/ρ=1, as already argued in <cit.> and references therein. The mass m of each microscopic particle is then interpreted as m=m_p and the dimensionless constant g=9.
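The algebra leading from Mc^2=4E (with gm=9m_p and R=2GM/c^2) to the Hawking temperature and to the entropy above can be verified symbolically, for example with sympy; the symbol names below are ours and both printed differences are expected to simplify to zero.

```python
# Sketch: symbolic check of the Hawking temperature and Bekenstein-Hawking entropy.
import sympy as sp

G, hbar, c, kB, M, T = sp.symbols('G hbar c k_B M T', positive=True)
m_p = sp.sqrt(hbar * c / G)                      # Planck mass
gm = 9 * m_p                                     # the combination g*m chosen in the text
R = 2 * G * M / c**2
V = sp.Rational(4, 3) * sp.pi * R**3

E_expr = sp.sqrt(c**3 / (G * hbar)) * 2 * sp.pi**3 * kB**2 * gm * V * T**2 / (3 * (2 * sp.pi * hbar)**2)
T_sol = sp.solve(sp.Eq(M * c**2, 4 * E_expr), T)[0]          # positive root
print(sp.simplify(T_sol - hbar * c**3 / (8 * sp.pi * G * kB * M)))   # expected: 0

S = sp.sqrt(c**3 / (G * hbar)) * 4 * sp.pi**3 * kB**2 * gm * V * T_sol / (3 * (2 * sp.pi * hbar)**2)
A, l_p2 = 4 * sp.pi * R**2, G * hbar / c**3
print(sp.simplify(S - kB * A / (4 * l_p2)))                          # expected: 0
```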
§ CONCLUSION AND DISCUSSION
In this paper, we successfully derived the Bekenstein-Hawking entropy of a Schwarzschild black hole by considering it to consist of an ideal bosonic gas at the microscopic level with a phase space that obeys the holographic principle. It is remarkable that the correct Bekenstein-Hawking temperature can be exactly derived from such a simplistic picture. Moreover, in <cit.>, the authors reproduced the correct Bekenstein-Hawking entropy by taking w≡P/ρ=1, although the reasons were not fully understood. In <cit.>, the authors managed to exactly "derive" this result. Our derivation in a different setting strongly supports this idea. The phase space is taken to be two-dimensional such that all the information about the bulk system is contained in it. This essentially means treating the black hole as a two-dimensional system at the microscopic scale. The earlier derivation presented in <cit.> treated black holes as a 1D system in which the quantum particles were taken to be equivalent to a photon gas. Neither of these pictures seems very realistic. On the other hand, treating the black hole as a 2D system is very much in the spirit of our understanding of black holes, and the assumption that the quantum particles are an ideal Bose gas is also more realistic. It is also widely believed that gravity and spacetime have emergent origins. Verlinde <cit.> argued that gravity is an entropic force arising from the tendency of a microscopic theory to maximize its entropy. This shows that there is a maximum limit on the amount of information in a volume of space, which is also the content of the holographic principle. In this view, it becomes important to microscopically derive the Bekenstein-Hawking entropy. This strongly motivated us to study a statistical derivation based on the holographic principle. This derivation also suggests that the black hole be treated as a system with the equation of state w≡P/ρ=1. A series of works called holographic cosmology has been studied based on this equation of state <cit.>. We are hopeful that these results will inspire further investigation.
§ CONFLICT OF INTEREST
The author declares no conflict of interest.
|
http://arxiv.org/abs/2409.03368v1 | 20240905091444 | Training-free Conversion of Pretrained ANNs to SNNs for Low-Power and High-Performance Applications | [
"Tong Bu",
"Maohua Li",
"Zhaofei Yu"
] | cs.NE | [
"cs.NE"
] |
Training-free Conversion of Pretrained ANNs to SNNs for Low-Power and High-Performance Applications
Tong Bu Equal contributors
School of Computer Science
Peking University
Maohua Li [1]
School of Artificial Intelligence and Automation
Hohai University
Zhaofei Yu Corresponding author
Institution for Artificial Intelligence
Peking University
=============================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Spiking Neural Networks (SNNs) have emerged as a promising substitute for Artificial Neural Networks (ANNs) due to their advantages of fast inference and low power consumption. However, the lack of efficient training algorithms has hindered their widespread adoption. Existing supervised learning algorithms for SNNs require significantly more memory and time than their ANN counterparts. Even commonly used ANN-SNN conversion methods necessitate re-training of ANNs to enhance conversion efficiency, incurring additional computational costs. To address these challenges, we propose a novel training-free ANN-SNN conversion pipeline. Our approach directly converts pre-trained ANN models into high-performance SNNs without additional training. The conversion pipeline includes a local-learning-based threshold balancing algorithm, which enables efficient calculation of the optimal thresholds and fine-grained adjustment of threshold value by channel-wise scaling. We demonstrate the scalability of our framework across three typical computer vision tasks: image classification, semantic segmentation, and object detection. This showcases its applicability to both classification and regression tasks. Moreover, we have evaluated the energy consumption of the converted SNNs, demonstrating their superior low-power advantage compared to conventional ANNs. Our training-free algorithm outperforms existing methods, highlighting its practical applicability and efficiency. This approach simplifies the deployment of SNNs by leveraging open-source pre-trained ANN models and neuromorphic hardware, enabling fast, low-power inference with negligible performance reduction.
§ INTRODUCTION
Recent advancements in large models have significantly reshaped the landscape of deep learning technology, industry, and the research community. These advanced models, characterized by their massive scale and unprecedented zero-shot generalizability in downstream tasks, greatly impact our daily lives. Nevertheless, the pursuit of larger models raises concerns about high energy consumption for model inference and training. The deployment of large models on resource-constrained devices has also become a challenge. Spiking Neural Networks (SNNs), known for their fast inference and low power consumption, offer a potential solution to these issues as an alternative to Artificial Neural Networks (ANNs).
Spiking neural networks are a well-established type of neural network that mimics the function of biological neurons and are considered the third generation of ANNs <cit.>. Unlike conventional ANNs, SNNs utilize more complex spiking neurons as their basic components and employ diverse encoding methods. Spiking neurons have two main characteristics: they encode information as discrete events (binary spikes or action potentials) and generate output based on their inputs and current internal state. These characteristics enable spiking neurons to process spatio-temporal information and use sparse representation, attracting increasing attention from researchers.
With the development of the neuromorphic computing hardware <cit.>, SNNs can be deployed on neuromorphic chips and further applied in power-limited scenarios <cit.>. However, the lack of efficient training algorithms has hindered the widespread application of SNNs. Recent learning methods have made progress in training deep CNN-based SNNs <cit.> or large spiking transformers using supervised learning algorithms <cit.>.
Although SNNs obtained through supervised training require little inference time and have comparable performance to ANNs with the same network architecture, the memory and time cost of training an SNN, which is linearly proportional to the inference time-step, can be multiple times higher than that of an ANN. This makes it impractical to train large energy-efficient models.
ANN-SNN conversion is another feasible way to train SNNs <cit.>. ANN-SNN conversion algorithms are designed to obtain SNNs from pre-trained ANNs with little computational cost. However, the converted SNNs often need more inference time-steps to achieve comparable performance as ANNs, which can increase inference latency and energy consumption. One possible solution is to retrain a modified ANN and then convert it to an SNN, which significantly improves SNN performance while reducing inference time-steps <cit.>. Although retraining-based ANN-SNN conversion methods perform better in most tasks, the cost of retraining an ANN is not negligible. In Figure <ref>-a/b, we compare the supervised training method <cit.>, hybrid training method <cit.>, training-required ANN-SNN conversion method <cit.> and our training-free ANN-SNN conversion method. We find that the training-free conversion method can efficiently train SNNs with very little computational overhead while balancing performance and inference speed. In Figure <ref>-c, we further clarify the difference between training-free and training-required conversion. Training-free methods allow for the conversion of SNNs from open-source pre-trained ANN models, reducing or even eliminating dependence on GPU resources.
Therefore, in this paper, we are looking for an algorithm that can directly get high-performance SNNs from pre-trained ANNs without additional training or finetuning. This simplifies the deployment of spiking neural networks by directly obtaining SNNs from the open-source pre-trained ANN models and directly applying SNNs to the neuromorphic chip.
In this paper, we provide a solution for the training-free conversion of high-performance and low-power SNNs. We mainly focus on developing a general conversion method, applicable to different models and tasks. By improving the data-based threshold balancing algorithm, we utilize a more effective local learning approach for fast ANN-SNN conversion without relying on costly backpropagation-based training or fine-tuning.
Our contribution can be summarized as follows:
* We introduce the concept of training-free ANN-SNN conversion. Such conversion methods are able to obtain high-performance SNNs from pre-trained ANN models without any additional training or fine-tuning. Due to the advantage of low training cost, the training-free algorithms are more feasible solutions for developing large-scale SNNs.
* We propose an efficient training-free algorithm, theoretically deriving the upper bound of the conversion error between the original ANN and the converted SNN. To minimize the conversion error bound, we introduce a local threshold balancing method for threshold value optimization, which can efficiently convert pre-trained ANN model into their SNN counterparts using very little computational resources.
* We successfully scale our conversion framework to three typical computer vision tasks. In conventional image classification tasks, our training-free algorithm outperforms other training-free conversion-based methods. Additionally, we implement ANN-SNN conversion on multiple regression-based vision tasks, including semantic segmentation
and object detection tasks, demonstrating the practical applicability of the training-free ANN-SNN conversion algorithm.
§ RELATED WORKS
Researchers have endeavoured to explore effective SNN training approaches for decades. Due to the brain-like characteristics of SNNs, early research mainly focused on the unsupervised Spike-Timing-Dependent Plasticity (STDP) algorithm, inspired by the Hebbian learning principles <cit.>. These studies mainly aimed to utilize the STDP algorithm to train shallow networks and design unsupervised feature extractor <cit.>. At the same time, supervised training approaches have been developed to achieve high-performance SNNs. <cit.> were pioneers in applying the backpropagation algorithm to train SNNs, introducing a temporal-based backpropagation method that uses timestamps as intermediate variables during gradient backpropagation. In another approach, Wu et al. <cit.> treated SNNs as recurrently connected networks and utilized Back Propagation Through Time (BPTT) for training, termed spatial-temporal backpropagation (STBP). Unlike these methods, Cao et al. <cit.> proposed an ANN-SNN conversion method that achieves high-performance SNNs from pre-trained ANNs. Currently, research efforts are primarily focused on enhancing the efficiency of training algorithms for high-performance SNNs, leveraging the three methods mentioned above.
Based on the training cost, those learning methods of SNNs can be ascribed into two types, training-free learning methods and training-required learning methods. The training-required methods, including direct training of SNNs <cit.>, conversion from re-trained ANNs<cit.> and finetuning for converted SNNs <cit.>, usually requires the participation of the back-propagation algorithms on the GPUs.
The training-free learning methods are conversion-based algorithms that directly convert the pre-trained ANN into SNN without additional training or finetuning. <cit.> mapped the weights of a light CNN network to an SNN and reported a great improvement of the energy efficiency based on the hardware analysis. Since the conversion-based method have nature advantage for fast training of neuromorphic-hardware-compatible spike-based networks, researchers began to explore advanced conversion techniques targeting for high-performance SNNs. <cit.> propose a weight and threshold balancing method to avoid performance degradation. <cit.> further extend the conversion-based learning to LIF neurons. The theoretical framework of the conversion-based algorithm has been proposed by <cit.>. A more effective threshold balancing method, robust normalization, is proposed to balance the inference latency and accuracy <cit.>. <cit.> convert ANN to temporal coding SNNs and promote accuracy and further use the reset-by-subtraction mechanism instead of the reset-to-zero mechanism to prevent information loss in <cit.>. The conversion-based method can also be applied to regression tasks such as object detection, as <cit.> successfully converted a spike-based object detection model. <cit.> suggest adding initial membrane potential for better performance and low latency. <cit.> use a light conversion pipeline to search optimal threshold and add a bias term at each inference step. Although performance is limited, those lightweight training methods are able to acquire SNNs from pre-trained ANNs with negligible cost of multiple iterations of inference.
§ PRELIMINARIES
§.§ Neuron Models
The core components of conventional ANNs are point neurons, where their forward propagation can be represented as a combination of linear transformations and non-linear activations, that is
a^l=f(w^l a^l-1).
Here, w^l is the weights between layer l-1 and layer l, a^l is the activation vector in layer l, and f(·) denotes the non-linear activation function, commonly set as the ReLU function or its variants.
The neurons in SNNs are spiking neurons with temporal structures. Like most ANN-SNN conversion methods <cit.>, we employ the commonly used Integrate-and-Fire (IF) neuron <cit.> in this paper.
For computational convenience, we use the following discrete neuron function:
v^l(t^-) =v^l(t-1)+w^l s^l-1(t)θ^l-1,
s^l(t) = H(v^l(t^-)-θ^l),
v^l(t) = v^l(t^-)-θ^l s^l(t).
Here s^l(t) indicates whether neurons in layer l fire at time-step t; a spike is emitted once the membrane potential before firing, v^l(t^-), reaches the firing threshold θ^l, as represented by the Heaviside step function H(·). For the membrane potential after spike firing, v^l(t), we consider the “reset-by-subtraction” mechanism <cit.> to alleviate information loss. Specifically, after emitting a spike, the membrane potential is reduced by the threshold θ^l instead of decaying to the resting potential.
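For illustration, the discrete IF dynamics above can be sketched in a few lines of PyTorch; the class name, tensor shapes, and the lazy state initialization are our own illustrative choices rather than the exact implementation used here.

import torch

class IFNeuron(torch.nn.Module):
    """Integrate-and-Fire neuron with reset-by-subtraction (illustrative sketch)."""
    def __init__(self, threshold: float):
        super().__init__()
        self.threshold = threshold
        self.v = None  # membrane potential state

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.v is None or self.v.shape != x.shape:
            # Initialize the membrane potential at theta / 2, matching the
            # membrane potential initialization used later in the evaluation pipeline.
            self.v = torch.full_like(x, 0.5 * self.threshold)
        self.v = self.v + x                          # integrate the weighted input current
        spike = (self.v >= self.threshold).float()   # fire when v reaches theta
        self.v = self.v - spike * self.threshold     # reset by subtraction
        return spike * self.threshold                # postsynaptic potential s(t) * theta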
§.§ ANN-SNN Conversion
The pivotal idea of ANN-SNN conversion is to map the activation of analog neurons to the postsynaptic potential (or firing rate) of spiking neurons. Specifically, by accumulating the neuron function (Equations (<ref>) and (<ref>)) from 1 to T and dividing both sides by time-step T, we obtain:
1/T(v^l(T)-v^l(0))=1/T∑_t=1^Tw^ls^l-1(t) θ^l-1 -1/T∑_t=1^Tθ^ls^l(t).
This equation reveals a linear relationship between the average firing rate of neurons in adjacent layers by defining average postsynaptic potential as r^l(t) = ∑_i=1^t s^l(i) θ^l /t:
r^l(T)=w^l r^l-1(T)-1/T(v^l(T)-v^l(0)).
As T → +∞, we can generally assume that the conversion error approaches zero. Therefore, using the equations presented, a trained ANN model can be converted to an SNN by replacing ReLU neurons with IF neurons, which is the primary principle of ANN-SNN conversion. Note that Equations (<ref>) and (<ref>) are not identical, indicating that some conversion error typically remains.
§.§.§ Threshold Balancing
The most common methods to reduce the conversion errors discussed above include threshold balancing <cit.> and weight scaling algorithms <cit.>.
Both approaches aim to adjust the synaptic weights or neuron thresholds according to the neuron input distribution range, thus minimizing the clipping error. Previous studies have shown that weight scaling and threshold balancing are equivalent in effect and can achieve high-performance SNNs <cit.>. For the sake of simplicity, we adopt the threshold balancing method here, which involves determining the optimal threshold value for each layer, allowing the weights to remain consistent between the ANN and SNN models.
There are mainly two types of threshold balancing algorithms: model-driven and data-driven algorithms. The former calculates the maximum activation value from the given model weights, and uses it as the threshold. The latter involves sampling a subset of data from the training set for inference, obtaining a statistical distribution of activation values, and then determining the threshold with specific algorithms. Data-driven algorithms are typically more effective and have been widely used in recent works <cit.>. For example, one commonly used training-free conversion method is robust normalization, which sets the threshold at the 99-th percentile of the total activity distribution. The algorithm presented in this paper also utilizes a data-driven approach.
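As a concrete reference for the data-driven strategy described above, the robust normalization threshold of a layer can be estimated from ANN activations collected on a small data sample; the following sketch assumes NumPy and the 99-th percentile rule, and the function name is illustrative.

import numpy as np

def robust_norm_threshold(activations: np.ndarray, percentile: float = 99.0) -> float:
    """Data-driven threshold: a high percentile of the layer's ANN activations.

    `activations` is a flattened array of post-ReLU outputs recorded while
    running the pretrained ANN on a subset of the training images.
    """
    positive = activations[activations > 0]
    if positive.size == 0:
        return 0.0
    return float(np.percentile(positive, percentile))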
§ METHODS
In this section, we present our training-free ANN-SNN conversion pipeline. First of all, we define the conversion error and derive its upper bound. After that, we propose a local threshold balancing method based on a local learning rule. Additionally, we also introduce the channel-wise threshold balancing technique, which refines the scaling parameter to enhance overall performance without altering the behavior of IF neurons.
§.§ Conversion Error Bound
We follow the representation of the conversion error from <cit.> and define the layer-wise conversion error e^l in layer l as the 2-norm between the postsynaptic potential of neurons in SNNs and the activation output of neurons in ANNs, that is
e^l = ‖PSP(ẑ^l; θ^l) - ReLU(z^l)‖_2.
Here, ẑ^l = w^l r^l-1 denotes the average input current from layer l-1 in SNNs, and z^l = w^l a^l-1 represents the activation input from layer l-1 in ANNs. Both the ANN model and the SNN model share the same weights, denoted as w^l. ReLU(z) represents the output of the nonlinear ReLU activation function, while PSP(ẑ^l; θ^l)=r^l(T) represents the average output postsynaptic potential at the last time-step T given the average input ẑ^l.
Since our goal is to minimize the error of the final outputs in most tasks, we can define the conversion error between the ANN and SNN models as the layer-wise conversion error in the last layer L, denoted as e_model = e^L. A straightforward approach is to use the conversion error between models as the loss function and optimize it, which can be viewed as a variant of knowledge distillation. However, this training method is similar to the supervised training of SNNs, which is costly in terms of time and memory. To address this, we scale the conversion error and estimate the error upper bound, thereby simplifying the task. We present the following theorem.
Theorem 1 The layer-wise conversion error can be divided into intra-layer and inter-layer errors:
e^l ⩽‖PSP(ẑ^l; θ^l) - ReLU(ẑ^l)‖_2^intra-layer error + ‖w^l ‖_2 e^l-1^inter-layer error.
Given that both ANN and SNN models receive the same input in the first layer, resulting in e^0 = 0, the upper bound for the conversion error between models in a L-layer fully-connected network is
e_model=e^L ⩽∑_l=1^L (∏_k=l+1^L‖w^k ‖_2 ) ‖PSP(ẑ^l; θ^l) - ReLU(ẑ^l) ‖_2
The detailed proof is provided in the Appendix. Theorem 1 indicates that the conversion error is bounded by the weighted sum of the error in each layer. Based on Theorem 1, we will derive the local threshold balancing algorithm in the next section.
§.§ Local Threshold Balancing Algorithm
The target of ANN-SNN conversion is to minimize the conversion error and achieve high-performance SNNs. An alternative approach involves optimizing the error bound, defined as:
min_θ𝔼_x^0 ∈𝒟[ ∑_l=1^L (∏_k=l+1^L‖w^k ‖_2 ) ‖PSP(ẑ^l; θ^l) - ReLU(ẑ^l) ‖_2 ].
Here θ represents the set of threshold values to be optimized across all layers, that is, θ={θ^1,θ^2,...,θ^L}. x^0 represents the input data sample drawn from the dataset 𝒟. This approach leverages data-driven conversion, minimizing the expectation of the conversion error bound over the data distribution. To reduce computational costs during optimization, we made some simplifications. Firstly, we use a greedy strategy to optimize the threshold θ, allowing the optimization process to be performed layer by layer,
which is
For each layer l, min_θ^l𝔼_x^0 ∈𝒟[ (∏_k=l+1^L‖w^k ‖_2 ) ‖PSP(ẑ^l; θ^l) - ReLU(ẑ^l) ‖_2 ].
Secondly, we simplify the function PSP(·; θ) by neglecting its temporal dynamics and approximating it with the clipping function ReLUX(·; θ). Previous works have demonstrated that the PSP and clipping functions can be identical given a sufficient number of time-steps <cit.>. We utilize the squared norm in the objective function for smoother optimization. The optimization problem then becomes:
For each layer l, min_θ^l𝔼_x^0 ∈𝒟 (∏_k=l+1^L‖w^k ‖_2 ) ‖ReLUX(ẑ^l;θ^l) - ReLU(ẑ^l)‖_2^2,
where ReLUX(·; θ) = min (max(0, x), θ).
Intuitively, we can employ the stochastic gradient descent algorithm to optimize the threshold. Since ∏_k=l+1^L‖w^k ‖_2 can be considered as a constant when the weights are fixed for each layer, this term can be absorbed into the parameter learning rate. The final update rule for the local threshold balancing algorithm at each step is:
Δθ^l = - 1/N∑_i=0^N 2(ẑ^l_i-θ^l)H(ẑ^l_i-θ^l),
θ^l ←θ^l - ηΔθ^l.
The detailed derivation is provided in the Appendix. Here Δθ^l is the update direction computed at each optimization step and η is the learning rate (update step size). N is the number of elements in the input vector, and ẑ^l_i denotes the i-th element of ẑ^l. H(·) is the Heaviside step function.
In practice, as shown in Figure <ref>, we jointly optimize the threshold for each layer by sampling a small set of image samples from the training dataset. Before starting the conversion process, we first replace all ReLU functions by ReLUX functions and set the initial value of the threshold to zero. We then randomly sample data batches from the training dataset and run model inference with these data samples. The thresholds are locally updated during the inference process until they converge. Note that the parameter θ^l is updated locally, ensuring that the computational cost of the conversion process is similar to the inference of the ANN model over the given data samples.
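A minimal sketch of one local update step is given below, assuming PyTorch tensors; in practice the statistic is accumulated during the forward pass on the sampled minibatch, and for channel-wise thresholds the mean is taken over all dimensions except the channel axis.

import torch

@torch.no_grad()
def local_threshold_step(z_hat: torch.Tensor, theta: torch.Tensor, lr: float = 1e-3) -> torch.Tensor:
    """One local threshold-balancing update for a layer (illustrative sketch).

    z_hat : pre-neuron inputs observed on a sampled minibatch.
    theta : current (scalar) threshold of the layer.
    """
    excess = z_hat - theta
    # Delta = -(1/N) * sum 2 (z - theta) H(z - theta); only inputs above theta contribute.
    delta = -(2.0 * excess * (excess > 0).float()).mean()
    return theta - lr * delta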
§.§ Channel-wise Threshold Balancing
During conversion, similar to <cit.>, we employ a channel-wise threshold balancing method, applicable to both fully connected and convolutional layers.
Different from previous conversion algorithms, where all neurons in a layer share a single threshold value, our method adopts a finer-grained threshold approach: for convolutional layers the threshold is determined channel-wise, so that all neurons in the same channel share one threshold, while for fully connected layers the threshold is optimized element-wise, so that each neuron has its own threshold.
After optimization, the thresholds for each channel can be individually absorbed into the corresponding channel of the convolutional kernel or the weight vectors of the fully connected layers. This channel-wise (or element-wise) operation ensures that the behavior of the IF neuron remains unaffected. This algorithm aims to more precisely determine the optimal threshold value while preserving the fundamental properties of the spiking neuron.
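One possible way to realize this absorption for a convolutional layer is sketched below, assuming PyTorch and per-output-channel thresholds: scaling w[oc, ic] by θ_in[ic]/θ_out[oc] (and the bias by 1/θ_out[oc]) lets every IF neuron use a unit threshold while preserving the layer's input-output mapping. The exact bookkeeping in our pipeline may differ; the function name is illustrative.

import torch

@torch.no_grad()
def fold_channel_thresholds(conv: torch.nn.Conv2d,
                            theta_in: torch.Tensor,
                            theta_out: torch.Tensor) -> None:
    """Absorb channel-wise thresholds into a Conv2d (illustrative sketch).

    theta_in  : thresholds of the previous layer, one per input channel
                (use ones for the first layer).
    theta_out : optimized thresholds of this layer, one per output channel.
    """
    conv.weight.mul_(theta_in.view(1, -1, 1, 1))   # rescale incoming spike amplitudes
    conv.weight.div_(theta_out.view(-1, 1, 1, 1))  # normalize this layer's threshold to 1
    if conv.bias is not None:
        conv.bias.div_(theta_out)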
§.§ Pre-Neuron Maxpooling Layer
The conversion of the max pooling layer is also a challenging problem that needs to be addressed. As a pivotal feature in the process of down-sampling input representations, the max pooling layer is a fundamental component in convolutional neural networks and is widely used in most convolutional architectures. However, due to the binary nature of spiking neurons, max pooling layers cannot effectively perform downsampling in feature maps. Typically, during a single max-pooling operation, multiple elements may have the same value, preventing the extraction of the most salient features. This often leads to the avoidance of using max pooling as a downsampling layer in SNNs. In most previous ANN-SNN methods, the max pooling layer in ANN is often replaced by an average pooling layer, and the model is re-trained or fine-tuned before conversion.
To convert architectures that contain max pooling layers, we propose a simple yet effective method. We replace all max-pooling layers with neuron layers, allowing the pre-neuron floating-point input to go through the downsampling layer before being input into the neurons.
The order of the max pooling layer and ReLU layer does not affect the output results.
max_i ReLU(z) = ReLU(max_iz), when max(z)>0.
The detailed proof is provided in the Appendix. Theorem 2 guarantees that such an operation does not affect the final performance. When deployed on neuromorphic chips or FPGAs, the max pooling operation can be performed through a comparator, preventing an increase in power consumption due to additional floating-point multiplication.
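The equivalence can also be checked numerically with a short PyTorch snippet (illustrative only): pooling before the nonlinearity yields exactly the same result as pooling after it, so the max pooling layer can safely be moved in front of the IF neuron layer.

import torch

relu, pool = torch.nn.ReLU(), torch.nn.MaxPool2d(kernel_size=2)
z = torch.randn(1, 8, 16, 16)
# Max pooling and ReLU commute, so Conv -> MaxPool -> IF reproduces Conv -> ReLU -> MaxPool.
assert torch.equal(pool(relu(z)), relu(pool(z)))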
§ EXPERIMENTS
We conduct extensive experiments to demonstrate the effectiveness of our method and highlight its potential practical value. We showcase the advantages of our approach by comparing it with other training-free algorithms and conduct ablation experiments to validate the effectiveness of our local threshold balancing algorithm. Subsequently, we apply our algorithm to three classical visual tasks: image classification, semantic segmentation, and object detection. To underscore the feature that our conversion method does not require additional training, we utilize open-source pre-trained ANN models as the original ANN models for conversion, with most pre-trained models provided by TorchVision <cit.>.
We highlighted the feasibility of our algorithm across both classification and regression tasks, demonstrating its superiority over other conversion algorithms on various datasets.
In addition to the described algorithms, our conversion process incorporated other methods to further enhance the performance of the converted SNN. Firstly, during inference, we applied the membrane potential initialization algorithm <cit.>, setting the threshold membrane potential to half before each inference. We also utilized the delayed evaluation approach when collecting the spike counts of the output layer, ensuring performance with fewer inference steps <cit.>. The pseudocode for our conversion pipeline and the detailed training settings are provided in the Appendix.
§.§ Effectiveness of the Conversion Pipeline
We first demonstrated the effectiveness of the local threshold balancing algorithm using the ResNet-34 architecture on the ImageNet dataset. We compare three different SNNs obtained from the Robust-Norm <cit.> algorithm, our proposed layer-wise local threshold balancing algorithm, and our proposed channel-wise threshold balancing algorithm. As a commonly used training-free ANN-SNN conversion method, we considered the Robust-Norm as the baseline, which sets the 99-th percentile activation value as the threshold.
Figure <ref> illustrates the accuracy change of the three converted SNNs with respect to the inference time-steps.
The performance of our proposed channel-wise local threshold balancing method (brown curve) consistently outperforms the Robust-Norm method (blue curve) and the local threshold balancing method without channel-wise operation (yellow curve). This indicates an excellent balance between fast inference and high performance. Moreover, compared to the robust norm method (blue curve), the peak performance of the SNNs obtained from our local threshold balancing method (both brown and yellow curves) is much closer to the original ANN accuracy, which demonstrates the effectiveness of the proposed training-free conversion method.
We further investigate the convergence speed of the local threshold balancing method by estimating the conversion error through the accumulation of clipping errors in each layer at each iteration. Figure <ref> illustrates how the conversion error changes with respect to the iteration steps of the local learning process. The blue curve gradually converges to a value near zero, indicating that the conversion error generally decreases as the number of iterations increases. This demonstrates the good convergence of the proposed local threshold balancing method.
Moreover, we evaluate the influence of the number of iteration steps on the threshold balancing method.
We converted five different SNNs from the pre-trained ResNet-34 model, varying the number of iterations of the local threshold balancing method from 1000 to 5000.
The batch size during local threshold balancing is consistently set to 100. Figure <ref> shows the final accuracy of the different converted SNNs. As the number of iterations increases, the peak performance of the converted SNNs at different inference steps approaches the accuracy of the original ANN. Furthermore, these SNNs demonstrate better performance at low time-steps when fewer iteration steps are used. On the ImageNet dataset, with only 1000 iterations of local threshold balancing, the performance gap between the SNN at 512 inference time-steps and the ANN is approximately 1%. With 5000 iterations, this difference is further reduced to around 0.2%. Notably, the accuracy surpasses 65% within just 55 time-steps, achieving a balance between inference time and performance.
§.§ Test on Image Classification task
We evaluate the performance of our conversion method on the classification task using the ImageNet dataset, employing different architectures including ResNet and MobileNet. To highlight the advantages of our algorithm, we conduct comparisons with previous training-free algorithms, showcasing its superior performance. When using the ResNet34 architecture, the SNNs converted by our algorithm exceed 70% accuracy within 128 time-steps. In contrast, the SNNs converted by the previous training-free algorithm <cit.> and the calibration-required conversion algorithm <cit.> require more time-steps to achieve comparable accuracy. Our algorithm can achieve nearly lossless conversion, as the performance gaps between the converted SNNs and the original ANNs are always less than 2% when the inference time-steps exceed 256 for all architectures. The experiments on the ImageNet dataset demonstrate the superior performance of our training-free conversion algorithm and highlight its potential for further applications in other vision tasks.
In addition to evaluating classification performance, we also demonstrate the energy-saving advantage of our method by theoretically estimating the energy efficiency of the converted SNN. We use the same energy consumption estimation approach as <cit.>. With the ResNet-34 architecture on ImageNet, the SNN converted by our algorithm achieves an energy efficiency of 622 FPS/W while maintaining 90% of the original performance. In comparison, the energy efficiency of the original ANN is only 22 FPS/W, which is 28 times lower than that of the converted SNN. For a detailed energy consumption analysis, please refer to the appendix.
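For reference, the estimate behind these figures reduces to simple arithmetic; the sketch below assumes the 77 fJ/SOP and 12.5 pJ/FLOP energy costs from the appendix, and the operation counts are illustrative values chosen to roughly reproduce the reported 622 and 22 FPS/W, not measurements quoted from our tables.

def frames_per_joule(ops_per_frame: float, energy_per_op: float) -> float:
    """Frames processed per joule (equivalently FPS/W) for a given workload."""
    return 1.0 / (ops_per_frame * energy_per_op)

# Illustrative operation counts per ImageNet frame (not values from the paper).
snn_fps_per_watt = frames_per_joule(2.1e10, 77e-15)   # ~618 FPS/W at 77 fJ per synaptic operation
ann_fps_per_watt = frames_per_joule(3.6e9, 12.5e-12)  # ~22 FPS/W at 12.5 pJ per FLOP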
§.§ Test on Semantic Segmentation and Object Detection Tasks
Previous works on ANN-SNN conversion have primarily focused on the image classification task. Here, we extend the proposed training-free ANN-SNN conversion method to semantic segmentation and object detection tasks. For the semantic segmentation task, we evaluate our method on two commonly used datasets: Pascal VOC 2012 <cit.> augmented by SBD <cit.> and MS COCO 2017 <cit.>. We employ various semantic segmentation models, including FCN <cit.> and DeepLabV3 <cit.>, provided by TorchVision <cit.> and other open-source repositories. As shown in Table <ref>, our method is compatible with tackling semantic segmentation tasks without any preparatory training. Specifically, we achieved a 52.42% mIoU with the FCN model using a ResNet-34 backbone in 64 time-steps on the Pascal VOC dataset and a 60.54% mIoU with the DeepLabV3 model using a ResNet-50 architecture on the more complex MS COCO 2017 dataset. Given that pixel-level classification tasks require precise model output, this excellent performance highlights the effectiveness and generalizability of the proposed conversion method.
For object detection tasks, we use the benchmark dataset MS COCO 2017 <cit.> with open-source models from TorchVision <cit.>. We employ the fully convolutional object detection method RetinaNet <cit.> and FCOS <cit.>, and convert the pre-trained ANN models to SNNs using the proposed method. Table <ref> shows detailed performance under different inference time-steps. Within 64 time-steps, the converted SNN can achieve 25.1% mAP, which represents a significant improvement compared to previous exploration <cit.> of training-free ANN-SNN conversion for object detection. The previous method required over 1000 time-steps, which is detrimental to real-time detection and energy efficiency.
§.§ Combination with Other Conversion Algorithm
The conversion method we propose offers a lightweight, plug-and-play solution that can be integrated with other conversion algorithms to achieve enhanced performance. To validate its versatility and efficiency, we conducted experiments comparing our method with two established post-conversion calibration algorithms, LCP and ACP <cit.>. The results, presented in Table <ref>, showcase the performance of various configurations. Rows 1 to 5 illustrate the performance of our standalone method, the LCP method, our method combined with LCP, the ACP method, and our method combined with ACP, respectively. Notably, the composite approaches, which combine our method with LCP or ACP, consistently outperform the single conversion methods across all time-steps, highlighting the advantages of using our approach in conjunction with existing techniques.
§ CONCLUSION AND LIMITATION
This paper introduces the concept of a training-free ANN-SNN conversion algorithm, which can directly convert pre-trained ANN models into SNNs without requiring GPU-based training, thus achieving extremely low-cost SNN learning. The significant reduction in training overhead highlights the potential for the rapid deployment of large-scale SNNs in various scenarios. We also present a specific training-free conversion algorithm and evaluate its performance across multiple visual tasks, demonstrating its effectiveness and generalizability.
However, there is a trade-off between performance and training cost, with the overall performance of our algorithm being slightly weaker compared to other training-based SNN learning methods. Additionally, while this work focuses on the conversion of convolutional and fully connected layers, future efforts will aim to extend the training-free conversion approach to attention blocks and other complex layers.
plainnat
§ PROOF FOR ERROR BOUND
The layer-wise conversion error can be divided into intra-layer and inter-layer errors, expressed as follows:
e^l ⩽‖PSP(ẑ^l; θ^l) - ReLU(ẑ^l)‖_2^intra-layer error + ‖w^l ‖_2 e^l-1^inter-layer error.
Given that both ANN and SNN models receive the same input in the first layer, resulting in e^0 = 0, the upper bound for the conversion error between models in a L-layer fully-connected network is
e_model=e^L ⩽∑_l=1^L (∏_k=l+1^L‖w^k ‖_2 ) ‖PSP(ẑ^l; θ^l) - ReLU(ẑ^l) ‖_2
According to the definition of the conversion error, we have
e^l = ‖PSP(ẑ^l) - ReLU(z^l)‖_2
= ‖PSP(ẑ^l) - ReLU(ẑ^l) + ReLU(ẑ^l) - ReLU(z^l)‖_2
⩽‖PSP(ẑ^l) - ReLU(ẑ^l)‖_2 + ‖ReLU(ẑ^l) - ReLU(z^l)‖_2
Here we will first prove that ‖ReLU(ẑ^l) - ReLU(z^l)‖_2 ⩽‖ (ẑ^l - z^l)‖_2. we consider four different situations of z^l_i and ẑ^l_i, which are single elements in z^l and ẑ^l, respectively.
if ẑ^l_i ⩾ 0 and z^l_i ⩾ 0, then (ReLU(ẑ^l_i) - ReLU(z^l_i))^2=(ẑ^l_i - z^l_i)^2,
if ẑ^l_i ⩾ 0 and z^l_i ⩽ 0, then (ReLU(ẑ^l_i) - ReLU(z^l_i))^2=(ẑ^l_i - 0)^2 ⩽ (ẑ^l_i - z^l_i)^2,
if ẑ^l_i ⩽ 0 and z^l_i ⩾ 0, then (ReLU(ẑ^l_i) - ReLU(z^l_i))^2=(0 - z^l_i)^2 ⩽ (ẑ^l_i - z^l_i)^2,
if ẑ^l_i ⩽ 0 and z^l_i ⩽ 0, then (ReLU(ẑ^l_i) - ReLU(z^l_i))^2=(0 - 0)^2 ⩽ (ẑ^l_i - z^l_i)^2.
Therefore, for each element in vector z^l and ẑ^l, we can summarize that ∀ i, (ReLU(ẑ^l_i) - ReLU(z^l_i))^2 ⩽ (ẑ^l_i - z^l_i)^2, which can further derive
‖ReLU(ẑ^l) - ReLU(z^l)‖_2 ⩽‖ (ẑ^l - z^l)‖_2.
Back to the main theorem, we continue to rewrite the conversion error bound as
e^l ⩽‖PSP(ẑ^l) - ReLU(ẑ^l)‖_2 + ‖ẑ^l - z^l‖_2
⩽‖PSP(ẑ^l) - ReLU(ẑ^l)‖_2 + ‖w^l (PSP(ẑ^l-1) - ReLU(z^l-1))‖_2
⩽‖PSP(ẑ^l) - ReLU(ẑ^l)‖_2 + ‖w^l ‖_2 ‖PSP(ẑ^l-1) - ReLU(z^l-1) ‖_2
⩽‖PSP(ẑ^l) - ReLU(ẑ^l)‖_2^intra-layer error + ‖w^l ‖_2 e^l-1^inter-layer error.
Note that ‖w^l ‖_2 in Equation <ref> is the matrix norm (p=2) or spectral norm of the weight matrix w^l, and the derivation from <ref> to <ref> holds true because of the property of the spectral norms. From the inequality above, we can find that the layer-wise conversion error is bounded by the intra-layer error, which is layer-wise error when both the analog neuron and spiking neuron receive the same input, and the inter-layer error, which is proportional to the layer-wise error in the previous layer.
We will then further derive the conversion error between models, which is the conversion error in the last output layer. For simplicity, we define the intra-layer error in each layer as ε^l.
e^L ⩽‖PSP(ẑ^L) - ReLU(ẑ^L)‖_2 + ‖w^L ‖_2 e^L-1 = ε^L + ‖w^L ‖_2 e^L-1.
Also, since we use direct input coding for SNNs, there will be no conversion error in the 0-th layer and the conversion error in the first layer will be e^1 = ‖PSP(ẑ^1) - ReLU(ẑ^1)‖_2. Therefore, we can iteratively derive the error bound for an arbitrary layer and the error bound for the final output should be
e^L ⩽ε^L + ‖w^L ‖_2 e^L-1
⩽ε^L + ‖w^L ‖_2 ε^L-1 + ‖w^L ‖_2 ‖w^L-1‖_2 e^L-2
⩽⋯
⩽∑_l=1^L(∏_k=l+1^L‖w^k ‖_2 ) ε^l
⩽∑_l=1^L (∏_k=l+1^L‖w^k ‖_2 ) ‖PSP(ẑ^l; θ^l) - ReLU(ẑ^l) ‖_2
§ PROOF FOR UPDATE RULE
The final update rule for the local threshold balancing algorithm at each step is:
Δθ^l = - 1/N∑_i=0^N 2(ẑ^l_i-θ^l)H(ẑ^l_i-θ^l),
θ^l ←θ^l - ηΔθ^l.
As we have mentioned in the main text, our goal is to:
∀ l, min_θ^l (∏_k=l+1^L‖w^k ‖_2 ) ‖ReLUX(ẑ^l; θ^l) - ReLU(ẑ^l) ‖_2^2.
We can utilize the gradient descent method, iteratively updating the threshold value by subtracting the first-order derivative with respect to the threshold:
Δθ^l = ∂[ (∏_k=l+1^L‖w^k ‖_2 ) ‖ReLUX(ẑ^l; θ^l) - ReLU(ẑ^l) ‖_2^2 ]/∂θ^l.
We consider each element ẑ^l_i in vector ẑ^l, for each i we have
∂[ (∏_k=l+1^L‖w^k ‖_2 ) ( ReLUX(ẑ_i^l; θ^l) - ReLU(ẑ_i^l) )^2 ]/∂θ^l
= { - (∏_k=l+1^L‖w^k ‖_2) · 2(ẑ^l_i-θ^l) if ẑ^l_i>θ^l;  0 if ẑ^l_i ⩽θ^l }
= - (∏_k=l+1^L‖w^k ‖_2) · 2(ẑ^l_i-θ^l)H(ẑ^l_i-θ^l)
Therefore, the derivative for θ^l should be
Δθ^l = ∂[ (∏_k=l+1^L‖w^k ‖_2 ) 1/N∑_i=0^N ( ReLUX(ẑ_i^l; θ^l) - ReLU(ẑ_i^l) )^2 ]/∂θ^l
= - (∏_k=l+1^L‖w^k ‖_2 ) 1/N∑_i=0^N 2(ẑ^l_i-θ^l)H(ẑ^l_i-θ^l).
Since (∏_k=l+1^L‖w^k ‖_2 ) is a constant for a fixed weight matrix, we absorb this term into the learning rate parameter η. Therefore, we can finally derive the update rule as
Δθ^l = - 1/N∑_i=0^N 2(ẑ^l_i-θ^l)H(ẑ^l_i-θ^l),
θ^l ←θ^l - ηΔθ^l.
§ PROOF FOR PRE-NEURON MAX POOLING LAYER
The order of max pooling layer and ReLU activation layer does not affect the output results.
max ReLU(z) = ReLU(max z), when max(z)>0.
Since ReLU(x)=max(x, 0), we can rewrite the left hand side as max (ReLU(z)) = max (max(z, 0)) = max(z). Similarly, the right-hand side can be written as max(max(z), 0) = max(z), which is equal to the left-hand side.
§ DETAILS FOR EXPERIMENTS
§.§ Full Conversion Pipeline
In this section, we present the pseudo-code of the full conversion pipeline in Algorithm <ref>. At the beginning of the overall conversion process, we first initialize the model by replacing all activation layers with the ReLUX function and setting the initial threshold value of each layer to 0. The model uses the same weights as the pre-trained ANN model, and all modules are set to inference mode. During the threshold balancing process, we sample minibatches from the training dataset and feed them into the model. The threshold values are automatically optimized during forward propagation.
When evaluating SNNs, our conversion process incorporates other methods to further enhance the performance of the converted SNN. Firstly, we apply the membrane potential initialization algorithm <cit.>, setting the membrane potential as half of the threshold value before each inference iteration. We also utilize the delayed evaluation approach to obtain the average spike count of the output layer, ensuring performance with fewer inference steps <cit.>.
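A sketch of this evaluation loop is given below, assuming a converted PyTorch model whose IF layers re-initialize their membrane potential to θ/2 when reset; the helper name reset_membrane is hypothetical, and the direct (analog) input coding at every step is an illustrative choice consistent with the description above.

import torch

@torch.no_grad()
def snn_predict(snn: torch.nn.Module, x: torch.Tensor, timesteps: int, delay: int) -> torch.Tensor:
    """Run the converted SNN and average the output over the last `delay` steps."""
    reset_membrane(snn)          # hypothetical helper: set v = theta / 2 in every IF layer
    window = []
    for _ in range(timesteps):
        window.append(snn(x))    # direct input coding: the same analog input at each step
        if len(window) > delay:  # keep only the delayed-evaluation window
            window.pop(0)
    return torch.stack(window).mean(dim=0)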
§.§ Image Classification
When conducting experiments on the ImageNet dataset, we use the pre-trained models from TorchVision. During the threshold balancing process, we employ several commonly used data augmentation methods, including RandomResizeCrop, RandomHorizontalFlip, ColorJitter, and tensor Normalization. The number of iteration steps of the threshold balancing process is 1000 unless otherwise mentioned. During the evaluation of converted models, we only crop and resize the images to 224x224 and normalize them. The delayed evaluation step length is set to a fixed 16 time-steps.
§.§ Semantic Segmentation
For the experiments on the Pascal VOC 2012 dataset, the weights of the original ANN models are from open-source GitHub repositories. The data preprocessing during both the threshold balancing and inference processes includes resizing the data to 256x256 images and normalizing the pixel values. The delayed evaluation step length is set to half of the inference step length. The number of iteration steps of the threshold balancing process is set to 5 traversals of the training set for FCN and 6 traversals for DeepLab. Moreover, Pascal VOC 2012 is augmented by the extra annotations provided by SBD, resulting in 10582 training images.
For the experiments on the MS COCO 2017 dataset, the weights are directly downloaded from TorchVision. For data preprocessing, we first resize the input data to 400x400 images and normalize them. The delayed evaluation length is set to half of the inference step length. The number of iteration steps of the threshold balancing process is set to 3 traversals of the training set. Note that these weights were trained on a subset of MS COCO 2017, using only the 20 categories that are present in the Pascal VOC dataset. This subset contains 92518 training images.
§.§ Object Detection
For our object detection experiments, we utilize pre-trained weights from TorchVision. During the threshold balancing process, we use data augmentation similar to SSD <cit.>: the input images are augmented by RandomPhotometricDistort, RandomZoomOut, RandomIoUCrop, and RandomHorizontalFlip. The number of optimization iteration steps is set to 1000 unless otherwise mentioned. During the evaluation of converted models, we only crop and resize the images to 224x224 and normalize them. The delayed evaluation length is set to half of the inference step length.
§ VISUALIZATIONS ON OBJECT DETECTION AND SEMANTIC SEGMENTATION
In Figure <ref> and Figure <ref> we present the visualization of the semantic segmentation and object detection results. In each row of the figure, we illustrate the visualization of ground truth, original image (only for semantic segmentation), results from original ANN and results from converted SNN at different time-steps.
§ ENERGY CONSUMPTION ANALYSIS
Since low power consumption is one of the advantages of SNNs, we calculate the average energy consumption of the converted SNNs and compare it with that of their ANN counterparts. We employ a method similar to previous work, estimating the energy consumption of the SNN by counting the number of Synaptic Operations (SOPs). Since the total spike activity of the SNN increases proportionally with the inference time, we define SOP90 and SOP95 as the metrics for converted SNNs on ImageNet. SOP90/95 denotes the average number of synaptic operations per image when the accuracy of the converted SNN exceeds 90%/95% of that of the original ANN, while SNN90-FPS/W denotes the number of frames processed per joule when the performance of the converted SNN exceeds 90%. To further estimate the energy consumption, we utilize the average energy costs of 77 fJ/SOP for SNNs and 12.5 pJ/FLOP for ANNs <cit.> to calculate the required energy for one frame. The detailed comparison is demonstrated in the table below.
For the ImageNet classification task, using the same ResNet-34 architecture, the SNN is over 28 times more energy efficient than the original ANN while maintaining 90% of the original performance, achieving an estimated energy efficiency of 622 FPS/W when deployed on neuromorphic hardware. It is worth noting that the SNN can be easily obtained by converting open-source pre-trained ANN models at negligible training cost and then deployed on dedicated hardware for energy-saving purposes.
|
http://arxiv.org/abs/2409.03536v1 | 20240905135132 | Physics-informed Neural Networks with Fourier Features for Seismic Wavefield Simulation in Time-Domain Nonsmooth Complex Media | [
"Yi Ding",
"Su Chen",
"Hiroe Miyake",
"Xiaojun Li"
] | physics.geo-ph | [
"physics.geo-ph"
] |
Physics-informed Neural Networks with Fourier Features for Seismic Wavefield Simulation in Time-Domain Nonsmooth Complex Media
Yi Ding, Su Chen, Hiroe Miyake, Xiaojun Li
Manuscript received ****, *** 2024; This work was supported in part by National Natural Science Foundation of China (Grant Nos. 52192675, U1839202), and in part by the China Scholarship Council 202306540044 (Corresponding author: Su Chen.)
Yi Ding is with the Key Laboratory of Urban Security and Disaster Engineering of the Ministry of Education, Beijing University of Technology, Beijing 100124, China, and also with the Earthquake Research Institute, The University of Tokyo, Tokyo, 113-0032, Japan (e-mail: [email protected]).
Su Chen and Xiaojun Li are with the Key Laboratory of Urban Security and Disaster Engineering of the Ministry of Education, Beijing University of Technology, Beijing 100124, China, and Xiaojun Li is also with the Institute of Geophysics, China Earthquake Administration, Beijing 100081, China (e-mail: [email protected]; [email protected]).
Hiroe Miyake is with the Earthquake Research Institute, The University of Tokyo, Tokyo, 113-0032, Japan (e-mail: [email protected]).
5 September 2024
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Physics-informed neural networks (PINNs) have great potential for flexibility and effectiveness in forward modeling and inversion of seismic waves. However, coordinate-based neural networks (NNs) commonly suffer from the "spectral bias" pathology, which greatly limits their ability to model high-frequency wave propagation in sharp and complex media. We propose a unified framework of Fourier feature physics-informed neural networks (FF-PINNs) for solving the time-domain wave equations. The proposed framework combines the stochastic gradient descent (SGD) strategy with a pre-trained wave velocity surrogate model to mitigate the singularity at the point source. The performance of the activation functions and gradient descent strategies is discussed through ablation experiments. In addition, we compare the accuracy of Fourier feature mappings sampled from different families of distributions (Gaussian, Laplace, and uniform). The second-order paraxial approximation-based boundary conditions are incorporated into the loss function as a soft regularizer to eliminate spurious boundary reflections. Through the non-smooth Marmousi and Overthrust model cases, we emphasize the necessity of the absorbing boundary conditions (ABCs) constraints. The results of a series of numerical experiments demonstrate the accuracy and effectiveness of the proposed method for modeling high-frequency wave propagation in sharp and complex media.
Physics-informed neural networks (PINNs), Seismic wave propagation simulation, Fourier feature neural networks, Absorbing boundary conditions (ABCs), Spectral bias.
§ INTRODUCTION
Improving the accuracy and stability of the inner domain and artificial boundaries in the simulation of complex wave propagation problems is a fundamental and common requirement for important inversion techniques <cit.>. For example, full waveform inversion (FWI) is widely used in oil and gas exploration to detect underground structures from artificial earthquake seismograms. By minimizing the differences between observed data and simulated data from the wave equations, FWI can provide high-resolution estimates of subsurface parameters. With the successful application of machine learning in many fields, there is increasing interest in utilizing the approximation capabilities of machine learning techniques for seismic inversion <cit.>. However, the lack of labeled training datasets is a common challenge in applying deep learning to solve most geoscience problems. Semi-supervised or unsupervised learning stands as a prospective avenue to surmount this obstacle, leveraging prior constraints to formulate unsupervised loss functions for the training of deep neural networks <cit.>.
With the universal approximation theorem of neural networks <cit.> and advances in automatic differentiation, deep learning tools are introducing a new trend, offering an alternative approach for solving partial differential equations (PDEs) through the combination of hidden layers to provide nonlinear approximations. The physics-informed neural networks (PINNs) proposed by Raissi et al. <cit.> seamlessly integrate observation and governing physics and train the neural network by minimizing a physics-informed loss function constructed based on PDE residuals and initial/boundary conditions (I/BCs). PINNs provide new perspectives and ideas for addressing the challenges faced by traditional numerical methods in complex domains <cit.>, and solving inverse problems <cit.>. In the geophysical forward and inverse problems, PINNs have been successfully applied to the Eikonal equation <cit.>, the Maxwell equation <cit.>, wave equations for isotropic and anisotropic media in both time <cit.>, and frequency domains <cit.>.
Current research on PINNs methods for modeling wave propagation behavior has received widespread attention, but there have also been certain challenges in applying vanilla PINNs to seismic wave propagation problems. Firstly, some studies have reported that PINNs suffer from point-source singularity when solving wave equations in the frequency and time domains. Alkhalifah et al. <cit.> addressed the scatter wavefield of the Helmholtz equation based on analytical background wavefields to avoid the singularity arising from high sparsity. To solve the time-domain wave equations, the early initial wavefields obtained by analytical <cit.> or conventional numerical methods <cit.> can be utilized to provide information about the source. This scheme leads to additional initial conditions loss components, which can be adjusted using a loss weight adaptive algorithm <cit.> to balance the backpropagation gradients between different loss terms.
On the other hand, neural networks tend to learn low-frequency functions and encounter difficulties in learning high-frequency functions, a phenomenon known as "spectral bias" <cit.>. Tancik et al. <cit.> introduced the Fourier feature network, which employs Fourier feature mapping to transform the input coordinates into a series of high-frequency sinusoidal waves, enhancing the neural network's capability to learn high-frequency functions. Subsequently, the Fourier feature neural networks have been introduced into physics-informed learning <cit.>, then utilized to solve the multi-frequency Helmholtz equation <cit.> and the acoustic wave equation <cit.>.
In conventional discrete numerical algorithms, the discrete equations of motion of the nodes need to be integrated step-by-step using appropriate time integration methods to ensure that the solutions are obtained in temporal order. In contrast, PINNs simultaneously predict the solution over the entire spatio-temporal domain, which may violate temporal causality and tend to converge to erroneous solutions <cit.>. The inherent implicit bias in PINNs hinders the learning process associated with long-term complex dynamic processes. A sequential training strategy via time domain decomposition has been introduced to improve the ability of PINNs to model complex wave propagation behavior <cit.>.
Specifying proper initial conditions (ICs) and boundary conditions (BCs) for a PDE is essential to have a well-posed problem. The original formulation of PINNs utilized a “soft constraints” manner to incorporate I/BCs. Emerging studies have rigorously guaranteed the implementation of hard-embedding I/BCs and avoided the issue of imbalanced gradients in multi-component loss functions <cit.>. Absorbing boundary conditions (ABCs) are commonly used to model wave propagation in infinite or semi-infinite media under classical numerical methods. At present, a consensus has not been reached regarding the necessity of including ABCs in the solution of the wave equation using PINNs. When addressing the Helmholtz equation in non-smooth media, the incorporation of perfect matching layer conditions into the loss function is proposed to enhance the coupling of the real and imaginary components of the wavefield <cit.>. Additionally, for modeling seismic wave propagation in 2D elastic media, an absorbing boundary condition is integrated into the network as a soft regularizer to address truncated boundaries <cit.>. However, the potential of PINNs for modeling seismic wavefields in the time domain has not been fully explored, especially concerning complex media and high-frequency wave behavior in the time domain.
Motivated by the foregoing developments, this work proposes a novel unified PINNs architecture for time-domain seismic wavefield simulation in complex media. To alleviate the singularity issue of point sources, we first inject the source using the smoothed Gaussian spatial mapping to approximate the Dirac delta function. Additionally, a fast wave velocity model prediction module is proposed to combine with the stochastic gradient descent (SGD) training strategy, randomly sampling training points at each training step. This allows PINNs to effectively capture information about the seismic source.
To ensure an accurate implementation of the zero-initial condition, the hard embedding scheme proposed by Sethi et al. <cit.> is introduced. We carry out a series of ablation experiments to compare the impact of activation functions and training strategies on the proposed framework, as well as the performance of different families of distributions of Fourier feature parameters. Taking the Marmousi and Overthrust models as examples, we show that non-smooth complex medium velocity models lead to inaccurate wavefields when ABCs are not considered. The paraxial approximation technique of the acoustic wave equations is introduced into the loss function as a soft regularization to solve the wave propagation problem in the infinite domain. Combining ABCs and time-domain decomposition training strategies, we demonstrate the potential and versatility of the proposed framework.
§ THEORY
§.§ Problem Setting
In this paper, we aim to investigate the potential of Fourier feature PINNs for modeling wave propagation in complex velocity media. The propagation of seismic waves can be simulated in the time domain by solving the 2D acoustic wave equation as follows:
∂^2 u(𝐱, t)/∂ t^2 = c^2(𝐱) ∇^2 u(𝐱, t)+s(t) G(𝐱, 𝐱_s)
s(t) = M_0(1-2(π f_0(t-t_0))^2) exp(-(π f_0(t-t_0))^2)
G(𝐱, 𝐱_s) =exp(-1/2‖(𝐱-𝐱_s)/α‖_2^2)
where 𝐱={x,z} is a vector of spatial coordinates for 2D media, 𝐱_s={x_s,z_s} denotes the coordinate of the source. c(𝐱) is the velocity model and u(𝐱,t) is the displacement wavefield in the time domain. The source time function s(t) is injected via a smooth space dependent field G(𝐱,𝐱_s) of Gaussian shape. The kernel width α controls the spatial mapping range of the point source, M_0 is the amplitude. A Ricker wavelet is used as the source time function, with a center frequency of f_0 and a delay time of t_0=1/f_0.
The initiation of a point-like source at a single grid point can be readily accomplished with the finite difference method. However, this is no longer suitable within the PINNs framework. A similar issue, known as the Gibbs phenomenon, is observed with pseudo-spectral methods, as the Fourier transform of a spike-like function creates oscillations that damage the accuracy of the solution <cit.>. We use the Gaussian function in (<ref>) to define the space-dependent part of the source, injecting it with a smooth spatial distribution of Gaussian shape to alleviate singularity issues associated with point sources.
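For concreteness, the source term defined in (<ref>)-(<ref>) can be evaluated as follows; this is a NumPy sketch with illustrative parameter values, not the exact implementation used in our experiments.

import numpy as np

def ricker(t, f0, m0=1.0):
    """Ricker wavelet with delay t0 = 1 / f0."""
    arg = (np.pi * f0 * (t - 1.0 / f0)) ** 2
    return m0 * (1.0 - 2.0 * arg) * np.exp(-arg)

def gaussian_source(x, z, xs, zs, alpha):
    """Smooth spatial mapping G(x, x_s) approximating a Dirac delta at the source."""
    r2 = (x - xs) ** 2 + (z - zs) ** 2
    return np.exp(-0.5 * r2 / alpha ** 2)

# Example: the source term s(t) G(x, x_s) on a small grid (illustrative values).
t = np.linspace(0.0, 1.0, 501)
x, z = np.meshgrid(np.linspace(0.0, 2.0, 201), np.linspace(0.0, 2.0, 201))
source_term = ricker(t[:, None, None], f0=10.0) * gaussian_source(x, z, 1.0, 1.0, 0.05)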
§.§ Hard embedding initial conditions in PINNs
Differing from data-driven neural networks that require a large amount of data to establish the mapping between input and output, the optimization process of PINNs relies only on known physical principles, without the need for any labeled data. Considering the 2D acoustic wave equations described by (1), the residual ℛ_r(t_r^i,𝐱_r^i) and loss function ℒ_r(θ) corresponding to the PDEs are
ℛ_r(t_r, 𝐱_r) =∂^2û_θ/∂ t_r^2-c^2∇^2û_θ-s(t_r) G(𝐱_r, 𝐱_s),
ℒ_r(θ) =1/N_r∑_i=1^N_r|ℛ_r(t_r^i, 𝐱_r^i)|^2,
where û_θ is the unknown potential solution approximated by the deep neural networks. {t_r^i, 𝐱_r^i}_i=1^N_r denotes the collocation points of the PDEs residual, with the number being N_r.
In the vanilla PINNs framework, I/BCs are imposed as additional loss terms in the objective in a "soft constraint" manner. The gradient imbalance between different loss terms may lead to training failure, and the weight of each loss term must be carefully determined; moreover, the optimization process cannot guarantee that a soft-constraint loss term is driven exactly to zero. Since the network output is directly the displacement solution of the second-order wave equation, the initial and boundary conditions on u can instead be satisfied by construction. To ensure a unique solution of the wave equation, the initial conditions must be defined rigorously: before the seismic source excitation, the system is in a quiescent state with zero initial conditions. The neural network output f_θ(t,𝐱) can therefore be transformed as
û_θ(t, 𝐱)=f_θ(t, 𝐱) · t^2,
which guarantees that û_θ=∂û_θ/∂ t=0 at t=0, thus realizing the hard embedding of the zero initial conditions. Here t^2 is used as the approximate distance function (ADF); this choice is not unique. The employed ADF must satisfy two criteria: 1) it must equal zero at the initial moment t=0, and 2) for any t>0, the ADF and its gradient must be smooth and non-zero.
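As a minimal illustration, the hard constraint can be applied by simply multiplying the raw network output by the ADF; `raw_net` below is a placeholder name for any coordinate network f_θ(t, x, z), not the paper's implementation.

```python
def hard_constrained_output(raw_net, params, t, x, z):
    # multiplying by the ADF t^2 enforces u = du/dt = 0 at t = 0 by construction
    return (t ** 2) * raw_net(params, t, x, z)
```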
§.§ Fourier Feature Mapping
With the Neural Tangent Kernel (NTK) theory <cit.>, the "spectral bias" pathology of neural networks can be rigorously analyzed and elucidated. A neural network is biased to first learn the target function along the eigendirections of the NTK with larger eigenvalues, and only then the remaining components corresponding to smaller eigenvalues <cit.>. NTK theory further suggests that standard coordinate-based multilayer perceptrons (MLPs) correspond to kernels with a rapid frequency falloff, which effectively prevents them from learning high-frequency functions <cit.>.
In PINNs methods, MLPs are generally used as universal approximators to learn the solutions of PDEs. The Fourier feature NNs accept the coordinates as input and map them to a feature space with a Fourier feature mapping before passing them to the hidden layer. Let v={t,𝐱}∈ℝ^d denote the input coordinates. The Fourier feature mapping γ(v) performs a high-dimensional frequency mapping on the input coordinates, which is then fed into the MLPs. Following the original formulation of Tancik et al. <cit.>, the Fourier feature neural network is
γ(v) = [cos(2π𝐁v); sin(2π𝐁v)],
H^(1) = ϕ(W^(1)·γ(v)+b^(1)),
H^(l) = ϕ(W^(l)·H^(l-1)+b^(l)), l=2, ⋯, L,
f_θ(v) = W^(L+1)·H^(L)+b^(L+1),
where W^(l)∈ℝ^d_l× d_l-1 and 𝐛^(l)∈ℝ^d_l are the trainable weights and biases of the l-th layer, respectively, and ϕ denotes the nonlinear activation function. The Fourier basis frequencies 𝐁∈ℝ^m× d can be sampled from different distribution families (Gaussian, Laplacian, uniform), leading to different ways of embedding Fourier features. For example, in a Gaussian Fourier feature network, each entry of 𝐁 is i.i.d. sampled from a normal distribution 𝒩(0,σ^2) with standard deviation σ>0, and m is the number of basis functions in the Fourier basis set. During training, 𝐁 can be kept constant or optimized alongside the weights and biases as part of the trainable parameters θ. In all cases considered in this work, we set 𝐁 as a fixed parameter and let m=256.
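The following sketch shows a Gaussian Fourier feature MLP in JAX consistent with the formulation above; the layer widths, σ and the Swish activation here are illustrative choices, not the exact settings of any particular experiment in this work.

```python
import jax
import jax.numpy as jnp

def init_ff_mlp(key, in_dim=3, m=256, hidden=(50, 50, 50, 50, 50), out_dim=1, sigma=1.0):
    kB, *kws = jax.random.split(key, len(hidden) + 2)
    B = sigma * jax.random.normal(kB, (m, in_dim))        # fixed Fourier basis frequencies
    sizes = [2 * m, *hidden, out_dim]
    params = {"B": B, "W": [], "b": []}
    for k, (d_in, d_out) in zip(kws, zip(sizes[:-1], sizes[1:])):
        scale = jnp.sqrt(2.0 / (d_in + d_out))            # Glorot-style initialization
        params["W"].append(scale * jax.random.normal(k, (d_out, d_in)))
        params["b"].append(jnp.zeros(d_out))
    return params

def ff_mlp(params, v):
    # v: coordinate vector (t, x, z); returns the raw network output f_theta(v)
    proj = 2.0 * jnp.pi * params["B"] @ v
    h = jnp.concatenate([jnp.cos(proj), jnp.sin(proj)])   # Fourier feature mapping gamma(v)
    for W, b in zip(params["W"][:-1], params["b"][:-1]):
        h = jax.nn.swish(W @ h + b)
    return params["W"][-1] @ h + params["b"][-1]
```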
The standard deviation σ is a hyperparameter that directly governs the scale of the distribution of the Fourier base frequencies and the resulting eigenspace of the NTK. It is usually specified empirically or determined through an expensive hyperparameter search. The performance of FF-PINNs is strongly influenced by σ, and optimal performance is only achieved when σ is tuned to a suitable range. However, for specific problems, prior frequency information can be obtained from available I/BCs, source time functions, or observations to avoid extensive and expensive hyperparameter searches.
§.§ Absorbing boundary conditions constraints
ABCs play a crucial role in the numerical simulation of wave propagation problems. To ensure that outgoing waves are not reflected by the boundaries, the computed boundary motions should account for all the complexities of the outgoing waves. The paraxial boundary conditions are derived by rational approximation of the 2D acoustic-wave dispersion relation <cit.>, e.g., ck_z/ω = -√(1-s^2) with respect to the variable s = ck_x/ω in the negative z direction. Thus, the paraxial approximation technique can be used rigorously to construct ABCs for the 2D acoustic wave equation.
In PINNs, it is straightforward to impose paraxial-approximation-based ABCs at the boundary in a soft-constraint manner. The ABCs are treated directly in their continuous form, with the relevant derivatives computed by automatic differentiation and no numerical discretization required. In this study, a second-order expression of the Clayton-Engquist acoustic paraxial approximation boundary <cit.> is considered. The computational model is a rectangular region, the coordinate origin is located at the upper-left corner of the model, and the boundaries are parallel to the x and z coordinate axes. The residuals of the second-order Clayton-Engquist boundary conditions can then be expressed as
x-axis positive:
ℛ_xl = ∂^2û_θ/(∂x_l ∂t_c) + (1/c) ∂^2û_θ/∂t_c^2 - (c/2) ∂^2û_θ/∂z_c^2,
x-axis negative:
ℛ_xb = ∂^2û_θ/(∂x_b ∂t_c) - (1/c) ∂^2û_θ/∂t_c^2 + (c/2) ∂^2û_θ/∂z_c^2,
z-axis positive:
ℛ_zl = ∂^2û_θ/(∂z_l ∂t_c) + (1/c) ∂^2û_θ/∂t_c^2 - (c/2) ∂^2û_θ/∂x_c^2,
z-axis negative:
ℛ_zb = ∂^2û_θ/(∂z_b ∂t_c) - (1/c) ∂^2û_θ/∂t_c^2 + (c/2) ∂^2û_θ/∂x_c^2.
The loss function now consists of the PDEs residual loss defined in (3) and the ABCs residual loss ℒ_ce,
ℒ(θ) = λ_r ℒ_r(θ) + λ_ce ℒ_ce(θ) = (λ_r/N_r) ∑_i=1^N_r |ℛ_r|^2 + (λ_ce/N_ce) ∑_i=1^N_ce (|ℛ_xl|^2 + |ℛ_xb|^2 + |ℛ_zl|^2 + |ℛ_zb|^2),
where {x_b,z_b,x_l,z_l} denotes the collocation points from the spatial boundaries with the number being N_ce, the subscript b is the lower bound and l is the upper bound. x_c, z_c and t_c are randomly sampled collocation points across the x, z and t axes, respectively. Fig. <ref> illustrates the distribution of boundary collocation points for a rectangular domain with ABCs imposed. λ_r and λ_ce are the weight hyperparameters of the corresponding loss function terms, which are set to 1 in the subsequent cases. Algorithm <ref> demonstrates the training pipeline of the proposed framework.
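As an illustration of how such a boundary residual can be evaluated without numerical discretization, the sketch below computes the positive-x Clayton-Engquist residual with JAX automatic differentiation; `u_fn` is assumed to be the (hard-constrained) scalar network output u(t, x, z) and `c` the local wave speed — both names are placeholders, not the paper's code.

```python
import jax

def abc_residual_x_pos(u_fn, c, t, x, z):
    # second-order Clayton-Engquist residual on the positive x boundary:
    # d2u/(dx dt) + (1/c) d2u/dt2 - (c/2) d2u/dz2
    u_x  = jax.grad(u_fn, argnums=1)          # du/dx
    u_xt = jax.grad(u_x, argnums=0)           # d2u/(dx dt)
    u_t  = jax.grad(u_fn, argnums=0)          # du/dt
    u_tt = jax.grad(u_t, argnums=0)           # d2u/dt2
    u_z  = jax.grad(u_fn, argnums=2)          # du/dz
    u_zz = jax.grad(u_z, argnums=2)           # d2u/dz2
    return u_xt(t, x, z) + u_tt(t, x, z) / c - 0.5 * c * u_zz(t, x, z)

# The ABC loss term then averages |residual|^2 over randomly sampled boundary
# collocation points (e.g. via jax.vmap) and is added to the PDE loss with weight lambda_ce.
```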
§.§ Experimental Setup
Activation functions. The choice of activation function has a significant effect on training dynamics and task performance. The popular Rectified Linear Unit (ReLU) is deficient for high-order PDEs since its second-order derivative is zero. The hyperbolic tangent (Tanh) activation has been prevalent in previous PINNs research, but it has been observed to struggle with learning high-frequency and periodic functions <cit.>. In the field of Implicit Neural Representation (INR), various activation functions have been suggested to help coordinate-MLPs encode high-frequency signals, including periodic (sinusoidal <cit.>) and non-periodic (e.g., Gaussian <cit.>) functions. In addition, Swish <cit.>, an activation function discovered by combining exhaustive and reinforcement-learning-based search, achieves impressive performance on many challenging tasks. We carry out a detailed experimental comparison in Section <ref> to provide guidance on the choice of activation function.
Random sampling. Compared to batch gradient descent (BGD), stochastic gradient descent (SGD) <cit.> significantly reduces the memory requirements and the computational cost of each iteration. Moreover, in our experience, randomly re-sampling the collocation points plays an important role in training efficiency and model performance, whereas training PINNs with full-batch gradient descent may lead to overfitting of the PDE residuals. Continuously changing the random samples across the entire domain ensures that the neural network gains sufficient exposure to the point-source region, even though it is small relative to the entire spatial domain, enabling PINNs to fully capture information about the seismic source. Section <ref> introduces the velocity surrogate model combined with the SGD algorithm. A detailed comparison of the performance of the SGD and BGD algorithms is presented in the ablation study in Section <ref>.
Optimizer. In this study, Adam <cit.> is used as the default optimizer, with random sampling from the spatio-temporal domain at each epoch. We set the initial learning rate to 5e^-3 with an exponential decay rate of 0.9 every 1000 epochs. Since L-BFGS <cit.> relies strongly on historical gradients to approximate the inverse Hessian matrix, the process can become unstable when these gradients are computed on changing collocation points. A stable implementation of quasi-Newton updating in a multi-batch setting continues to garner significant attention in the literature <cit.>. Finally, dense layers are initialized using the Glorot scheme <cit.>.
Time domain decomposition strategy. In this study, we employ a time-domain decomposition strategy to address the wave propagation problem in complex media. The time-domain decomposition splits the difficult optimization problem over the whole time period into several relatively simple problems, which significantly reduces the difficulty of learning the complete evolution of the dynamical system. Specifically, if the entire simulation time domain is [0, T], the solution of the PDEs is learned sequentially over the time periods [0, t_i] (i = 1, 2, …, with 0 < t_1 < t_2 < ⋯ ≤ T). The trained parameters of the previous time period are passed to the next one to provide a good parameter initialization.
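A minimal sketch of this sequential schedule is shown below; `train_step` is a hypothetical stand-in for one Adam update on a freshly sampled collocation batch, and the window limits and epoch counts mirror the Marmousi/Overthrust setup described later rather than a prescribed recipe.

```python
def train_step(params, t_max):
    # placeholder: sample random collocation points in [0, t_max] x spatial domain,
    # evaluate the PDE (+ ABC) loss, and apply one Adam update
    return params

def sequential_training(params):
    # expanding time windows [0, t_i]; weights carry over as initialization
    schedule = [(0.3, 5000), (0.4, 10000), (0.5, 10000), (0.6, 20000)]
    for t_max, n_epochs in schedule:
        for _ in range(n_epochs):
            params = train_step(params, t_max)
    return params
```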
Evaluation. Predictions û_θ for PINNs/FF-PINNs and ĉ_θ for the wave velocity surrogate model are evaluated using the relative ℓ_2 error
ϵ_ℓ_2 = √(∑_i=1^n (ŷ_θ - y_ref)^2 / ∑_i=1^n y_ref^2)
where ŷ_θ is the prediction of the neural network, y_ref represents the ground truth, and n is the number of test sampling points.
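For completeness, the relative ℓ_2 error used throughout the evaluation can be computed as follows (a direct transcription of the standard relative ℓ_2 norm).

```python
import numpy as np

def relative_l2_error(y_pred, y_ref):
    # relative l2 error between a prediction and the reference solution
    return np.sqrt(np.sum((y_pred - y_ref) ** 2) / np.sum(y_ref ** 2))
```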
In order to assess the accuracy of the proposed method for solving the wave equation, the results of the finite difference method (FDM) are used as reference data. We use unsplit convolutional perfectly matched layer (PML) boundary condition <cit.> for staggered-grid second-order finite difference modeling of the time-dependent 2D acoustic wave equation. For the infinite medium wave propagation problem, we add 10-grid PML boundaries to each of the four boundaries of the rectangular domain to prevent spurious boundary-reflected waves.
§.§ The Fourier feature velocity surrogate model
Previous studies have avoided the tendency of neural networks to converge to a zero solution by ensuring that sufficient collocation points are added near the source location. However, this approach incurs a significant amount of computation and may bias training, as the networks focus excessively on optimizing around the source while neglecting other areas. According to our experiments, randomly re-sampling the spatio-temporal domain in each iteration and applying SGD is sufficient to learn the source information, thus avoiding additional sampling in the region around the source.
Due to the heterogeneity of the model, the wave velocity corresponding to the randomly sampled spatial coordinates must be obtained quickly in each epoch to participate in the calculation of the PDE residual loss. To solve this problem, we use a supervised neural network for fast prediction of wave velocity models. The velocity model prediction network is trained using coordinate-velocity labeling pairs before the simulation starts. The input to the network is the spatial coordinates 𝐱={x,z} and the output is the corresponding wave velocity ĉ_θ. Upon completion of the velocity network training, the prediction of velocity can be executed in each subsequent PINNs/FF-PINNs training epoch, rendering the impact on training time negligible.
The predictive efficiency and accuracy of the velocity model neural network are crucial to the overall workflow. Since neural networks tend to learn low-frequency functions, a reasonable conjecture is that they may struggle to learn the detailed structure of complex velocity models. We use a comparative experiment on the Overthrust velocity model to demonstrate the importance of Fourier feature mapping. The Fourier feature NNs and fully connected NNs followed the same hyperparameter settings: we trained 5-layer neural networks with 20 neurons per hidden layer for 100,000 epochs using the Adam optimizer and the Swish activation function. The wave velocity and spatial coordinates were scaled by the same proportion according to the principle of wave velocity normalization.
Table <ref> compares the relative ℓ_2 errors of the predictions of fully connected NNs and Fourier feature NNs with different σ. As can be seen from Table <ref>, the use of Fourier feature mapping improves the prediction accuracy by about an order of magnitude. The relative ℓ_2 error of the fully connected NNs is 2.166e^-2, while that of the Fourier feature NNs with σ=15 is 1.483e^-3. A comparison of the absolute errors of the predictions of the fully connected NNs and the σ=15 Fourier feature NNs with respect to the true velocity model is shown in Fig. <ref>. For complex velocity models with sharp interfaces, fully connected NNs are unable to learn high-frequency information and thus give blurred predictions because of the spectral bias pathology. The Fourier feature NNs, on the other hand, showed low sensitivity to σ in the tests, achieving low errors (ϵ_ℓ_2<2e^-3) over a wide range (σ∈[5, 25]). The Fourier feature velocity surrogate model proposed in this section provides a robust and accurate prediction of the medium model for the next stage of the solution process.
§ ABLATION STUDY
We conduct a detailed ablation study through a homogeneous medium case, which is used to evaluate several aspects that researchers and practitioners should consider when solving the wave equation using FF-PINNs. The discussion highlights the importance of choosing appropriate activation functions, random sampling strategies, and scaling of Fourier feature mappings.
The region of the computational model is x∈[0, 600] m, z∈[0, 600] m, with c=500 m/s. The total computation time is 0.9 s. Since the velocity is spatially invariant for a homogeneous medium, it can be defined explicitly during the iterative process, obviating the need for the Fourier feature velocity surrogate model described in Section <ref>. We do not consider ABCs in this case, thus the loss function contains only the PDE loss term. In both the vanilla PINNs method and FF-PINNs, a fully connected neural network with 5 hidden layers and 50 neurons per layer was used. The number of collocation points for the PDE residual is N_r=30,000. The results are averaged over 5 independent trials, each optimized with the Adam optimizer for 10,000 iterations.
§.§ Activation functions and random sampling strategies
The central frequency of the Ricker source time function considered is f_0=10 Hz, and α=0.02. We used the SGD algorithm to evaluate the performance of some of the activation functions discussed in Section <ref>, and report all the results in Table <ref>. It is worth noting that the commonly used Tanh activation function performs poorly (ϵ_ℓ_2=0.6212) even when σ is set to a proper value. In addition, Swish outperforms Gaussian and Sin in all cases, with Gaussian slightly outperforming Sin. The best performance (ϵ_ℓ_2=0.0398) was achieved with the Swish activation and σ=1 for the Gaussian Fourier feature network. As discussed in Section <ref>, the BGD strategy cannot be used in this problem, probably due to insufficient learning of information about the point source; the results in Table <ref> validate this finding. Fig. <ref> shows a comparison of the wavefield snapshots of the predicted solutions of FF-PINNs with the FDM results at five moments. All subsequent experiments herein employ the Swish activation and the SGD training strategy.
§.§ Fourier mapping
The parameter 𝐁 determines the preferred frequency range learned by the Fourier feature network described by (5), and thus a reasonable choice of the distribution of 𝐁 is a prerequisite for accurate prediction by FF-PINNs. In Gaussian Fourier feature neural networks, the scale factor σ controls the range of the 𝐁 distribution. Previous works that utilized FF-PINNs to simulate the wave equation in the frequency <cit.> and time domains <cit.> sampled 𝐁 from a uniform distribution with zero as the axis of symmetry. We use Fourier feature mappings sampled from different distribution families (Gaussian, Laplacian, and uniform) and sweep over the standard deviation of each distribution. The Gaussian density is (1/(√(2π)σ)) exp(-(x-μ)^2/(2σ^2)), where σ is the standard deviation. The Laplacian density is (1/(2b)) exp(-|x-μ|/b), where b>0 is a scale parameter sometimes referred to as the "diversity"; the standard deviation of the Laplacian distribution is σ=√(2) b. The standard deviation of a uniform distribution on [-b_max, b_max] is b_max/√(3). Building on the previous section, a source with f_0=10 Hz and α=0.02 is considered in this section.
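The sketch below shows how the Fourier basis matrix 𝐁 can be drawn from the three distribution families with a matched standard deviation σ; the shape (m, d) and the seed are illustrative assumptions.

```python
import numpy as np

def sample_B(family, sigma, m=256, d=3, seed=0):
    rng = np.random.default_rng(seed)
    if family == "gaussian":
        return rng.normal(0.0, sigma, size=(m, d))
    if family == "laplace":
        b = sigma / np.sqrt(2.0)            # std of Laplace(0, b) is sqrt(2) * b
        return rng.laplace(0.0, b, size=(m, d))
    if family == "uniform":
        b_max = sigma * np.sqrt(3.0)        # std of U(-b_max, b_max) is b_max / sqrt(3)
        return rng.uniform(-b_max, b_max, size=(m, d))
    raise ValueError(f"unknown family: {family}")
```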
Fig. <ref> shows the results of the hyperparameter sweeps. The scatter plots of the results induced by the three distribution families exhibit a characteristic "U-shaped" pattern: both excessively small and excessively large values of σ lead to comparatively large prediction errors. A lower σ may make it difficult to learn information about the point source, while higher encoding frequencies can introduce salt-and-pepper artifacts. Ideally, σ should be selected such that the bandwidth of the NTK matches that of the target signals; this not only accelerates training convergence but also improves prediction accuracy. However, the spectral information of the solution may not be accessible when solving forward PDEs. In this work, we perform a proper hyperparameter search starting from σ=1. From the ablation study in this section, it can be observed that the exact sampling distribution family is much less important than the distribution's scale (standard deviation), consistent with the findings of Tancik et al. <cit.> in their experiments on 1D target signals. In the subsequent experiments, we only use Gaussian Fourier feature PINNs to model forward wave propagation.
§ EXPERIMENTAL RESULTS
In this section, we evaluate the representational capacity of the designed FF-PINNs architecture in modeling the propagation of acoustic waves through complex infinite media. Before conducting FF-PINN simulations, we construct a velocity surrogate model using coordinate-velocity label data. We train Gaussian Fourier feature NNs with 5 hidden layers, each containing 20 neurons, employing an Adam optimizer with a learning rate of 5e^-3 for 100,000 epochs, where σ=15. All the numerical implementations are coded in Jax <cit.> and performed on an NVIDIA Tesla V100 GPU card (32G) in a standard workstation.
The accuracy impact due to ABCs is shown in Table <ref>, where the number of the ABCs collocation points sampled randomly from every boundary in each training iteration is N_ce=2000. The term "w/ ABCs" in Table <ref> refers to the second-order Clayton-Engquist acoustic paraxial approximation boundary applied on all four boundaries of the rectangular domain.
§.§ 4-layer velocity model
A 4-layer velocity model case is designed to assess the ability of FF-PINNs to learn the wavefield solutions in horizontal layered media. As shown in Fig. <ref>a, the computational model is a rectangular domain with a width of 1.2 km and a height of 1.2 km. We have set up four layers distributed evenly along the depth, with velocities of 0.6 km/s, 0.8 km/s, 1.0 km/s, and 1.4 km/s respectively. A Ricker source excitation with f_0=5 Hz and α= 0.01 is applied at {x_s,z_s}={0.6, 0.24} km in the spatial domain. The total calculated time for wave propagation is 1.2 s.
The FF-PINNs contain 5 hidden layers with 80 neurons per layer and a standard deviation σ=1. The PDE residual collocation points, N_r=80,000, were randomly sampled from the spatio-temporal domain in each training iteration. Fig. <ref> illustrates the comparison between the results of the staggered-grid finite difference method and FF-PINNs after 30,000 training epochs using the Adam optimizer. Seismic waves propagating through layered velocity interfaces give rise to reflection and transmission phenomena, as well as compression and expansion of the waves. Under the same experimental setup, we consider only the effect of the ABCs soft regularization constraint: the relative ℓ_2 error of the FF-PINNs prediction with ABCs is 0.0614, while the relative ℓ_2 error without ABCs is 0.0763. Since the wave velocity model is not complex, FF-PINNs without ABCs can also learn the behavior of transmitted outgoing waves. Moreover, the additional ABCs loss term may slow down the optimization of the PDE loss term in the early training epochs, which explains the small difference in the errors. Fig. <ref> illustrates the comparison between the FF-PINNs predictions incorporating ABCs and the results obtained through FDM. Upon examining the wavefield snapshots presented in Fig. <ref>, it becomes evident that the FF-PINNs adeptly capture the distinct physical phenomena occurring near the medium interfaces.
§.§ Random mixed Gaussian velocity model
We further increase the complexity of the wavefield by using three source terms with different α values and considering a random mixed Gaussian velocity model. Based on a background wave velocity of c_0(𝐱)=1.0 km/s, the following random mixed Gaussian model is incorporated
G_c(𝐱) = M_c exp(-1/2 ‖(𝐱-𝐱_c)/α_c‖_2^2),
where M_c represents the amplitude, which is set as a random number between [-3, 3]. The location 𝐱_c is uniformly distributed between [0, 1]. The α_c is set to 1. The resulting random mixed Gaussian velocity model (c_0(𝐱)+G_c(𝐱)) is illustrated in Fig. <ref>b.
We consider a Ricker wavelet time function with a duration of 1.0 s and a central frequency of 10 Hz. The kernel widths α of the three sources {s_1,s_2,s_3} are 0.02, 0.03, and 0.025, and the source locations 𝐱_c={x_c,z_c} are {0.3,0.3} km, {0.5, 0.5} km and {0.65, 0.65} km, respectively. FF-PINNs with standard deviation σ=1 are used to approximate the displacement solution. The neural networks consist of 8 hidden layers with 80 neurons per layer. During each training iteration, the number of PDE residual collocation points randomly sampled from the spatio-temporal domain is N_r=80,000. After 30,000 epochs of training using the Adam optimizer, the relative ℓ_2 errors of the predicted solutions for FF-PINNs with and without ABCs are 0.0676 and 0.0976, respectively. Table <ref> shows the corresponding comparison. Fig. <ref> shows the comparison between the FF-PINNs prediction with ABCs and the FDM results. It is clear that FF-PINNs learn the complex wave propagation dynamics generated by the three sources, and the predicted solution matches the ground truth very well.
§.§ 2D Marmousi model and Overthrust model
Next, we apply the proposed framework to the extracted Marmousi model and Overthrust model, as shown in Fig. <ref>. To test the capability of FF-PINNs in handling velocity interfaces with sharp variations, we did not apply any smoothing to the velocity models. For the extracted Marmousi model, the model size is 176 × 138 and the spatial sampling interval is 16 m in both the x and z directions. The source time function is a Ricker wavelet with f_0=10 Hz and α=0.01, located at the center of the model. For the extracted Overthrust model, the model size is 201×100 and the spatial sampling interval is 25 m. The center frequency f_0 of the Ricker wavelet is 15 Hz and α=0.02. The total computational time for wave propagation in both cases is 1.0 second.
FF-PINNs with standard deviation σ=1.5 are used to approximate the displacement solution. The neural networks consist of 5 hidden layers with 128 neurons per layer. The number of PDE residual collocation points randomly sampled from the spatio-temporal domain is N_r=80,000.
Due to the increased complexity of the Marmousi and Overthrust models under consideration, the sequential training strategy of time-domain decomposition introduced in Section <ref> is applied to both of these cases. This training strategy is consistent with the laws of physics for the time-dependent dynamical systems. We initially conducted 5,000 pre-training epochs using the Adam optimizer within the time range of [0, 0.3] s, followed by 10,000 epochs of training within the time ranges of [0, 0.4] s and [0, 0.5] s, and finally, 20,000 epochs within the range of [0, 0.6] s to obtain the solution for the entire time domain.
We show in Table <ref> the accuracy comparison between the Marmousi case and the Overthrust case with and without ABCs. With ABCs, the relative ℓ_2 errors are 0.0781 and 0.0813 for the two cases, respectively. However, without ABCs, the errors significantly increase to 0.1806 and 0.2128, respectively. Figs. <ref> and <ref> show the snapshots of the predictions of FF-PINNs (with and without ABCs) compared to the FDM-simulated wavefields for the 5 moments of the Marmousi and Overthrust models. Compared to the incident waves, the refracted and reflected waves are much smaller in scale. This poses a significant challenge for fully-connected neural networks as approximators in PINNs. As can be observed, the continuous fine-grained time-domain decomposition training strategy and the soft regularization of ABCs have enabled FF-PINNs to capture the details of waves, yielding high-fidelity wavefield simulation results. When the ABCs soft regularization constraint is not imposed, spurious reflected waves are found to be generated around the boundaries. This leads to very large errors near the boundaries, which further extend to the full spatial domain, severely affecting the accuracy performance of FF-PINNs. On the contrary, the proper application of ABCs can effectively attenuate or even eliminate unreasonable errors at the boundaries, leading to a clear and accurate wavefield solution. This provides strong evidence supporting the necessity of imposing ABCs within the proposed framework.
§ DISCUSSION
Boundary conditions. This work focuses on the necessity and applicability of ABCs in solving 2D acoustic equations in infinite complex media using FF-PINNs. Free surface boundary conditions are not imposed in this work, mainly because of the possible failure of soft-constrained zero-stress boundary conditions <cit.>, which is still an open problem to be solved by the community. Hard-embedded boundary conditions are a reasonable and promising approach. For second-order wave equations, hard embedding of homogeneous Dirichlet boundary conditions (fixed boundaries) is straightforward for PINNs. However, hard embedding of the homogeneous Neumann boundary conditions cannot be achieved by directly modifying the neural network output. Developments in mesh-free methods and the deep learning community provide theoretical support for potential implementations <cit.>.
Efficacy of sharp velocity models. When dealing with non-smooth complex media, PINNs will tend to give smooth wavefields. This may be attributed to two reasons: 1) “spectral bias” pathologies that make learning high frequencies challenging, and 2) commonly used activation functions, such as Tanh, tend to produce smooth output results <cit.>. The FF-PINNs and Swish activation function used in this work significantly enhanced the efficacy and accuracy of neural networks in handling sharp velocity models. It is worth mentioning that updating the randomly sampled collocation points (SGD strategy and velocity surrogate model) at each training epoch is crucial for mitigating point source singularity and learning the details of small scattered and reflected waves. Thus, the proposed approach is an extensible, unified framework with high resolution and reasonable accuracy. Extension to time-domain elastic wave equations solving is a natural progression, and we will present related research in future work.
Computational efficiency. The second-order ABCs in our proposed framework involve additional second-order derivative calculations, which will increase the training cost to some extent. Nevertheless, the previous numerical case comparisons have provided evidence for the necessity of imposing ABCs. Recent studies have added source location as an additional input parameter to the network <cit.> and explored transfer learning <cit.> to improve simulation efficiency. However, computational efficiency remains a major limitation for the application of PINNs methods to large-scale computational scenarios. PINNs method requires the evaluation of PDEs residuals over a large number of sampling points on the internal domain, and the computation of higher-order derivatives through automatic differentiation can be time-consuming. Discrete learning methods <cit.> and neural spectral methods that learn PDE solutions in the spectral domain <cit.> have the potential to alleviate the computational burden of learning the propagation of seismic waves on a large scale.
Better frequency manipulation. The Fourier feature parameters must still be initialized empirically or through a hyperparameter sweep, which is a major shortcoming of this work. If observations are available, the initial values of the Fourier feature parameters can be set based on the spectral information of the target function. Furthermore, our framework embeds the spatio-temporal coordinates into the high-frequency space with the same scale factor σ. However, the "frequencies" in the temporal and spatial domains may differ, and the gap between them may even be significant. In such cases, spatio-temporal multi-scale Fourier feature embedding <cit.> may be a promising idea. The frequency of the Fourier feature mapping determines the frequency of the NTK feature vectors, which can provide more fundamental guidance on the training dynamics through the lens of NTK theory <cit.>. In conclusion, how to rigorously establish a general theory of the relationship between frequency and Fourier features is an open problem that requires theoretical support from future research.
§ CONCLUSION
In this study, we present a unified framework of Fourier feature PINNs for modeling high-frequency wave propagation in sharp and complex media. The accuracy comparison of the activation functions and sampling strategies in dealing with the point source problem is discussed in detail. The combination of the Swish activation function and the SGD strategy achieves the best performance in all the experiments. The pre-trained wave velocity surrogate model provides fast and accurate wave velocity predictions in each training epoch. Fourier feature neural networks sampled from different families of distributions (Gaussian, Laplace, and uniform) have similar error distribution patterns. The standard deviation of the distribution family determines the range of coordinate frequency mapping and is an important parameter affecting the accuracy of FF-PINNs. For the case of non-smooth complex media, the imposition of ABCs avoids spurious boundary-reflected waves and significantly improves the ability of NNs to learn detailed waves. Realistic assessments demonstrate the efficacy of the proposed framework compared to the vanilla PINNs, particularly in scenarios involving high-frequencies and non-smooth, heterogeneous real-world models.
§ ACKNOWLEDGMENTS
The authors are grateful for the research funding provided by the National Natural Science Foundation of China (NSFC, Grant Nos. 52192675, U1839202).
|
http://arxiv.org/abs/2409.03470v1 | 20240905123151 | Improving Uncertainty-Error Correspondence in Deep Bayesian Medical Image Segmentation | [
"Prerak Mody",
"Nicolas F. Chaves-de-Plaza",
"Chinmay Rao",
"Eleftheria Astrenidou",
"Mischa de Ridder",
"Nienke Hoekstra",
"Klaus Hildebrandt",
"Marius Staring"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.HC",
"cs.LG"
] |
§ ABSTRACT
Increased usage of automated tools like deep learning in medical image segmentation has alleviated the bottleneck of manual contouring. This has shifted manual labour to quality assessment (QA) of automated contours, which involves detecting errors and correcting them. A potential solution to semi-automated QA is to use deep Bayesian uncertainty to recommend potentially erroneous regions, thus reducing time spent on error detection. Previous work has investigated the correspondence between uncertainty and error; however, no work has been done on improving the "utility" of Bayesian uncertainty maps such that uncertainty is present only in inaccurate regions and not in accurate ones. Our work trains the FlipOut model with the Accuracy-vs-Uncertainty (AvU) loss, which promotes uncertainty to be present only in inaccurate regions. We apply this method to datasets of two radiotherapy body sites, i.e., head-and-neck CT and prostate MR scans. Uncertainty heatmaps (i.e., predictive entropy) are evaluated against voxel inaccuracies using Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves. Numerical results show that, compared to the Bayesian baseline, the proposed method successfully suppresses uncertainty for accurate voxels, with a similar presence of uncertainty for inaccurate voxels. Code to reproduce experiments is available at <https://github.com/prerakmody/bayesuncertainty-error-correspondence>.
Bayesian Deep Learning, Bayesian Uncertainty, Uncertainty-Error Correspondence, Uncertainty Calibration, Contour Quality Assessment, Model Calibration
§ INTRODUCTION
In recent years, deep learning models are being widely used in radiotherapy for the task of medical image segmentation. Although these models have been shown to accelerate clinical workflows <cit.>, they still commit contouring errors <cit.>. Thus, a thorough quality assessment (QA) needs to be conducted, which places a higher time and manpower requirement on clinical resources. This creates a barrier to the adoption of such deep learning models <cit.>. Moreover, it also creates an obstacle for adaptive radiotherapy (ART) workflows, which have been shown to improve a patient's post-radiation quality-of-life <cit.>. This obstacle arises due to ART's need of regular contour updates. Currently, commercial auto-contouring tools do not have the ability to assist with quick identification and rectification of potentially erroneous predictions <cit.>.
Quality assessment (QA) of incorrect contours would require two steps – 1) error detection and 2) error correction <cit.>. Currently, errors are searched for by manual inspection and then rectified using existing contour editing tools. Error detection could be semi-automated by recommending either potentially erroneous slices of a 3D scan <cit.>, or by highlighting portions of the predicted contours <cit.> or blobs <cit.>. Upon detection of the erroneous region, the contours could be rectified using point or scribble-based techniques <cit.> in a manner that adjacent slices are also updated. For this work, we will focus on error detection.
Various approaches to error detection have suggested using Bayesian Deep Learning (BDL) and the uncertainty it produces as a means to capture potential errors in the predicted segmentation masks <cit.>. Although such works established the potential usage of uncertainty in the QA of predictions, it may not be sufficient in a clinical workflow that relies on pixel-wise uncertainty as a proxy for error detection. In our experiments with deep Bayesian models, we observed that the relationship between prediction errors and uncertainty is sub-optimal, and hence has low clinical "utility". Ideally, for semi-automated contour QA, uncertainty should be present only in inaccurate regions and not in accurate ones. The literature sometimes refers to this as uncertainty calibration <cit.>, but we find this term incorrect since, historically, calibration refers to the probabilities of a particular event <cit.>. Thus, we believe it is semantically incorrect to say uncertainty calibration and instead propose the term uncertainty-error correspondence.
To create a Bayesian model that is incentivized to produce uncertainty only in inaccurate regions, we use the Accuracy-vs-Uncertainty (AvU) metric <cit.> and its probabilistic loss version <cit.> during training of a UNet-based Bayesian model <cit.>. This loss promotes the presence of both accurate-if-certain (n_ac) as well as inaccurate-if-uncertain (n_iu) voxels in the final prediction (fig:abstract_visual). With uncertainty present only around potentially inaccurate regions, one can achieve improved synergy between clinical experts and their deep learning tools during the QA stage. Our work is the first to use the AvU loss in a dense prediction task like medical image segmentation and also with datasets containing natural and not synthetic variations as was previously done <cit.>. This work extends our conference paper <cit.> with additional datasets, experiments and metrics. There, we adapt the original AvU loss by considering the full theoretical range of uncertainties in the loss, rather than one extracted from the validation dataset <cit.>. For our work we use the predictive entropy as an uncertainty metric <cit.>.
Several other approaches have been considered in the context of uncertainty, e.g., ensembles, test-time augmentation (TTA) and model calibration. While ensembles of models have good segmentation performance <cit.>, they are parameter heavy. TTA <cit.> performs inference by modulating a model's inputs, but does not perform additional training, so it may be unable to transcend the model's limitations. Calibration techniques attempt to make predictions less overconfident <cit.>; however, they do not explicitly align model errors with uncertainty. All the above methods are benchmarked on the truthfulness of their output probabilities (when compared against voxel accuracies) using metrics like the expected calibration error (ECE). However, a model with lower ECE than its counterparts may not necessarily have higher uncertainty-error correspondence.
Finally, to evaluate calibrative and uncertainty-error correspondence metrics, one needs to compute the “true" inaccuracy map. Similar to our conference paper <cit.> and inspired by <cit.>, we classify inaccuracies of predicted voxel maps into two categories: “errors" and “failures" (see sec:appendix_a). Segmentation “errors" are those inaccuracies which are considered an artifact similar to inter-observer variation, a phenomenon common in medical image segmentation <cit.>. Thus, we consider these smaller inaccuracies to be accurate in our computations, under the assumption they do not require clinical intervention. In the context of contour QA, such voxels should ideally be certain. Hence, only the segmentation “failures" are a part of the “true" inaccuracy map used to calculate the calibrative and uncertainty-error correspondence metrics.
To summarize, our contributions are as follows:
* For the purpose of semi-automated quality assessment of predicted contours, we aim to improve uncertainty-error correspondence (unc-err) in a Bayesian medical image segmentation setting, pioneering this in the context of radiation therapy. Specifically, we propose using the loss form of the Accuracy-vs-Uncertainty (AvU) metric while training a deep Bayesian segmentation model.
* We compare our Bayesian model with the AvU loss against an ensemble of deterministic models, five approaches employing calibration-based losses and also test time augmentation. We also perform an architectural comparison by comparing models with Bayesian convolutions placed in either the middle layers or decoder layers of a deep segmentation model.
* We benchmark unc-err of the segmentation models on both in- and out-of-distribution radiotherapy datasets containing head-and-neck CT and Prostate MR scans. Models are benchmarked on these datasets across discriminative, calibrative and uncertainty-error correspondence metrics.
§ RELATED WORKS
§.§ Epistemic and aleatoric uncertainty
Recent years have seen an increase in work that utilizes probabilistic modeling in deep medical image segmentation. The goal has been to account for uncertainty due to noise in the dataset (aleatoric uncertainty) as well as in the limitations of the predictive models learning capabilities (epistemic uncertainty). Noise in medical image segmentation refers to factors like inter- and intra- annotator contour variation <cit.> due to factors such as poor contrast in medical scans. Works investigating aleatoric uncertainty model the contour diversity in a dataset by either placing Gaussian noise assumptions on their output <cit.> or by assuming a latent space in the hidden layers and training on datasets containing multiple annotations per scan <cit.>. A popular and easy-to-implement approach to model for aleatoric uncertainty is called test-time augmentation (TTA) <cit.>. Here, different transformations of the image are passed through a model, and the resulting outputs are combined to produce both an output and its associated uncertainty.
In contrast to aleatoric uncertainty, epistemic uncertainty could be used to identify scans (or parts of the scan) that are very different from the training dataset. Here, the model is unable to make a proper interpolation from its existing knowledge. Methods such as ensembling <cit.> and Bayesian posterior inference (e.g., Monte-Carlo DropOut, Stochastic Variational Inference) <cit.> are common methods to model epistemic uncertainty in neural nets. While Bayesian modeling is a more mathematically motivated and hence, principled approach to estimating uncertainty, ensembles have been motivated by the empirically-proven concept of bootstrapping. In contrast to Bayesian models where the perturbation is modelled by placing distributions on weights, ensembles use either different model weight initializations, or different subsets of the training data. In Bayesian inference techniques, perturbations are introduced in the models activation or weight space. Dropout <cit.> and DropConnect <cit.> are popular techniques that apply the Bernoulli distribution on these spaces. Stochastic variational inference (SVI) is another type of weight space perturbation that usually assumes the more expressive Gaussian distribution on the weights. Bayes by Backprop <cit.> and its resource-efficient variant such as FlipOut <cit.> are examples of SVI. For our work, we consider approaches that are designed for both epistemic uncertainty (Ensembles and SVI models) as well as aleatoric uncertainty (TTA).
§.§ Uncertainty use during training
Other works also use the uncertainty from a base segmentation network to automatically refine its output using a follow-up network. This refinement network can be graphical <cit.> or simply convolutional <cit.>. Uncertainty can also be used in an active learning scenario, either with <cit.> or without <cit.> interactive refinement. Shape-based features of uncertainty maps have also been shown to identify false positive predictions <cit.>. Similarly, we too use uncertainty in our training regime, but with the goal of promoting uncertainty only in those regions which are inaccurate, an objective not previously explored in medical image segmentation.
§.§ Model calibration
In the context of segmentation, model calibration error is inversely proportional to the alignment of a model's output probabilities with its pixel-wise accuracy. Currently, there is no proof that a reduction in model calibration error leads to improved uncertainty-error correspondence; however, a weak link can be assumed since both are derived from a model's output probabilities. It is well known that the probabilities of deterministic models trained with the cross entropy (CE) loss are not well calibrated <cit.>. This means that they are overconfident on incorrect predictions and hence fail silently, which is an undesirable trait in the context of segmentation QA and needs to be resolved.
To abate this overconfidence issue, methods such as post-training model calibration (or temperature scaling) <cit.>, ensembles <cit.>, calibration-focused training losses <cit.> and calibration-focused training targets <cit.> have been shown to improve the calibration of deterministic models. Temperature scaling, a post-training model calibration technique, has been shown to perform poorly in out-of-domain (OOD) settings <cit.>, relies wholly on an additional validation dataset and/or needs explicit shape priors <cit.>. Local temperature scaling techniques that calibrate at the image or pixel level have been proposed <cit.>; however, they are conceptually similar to the base method and are hence plagued by the same concerns. Others <cit.> used a shape prior module for out-of-domain robustness, but only introduced synthetic textural variations in their work.
Another approach to model calibration is to regularize a model during training to promote uncertainty. For example, the ECP <cit.> technique explicitly adds the negative entropy to the training loss. Conversely, the Focal loss <cit.> attempts to calibrate a model implicitly by assigning lower weights (during training) to more confident predictions. Other methods smooth the hard targets of the ground truth towards a uniform distribution in the limit. Label Smoothing <cit.> modifies the class distribution of a pixel by taking a weighted average (with parameter α) of the hard target and a uniform distribution, while Spatially Varying Label Smoothing (SVLS) <cit.> modifies a pixel's class allocation by considering the classes around it. To avoid making the model's predictions excessively uniform, Margin-based Label Smoothing (MBLS) <cit.> reformulates the above approaches by showing that they essentially perform loss optimization with an equality constraint applied to a pixel's logits. MBLS softens this equality constraint to achieve a better discriminative-calibrative trade-off: the maximum logit of a pixel is subtracted from its other logits, and only logit distances greater than a predetermined margin are penalized. Others extend the MBLS framework by learning class-specific weights for the equality constraint <cit.> or by reformulating SVLS into a similar formulation <cit.>. Although these methods attempt to make models less overconfident, they do not explicitly align a model's error with its uncertainty.
There also exist other approaches to model calibration for e.g., multi-task learning <cit.>, mixup augmentation <cit.> and shape priors <cit.>. Multi-task learning requires additional data that may not always be present, while mixup creates synthetic samples which are not representative of the real data distribution. Finally, shape priors may not be applicable to tumors with variable morphology.
Model calibration techniques are evaluated by metrics like Expected Calibration Error (ECE) and its variants <cit.>, however others have also proposed terms like Uncertainty-Calibration Error (UCE) <cit.>. While ECE evaluates the equivalency between accuracy and predicted probability, UCE compares inaccuracy and uncertainty. However, while it is semantically appropriate to expect an average probability of p (0 ≤ p ≤ 1) to give the same average accuracy (i.e., the mathematical formulation of ECE), the same is not appropriate for inaccuracy and uncertainty u (0 ≤ u ≤ 1). Hence, UCE is not applicable to our work.
To conclude, the issue with each of the aforementioned techniques for epistemic, aleatoric and calibrative modeling is that they do not explicitly train the model to develop an innate sense of potential errors on a given segmentation task. Given that this is the primary requirement from a contour QA perspective, these models may be unable to have good uncertainty-error correspondence.
§ METHODS
§.§ Neural Architecture
We adopt the OrganNet2.5D neural net architecture <cit.> which is a standard encoder-decoder model connected by four middle layers. It contains both 2D and 3D convolutions in the encoder and decoder as well as hybrid dilated convolutions (HDC) in the middle. This network performs fewer pooling steps to avoid losing image resolution and instead uses HDC to expand the receptive field. To obtain uncertainty corresponding to the output, we add stochasticity to the deterministic convolutional operations by replacing them with Bayesian convolutions <cit.>. We experiment with replacing deterministic layers in both the HDC as well as the decoder layers to understand the effect of placement.
In a Bayesian model, a prior distribution is placed upon the weights and is then updated to a posterior distribution on the basis of the training data. During inference (eq:1_variation_inference), we sample from this posterior distribution p(W|D) to estimate the output distribution p(y|x,D) with x, y and W being the input, output and neural weight respectively:
p(y|x,D) = E_W ∼p(W|D)[p(y|x,W)].
This work uses a Bayesian posterior estimation technique called stochastic variational inference, where instead of finding the true, albeit intractable posterior, it finds a distribution close to it. We chose FlipOut-based <cit.> convolutions which assume the distribution over the neural weights to be a Gaussian and are factorizable over each hidden layer. Pure variational approaches would need to sample from this distribution for each element of the mini-batch <cit.>. However, the FlipOut technique only samples once and multiplies that random sample with a Rademacher matrix, making the forward pass less computationally expensive.
§.§ Training Objectives
In this section, we use a notation where capital letters denote arrays and lowercase letters denote scalar values.
§.§.§ Segmentation Objective
Upon being provided a 3D scan as input, our segmentation model predicts for each class c ∈ C, a 3D probability map P̂_̂ĉ of the same size. Each voxel i ∈ N has a predicted probability vector P̂^̂î containing values p̂_̂ĉ^̂î for each class that sum to 1 (due to softmax). To calculate the predicted class of each voxel ŷ^̂î, we do:
ŷ^i = argmax_c ∈ C p̂_c^i.
To generate a training signal, the predicted probability vector P̂^̂î is compared to the corresponding one-hot vector Y^i in the gold standard 3D annotation mask. Y^i is composed of y^i_c ∈{0,1}.
Inspired by <cit.>, we re-frame the binary cross-entropy loss (eq:3_cross_entropy), as penalizing both the foreground (y_c^i=1) and background ((1-y_c^i)=1) voxels of the probability maps of each class with a weight w_c:
L_CE = - 1/|C| ∑_c ∈ C w_c ∑_i ∈ N ( y_c^i ln(p̂_c^i) + (1-y_c^i) ln(1 - p̂_c^i) ).
Note that we do not use the DICE loss for training as it has been shown to yield poorer model calibration <cit.>. Also, since the CE loss is susceptible to failure under class imbalance, we use its weighted version.
§.§.§ Uncertainty Objective
In a Bayesian model, multiple forward passes (m ∈ M) are performed and the output 3D probability maps (P̂_c)_m of each pass are averaged to obtain P̂_c (eq:1_variation_inference). Using P̂_c, we can calculate a host of statistical measures like entropy, mutual information and variance. We chose entropy as it has been shown to capture both epistemic uncertainty, which we explicitly model in the FlipOut layers, as well as aleatoric uncertainty, which is implicitly modeled due to the training data <cit.>. We use the predicted class probability vector P̂^i for each voxel and calculate its (normalized) entropy u^i:
u^i = -1/ln(|C|) ∑_c ∈ C p̂_c^i ln(p̂_c^i).
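A direct transcription of this normalized entropy, computed from the MC-averaged class probabilities (channel axis first), is sketched below; the small epsilon guarding log(0) is an implementation assumption.

```python
import numpy as np

def normalized_entropy(probs, eps=1e-12):
    # probs: averaged softmax probabilities of shape (C, H, W, D); returns per-voxel entropy in [0, 1]
    num_classes = probs.shape[0]
    entropy = -np.sum(probs * np.log(probs + eps), axis=0)
    return entropy / np.log(num_classes)
```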
Since we have access to the gold standard annotation mask, each voxel has two properties: accuracy and uncertainty. Accuracy is determined by comparing the gold standard class y^i to the predicted class ŷ^̂î. We use this to classify them in four different categories represented by n_ac, n_au, n_ic and n_iu, where n stands for the total voxel count and a, i, u, c represent the accurate, inaccurate, uncertain and certain voxels. A visual representation of these terms can be seen in fig:abstract_visual. Here, a voxel is determined to be certain or uncertain on the basis of a chosen uncertainty threshold t ∈ T where the maximum value in T is the maximum theoretical uncertainty threshold <cit.>. The aforementioned four terms are the building blocks of the Accuracy-vs-Uncertainty (AvU) metric <cit.> as shown in eq:5_avu - eq:6_avu_terms and it has a range between [0,1]. A higher value indicates that uncertainty is present less in accurate regions and more in inaccurate regions, thus improving the “utility" of uncertainty as a proxy for error detection.
AvU^t = (n_ac^t + n_iu^t) / (n_ac^t + n_au^t + n_ic^t + n_iu^t)

n_ac^t = ∑_i ∈ {y^i = ŷ^i & u^i ≤ t} 1,    n_au^t = ∑_i ∈ {y^i = ŷ^i & u^i > t} 1,
n_ic^t = ∑_i ∈ {y^i ≠ ŷ^i & u^i ≤ t} 1,    n_iu^t = ∑_i ∈ {y^i ≠ ŷ^i & u^i > t} 1.
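The AvU metric at a single threshold can be computed directly from a boolean accuracy map and the voxel-wise uncertainty map, as in the sketch below.

```python
import numpy as np

def avu_metric(accurate, uncertainty, t):
    # accurate: boolean map (prediction == ground truth); uncertainty: entropy map; t: threshold
    certain = uncertainty <= t
    n_ac = np.sum(accurate & certain)
    n_au = np.sum(accurate & ~certain)
    n_ic = np.sum(~accurate & certain)
    n_iu = np.sum(~accurate & ~certain)
    return (n_ac + n_iu) / (n_ac + n_au + n_ic + n_iu)
```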
To maximize AvU for a neural net, one can turn it into a loss to be minimized. As done in <cit.> for an image classification setting, we minimize its negative logarithm (eq:7_avu_diff) to improve the mathematical stability of gradient descent. However, the AvU metric as defined above is not differentiable with respect to the neural net's weights, since all its constituent terms are produced by thresholding or max operations that introduce discontinuities and disrupt gradient flow. The AvU metric is made differentiable by instead using the uncertainty u^i derived from P̂^i (eq:4_entropy), which allows gradients to flow, together with a smooth non-linear operation, i.e., tanh, to constrain its value (eq:8_avu_diff_terms_accurate). The differentiable uncertainty term is multiplied by other scalar weighting terms, c.f. the maximum probability (p̂^i = max(P̂^i)) and the accuracy/inaccuracy mask of a voxel. Together, these operations allow us to calculate proxy values for n_ac, n_au, n_ic and n_iu. In addition, rather than evaluating the loss at a single uncertainty threshold, we integrate over the theoretical range of the uncertainty metric. Thresholding is done by once again multiplying the uncertainty value with a binary mask; the benefits of this thresholding were shown in our conference paper <cit.>:
L_AvU^t = ln(1 + (n_au^t + n_ic^t) / (n_ac^t + n_iu^t)),
L_AvU = 1/|T| ∑_t ∈ T L_AvU^t,
where
n_ac^t = ∑_i ∈ {y^i = ŷ^i & u^i ≤ t} p̂^i · (1 - tanh(u^i)),
n_ic^t = ∑_i ∈ {y^i ≠ ŷ^i & u^i ≤ t} (1 - p̂^i) · (1 - tanh(u^i)),
n_au^t = ∑_i ∈ {y^i = ŷ^i & u^i > t} p̂^i · tanh(u^i),
n_iu^t = ∑_i ∈ {y^i ≠ ŷ^i & u^i > t} (1 - p̂^i) · tanh(u^i).
Finally, the total loss L combines the segmentation and uncertainty loss as:
L =L_CE + α· L_AvU.
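To make the proxy terms concrete, the sketch below evaluates the differentiable AvU loss and the combined objective with NumPy-style operations; in the actual training code the same expressions would be written in the deep learning framework so that gradients flow through p̂ and u (the accuracy and threshold masks act as constants), and the epsilon is an added numerical safeguard.

```python
import numpy as np

def avu_loss_at_t(p_max, u, accurate, t, eps=1e-12):
    # p_max: max softmax probability per voxel; u: normalized entropy; accurate: boolean map
    certain = u <= t
    n_ac = np.sum(p_max[accurate & certain] * (1 - np.tanh(u[accurate & certain])))
    n_ic = np.sum((1 - p_max[~accurate & certain]) * (1 - np.tanh(u[~accurate & certain])))
    n_au = np.sum(p_max[accurate & ~certain] * np.tanh(u[accurate & ~certain]))
    n_iu = np.sum((1 - p_max[~accurate & ~certain]) * np.tanh(u[~accurate & ~certain]))
    return np.log(1 + (n_au + n_ic) / (n_ac + n_iu + eps))

def total_loss(l_ce, p_max, u, accurate, thresholds, alpha):
    # average the AvU loss over the theoretical range of uncertainty thresholds
    l_avu = np.mean([avu_loss_at_t(p_max, u, accurate, t) for t in thresholds])
    return l_ce + alpha * l_avu
```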
§.§ Evaluation
§.§.§ Discriminative and Calibration Evaluation
We evaluate all models on the DICE coefficient for discriminative performance. Calibration is evaluated using the Expected Calibration Error (ECE) <cit.>. Numerical results are compared with the Wilcoxon signed-ranked test where a p-value ≤ 0.05 is considered significant.
§.§.§ Uncertainty Evaluation
As the model is trained on the Accuracy-vs-Uncertainty (AvU) metric, we calculate the AvU scores up to the maximum normalized uncertainty of the validation dataset. A curve with the AvU score on the y-axis and the uncertainty threshold on the x-axis is made and the area-under-the-curve (AUC) for each scan is calculated. AUC scores provide us with a summary of the model performance regardless of the uncertainty threshold, and hence we use it to compare all models.
The AvU metric outputs a single scalar value for the whole scan and does not offer much insight into the differences in uncertainty coverage between the accurate and inaccurate regions. To address this, we compare the probability of uncertainty in inaccurate regions p(u|i) to the probability of uncertainty in accurate regions p(u|a). Plotting p(u|i) and p(u|a) on the y-axis and x-axis respectively, and treating n_iu, n_au, n_ac and n_ic as the counts of true positives, false positives, true negatives and false negatives, p(u|i) is the true positive rate and p(u|a) is the false positive rate. Computing these at different uncertainty thresholds provides the Receiver Operating Characteristic (ROC) curve, which we call the uncertainty-ROC curve <cit.>.
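The uncertainty-ROC curve described above can be computed with a few lines of NumPy. This is a hedged sketch; the array names and the threshold grid are assumptions, not the exact evaluation code used in this work.

```python
import numpy as np

def uncertainty_roc(uncertainty, inaccurate, thresholds):
    """Sweep uncertainty thresholds and return the uncertainty-ROC curve.

    uncertainty: (N,) per-voxel uncertainty values
    inaccurate:  (N,) boolean mask, True where the prediction is wrong
    """
    tpr, fpr = [], []
    for t in thresholds:
        uncertain = uncertainty > t
        n_iu = np.sum(uncertain & inaccurate)    # true positives
        n_au = np.sum(uncertain & ~inaccurate)   # false positives
        n_ic = np.sum(~uncertain & inaccurate)   # false negatives
        n_ac = np.sum(~uncertain & ~inaccurate)  # true negatives
        tpr.append(n_iu / max(n_iu + n_ic, 1))   # p(u|i)
        fpr.append(n_au / max(n_au + n_ac, 1))   # p(u|a)
    return np.array(fpr), np.array(tpr)

# Area under the curve via the trapezoidal rule (sorted by FPR), e.g.:
# fpr, tpr = uncertainty_roc(u, err_mask, np.linspace(0, u.max(), 50))
# auc = np.trapz(tpr[np.argsort(fpr)], np.sort(fpr))
```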
Given that ROC curves are biased in situations with class imbalance between the positive (inaccurate voxels) and negative (accurate voxels) classes, we also compute precision-recall curves <cit.>. Here, precision is the probability of inaccuracy given uncertainty p(i|u) and recall is the probability of uncertainty given inaccuracy p(u|i). Note that the precision-recall curves do not make use of n_ac, which can be high for a well-performing model.
Finally, to calculate the calibrative and uncertainty-correspondence metrics, we need an inaccuracy map. We use an inaccuracy map based on the concept of segmentation “failures" and “errors" (sec:appendix_a). To do this, we perform a morphological opening operation using a fixed kernel size of (3,3,1).
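As an illustration of this step, a possible implementation of the failure-based inaccuracy map with SciPy is sketched below. The exact definition of failures follows sec:appendix_a, so treat the error-mask construction here as an assumption.

```python
import numpy as np
from scipy.ndimage import binary_opening

def inaccuracy_map(pred, gt):
    """Sketch: keep only segmentation 'failures' by removing thin boundary errors.

    pred, gt: binary 3D arrays (one organ); kernel size (3,3,1) as stated in the text.
    """
    errors = pred != gt                                                   # raw per-voxel errors
    failures = binary_opening(errors, structure=np.ones((3, 3, 1), dtype=bool))
    return failures
```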
§ EXPERIMENTS AND RESULTS
§.§ Datasets
§.§.§ Head-and-Neck CT
Our first dataset contained head-and-neck CT scans of patients from the RTOG 0522 clinical trial <cit.>. The annotated data, which had been collected from the MICCAI2015 Head and Neck Segmentation challenge, contained 33 CT scans for training, 5 for validation and 10 for testing <cit.>. We further expanded the test dataset with annotations of 8 patients belonging to the RTOG trial from the DeepMindTCIA dataset (DTCIA) <cit.>. This dataset included annotations for the mandible, parotid glands, submandibular glands and brainstem. Although annotations were also present for the optic organs, we ignored them in this analysis as they are small compared to the other organs and require special architectural design choices. Since the train and test patients came from the same study, we considered this an in-distribution dataset. We also tested our models on the STRUCTSeg (50 scans) dataset <cit.>, hereafter shortened to STRSeg. While the RTOG dataset contained American patients, the STRSeg dataset was made up of Chinese patients and is hence considered out-of-distribution (OOD) in the context of the training data. The uncertainty metrics for this dataset were evaluated up to a value of 0.4, since that is the maximum empirical normalized entropy.
§.§.§ Prostate MR
Our second dataset contained MR scans of the prostate, for which we used the ProstateX repository <cit.> containing 66 scans as the training dataset. The Medical Decathlon (Prostate) dataset with 34 scans <cit.> and the PROMISE12 repository with 50 scans <cit.> served as our test datasets. The Medical Decathlon dataset (abbreviated as PrMedDec henceforth) contained scans from the same clinic as the ProstateX training dataset. We combined the Peripheral Zone (PZ) and Transition Zone (TZ) of the PrMedDec dataset into a single segmentation mask. The PROMISE12 dataset (abbreviated as PR12) was chosen for testing since the literature <cit.> has shown lower performance on it, and it therefore serves as a good candidate to evaluate the utility of uncertainty. This dataset differs from ProstateX due to the use of an endo-rectal coil in many of its scans, as well as the presence of gas pockets in the rectum and dark shadows caused by older MR machines. Thus, although these datasets contain scans of the prostate region, there is a substantial difference in their visual textures. The maximum empirical normalized entropy of this 2-class dataset is 1.0, and hence the uncertainty-error correspondence metrics were calculated up to this value.
§.§ Experimental Settings
We tested the Accuracy-vs-Uncertainty (AvU) loss on four datasets containing scans of different modalities and body sites. We trained 11 models: Det (deterministic), Det+AvU, Ensemble, Focal, LS (Label Smoothing), SVLS (Spatially Varying Label Smoothing), MbLS (Margin-based Label Smoothing), ECP (Explicit Confidence Penalty), TTA (Test-Time Augmentation), Bayes and Bayes+AvU. As the names suggest, Bayes and Bayes+AvU are Bayesian versions of the deterministic OrganNet2.5D model <cit.>. The baseline Bayes model contained Bayesian convolutions in its middle layers and was trained using only the cross-entropy (CE) loss. The Bayes+AvU model was trained using both the CE and the Accuracy-vs-Uncertainty (AvU) loss. Two additional Bayesian models were trained to test whether the placement of the Bayesian layers has any effect: BayesH and BayesH+AvU. Here, BayesH refers to the Bayesian model with Bayesian layers in the head of the model (i.e., the decoder). Results for these models can be found in sec:app_bayesh.
The Ensemble was made of M=5 deterministic models with different initializations <cit.>. For TTA, we applied Gaussian noise and random pixel removals M=5 times each and then averaged the outputs. The hyperparameters of the other models were chosen on the basis of the best discriminative, calibrative and uncertainty-error correspondence metrics on the validation datasets (<ref>). For the calibration-focused methods we used the following hyperparameter ranges: Focal (γ=1,2,3), MbLS (m=8,10,20,30) for head-and-neck CT, MbLS (m=3,5,8,10) for prostate MR, LS (α=0.1, 0.05, 0.01), SVLS (γ=1,2,3), ECP (λ=0.1, 1.0, 10.0, 100.0) for head-and-neck CT and ECP (λ=0.1, 1.0, 10.0, 100.0, 1000.0) for prostate MR. For the AvU loss, we evaluated weighting factors from the set {10, 100, 1000, 10000} for the head-and-neck dataset, and {100, 1000, 10000} for the prostate dataset.
We trained our models for 1000 epochs using the Adam optimizer with a fixed learning rate of 10^-3. The deterministic model contained ≈ 550K parameters, and thus the Ensemble contained ≈ 2.75M parameters. Since Bayesian layers double the parameter count, the Bayesian models incurred an additional parameter cost and contained ≈ 900K parameters in total.
§.§ Results
In sec:results_hn and sec:results_pros we show discriminative (DICE), calibrative (ECE) and uncertainty-error correspondence metrics (ROC-AUC, PRC-AUC) for the two datasets.
§.§.§ Head-and-neck CT
Results in tab:results_table_hn showed that the AvU loss on the Bayes model significantly improved calibrative and uncertainty-error correspondence (unc-err) metrics for both the in-distribution (ID) and out-of-distribution (OOD) datasets. The Bayes+AvU model also always performed better than the Det, calibration-focused and TTA models on unc-err metrics, and its ECE scores were in most cases better than those of the calibration-focused models. However, there was no clear distinction between the performance of the Ensemble and the Bayes+AvU model for ECE and unc-err metrics across the two datasets. The AvU loss did not benefit the unc-err metrics of the Det model in either dataset. Of the calibration-focused models, LS had the lowest ECE but also the lowest unc-err metrics, while the ECP model had the best unc-err metrics. When compared to Det, the TTA model improved calibrative and unc-err metrics for the OOD dataset, while maintaining them for the ID dataset.
Visually, the Bayes+AvU model was able to successfully suppress uncertainty in the true positive (TP) (Case 1/2 in fig:results_visual_hn_rtog) and true negative (TN) (Case 3 in fig:results_visual_hn_rtog) regions of the predicted contour. Moreover, it also showed uncertainty in false positive (FP) regions while also suppressing uncertainty in TP regions (Case 3 in fig:results_visual_hn_structseg). Calibrative models (e.g. Focal, LS, SVLS) tended to be quite uncertain in TP or TN regions, which may lead to additional QA time. Detailed descriptions are provided in sec:app_visualresults.
§.§.§ Prostate MR
Similar to the head-and-neck CT dataset, the use of the AvU loss on the baseline Bayes model significantly improved its uncertainty-error correspondence (unc-err) while maintaining calibration performance (tab:results_table_pros). Moreover, it improved the DICE values such that it is one of the most competitive among all models. The Bayes+AvU model also had better performance in both unc-err and calibrative metrics compared to the Det, calibration-focused and TTA models. Compared to the Ensemble, the Bayes+AvU model had similar DICE. While Bayes+AvU had better calibrative and unc-err performance on the in-distribution (ID) dataset, the Ensemble performed better in the out-of-distribution (OOD) setting. The AvU loss had no positive effect on the DICE and unc-err performance of the Det model in either the ID or OOD setting; however, its ECE increased.
Visual results show that the Bayes+AvU model successfully suppresses uncertainty in the true negative (Case 1 in fig:results_visual_pros_prmeddec, Case 2 in fig:results_visual_pros_pr12) and true positive (Case 2 in fig:results_visual_pros_prmeddec) regions of the predicted contour. It also shows uncertainty in the false positive regions (Case 2 in fig:results_visual_pros_prmeddec, Case 1/3 in fig:results_visual_pros_pr12).
§ DISCUSSION
Although medical image segmentation using deep learning can now predict high quality contours which can be considered clinically acceptable, a manual quality assessment (QA) step is still required in a clinical setting. To truly make these models an integral part of clinical workflows, we need them to be able to express their uncertainty and for those uncertainties to be useful in a QA setting. To this end, we test 11 models which are either Bayesian, deterministic, calibration-focused or ensembled.
§.§ Discriminative and Calibrative Performance
In the context of DICE and ECE, applying the AvU loss to the baseline Bayes model never led to a statistically significant deterioration. Moreover, the DICE results for the in-distribution (ID) head-and-neck dataset (RTOG) were on par with existing state-of-the-art models (83.6 vs 84.7 for <cit.>). The same held for the ID prostate dataset (PrMedDec), where results were better than advanced models (84.9 vs 83.0 for <cit.>). These results validate the use of our neural architecture <cit.> and training strategy.
Secondly, although the Ensemble model in general had better or equivalent DICE and ECE scores across all 4 datasets, it also required 3x more parameters than the Bayes+AvU model. As expected, given its 5x larger parameter count, the Ensemble also performed better than the Det model on DICE and ECE.
Finally, when segmentation "failures" are used as the inaccuracy map, the calibration-focused methods did not generally improve calibration performance compared to the Det model. In theory, these methods regularize the predicted probabilities by making them more uncertain and hence avoid overconfidence. In practice, however, this leads to the predicted contours being uncertain along their accurate boundaries, most evident in the visual examples of the Focal and SVLS models (see fig:results_visual_hn and fig:results_visual_pros). Also, visual image characteristics in other regions of the scan that resemble the segmented organs may cause these models to show uncertainty in those areas (e.g., patches of uncertainty in Case 3 of fig:results_visual_hn_rtog).
§.§ Uncertainty-Error Correspondence Performance
Although calibrative metrics are useful for comparing the average truthfulness of a model's probabilities, they may not be relevant to real-world usage in a pixel-wise segmentation QA scenario. Considering a clinical workflow in which uncertainty can be used as a proxy for error detection, we evaluate the correspondence between them. Results showed that, across both in- and out-of-distribution datasets, the Bayes+AvU model has some of the highest uncertainty-error correspondence metrics. Similar trends were observed for the BayesH+AvU model (sec:app_bayesh); however, Bayes+AvU performed better. We hypothesize that this is because perturbations in the bottleneck of UNet-like models capture semantic concepts (e.g., shape, size) better than perturbations in the decoder layers. However, the AvU loss did not benefit the Det model on either dataset, indicating that this loss may rely on the model already exhibiting some level of uncertainty.
An interesting case is shown in fig:results_visual_hn_structseg (Case 3), where many models showed uncertainty on the white blob (a vein) in the middle of the grey tissue of the organ, due to the difference in texture between the vein and the organ. This information may be distracting to a clinician who is using uncertainty for error detection. Given that there were no segmentation "failures", our Bayes+AvU model successfully suppressed all uncertainties. In another case (fig:results_visual_hn_rtog - Case 3), we saw that for 3D segmentation, uncertainty is also 3D in nature. Our Bayes+AvU model had an error in the second slice and correctly showed uncertainty there. However, this uncertainty overflowed onto the first slice and was hence penalized by the uncertainty-error correspondence metrics. Such results indicate that during contour QA, the clinician can potentially trust the AvU-loss models more than other models, as their uncertainties are better indicators of potential errors. This reduces time wasted analyzing false positive regions (i.e., accurate but uncertain) and hence increases trust between an expert and deep learning-based contour QA tools. Also note that, in general, the two-class prostate dataset visually showcased higher levels of uncertainty than the six-class head-and-neck dataset.
As seen in tab:results_table_hn, tab:results_table_pros and fig:results_curves, there is no clear choice between the top two performing models, i.e., Bayes+AvU and Ensemble, for uncertainty-error correspondence. The visual results, however, indicate that the Ensemble model is more uncertain in accurate regions. Also, for all datasets, the Det model has high AvU scores compared to the Bayes+AvU model (sec:appendix_c). Here, it is important to consider that the AvU metric (eq:6_avu_terms) is essentially an uncertainty accuracy and thus comes with its own pitfalls: given that all models had high DICE values, which lead to many more accurate than inaccurate voxels, the AvU metric is skewed by the large count of n_ac terms. However, upon factoring in the ROC and PRC curves, it becomes evident that the Det model is not the best performing for uncertainty-error correspondence.
Finally, all calibration-focused methods (Focal, ECP, LS, SVLS and MbLS) had ROC and PRC metrics lower than the baseline Bayes model, indicating that training for model calibration may not necessarily translate into uncertainty outputs useful for error detection.
§.§ Future Work
In a radiotherapy setting, the goal is to maximize radiation to tumorous regions and minimize it for healthy organs. This goal is often not optimally achieved due to imperfect contours caused by time constraints and amorphous region-of-interest boundaries on medical scans. Thus, an extension of our work could evaluate the contouring corrections made by clinicians in response to uncertainty-proposed errors, in the context of the resulting dose changes to the different regions of interest. Such an experiment could better evaluate the clinical utility of an uncertainty-driven error correction workflow.
§ CONCLUSION
This work investigates the usage of the Accuracy-vs-Uncertainty (AvU) metric to improve clinical “utility" of deep Bayesian uncertainty as a proxy for error detection in segmentation settings. Experimental results indicate that using a differentiable AvU metric as an objective to train Bayesian segmentation models has a positive effect on uncertainty-error correspondence metrics. We show that our AvU-trained Bayesian models have equivalent or improved uncertainty-error correspondence metrics when compared to various calibrative and uncertainty-based methods. Given that our approach is a loss function, it can be used with other neural architectures capable of estimating uncertainty.
Given that deep learning models have shown the capability of reaching near expert-level performance in medical image segmentation, one of the next steps in their evolution is evaluating their clinical utility. Our work shows progress on this front using an uncertainty-driven loss in a Bayesian setting. We do this for two radiotherapy body sites and modalities, as well as in an out-of-distribution setting. Our hope is that the community is inspired by our positive results to further contribute to human-centric approaches to deep learning-based modeling.
The research for this work was funded by Varian, a Siemens Healthineers Company, through the HollandPTC-Varian Consortium (grant id 2019022) and partly financed by the Surcharge for Top Consortia for Knowledge and Innovation (TKIs) from the Ministry of Economic Affairs and Climate, The Netherlands.
The work follows appropriate ethical standards in conducting research and writing the manuscript, following all applicable laws and regulations regarding treatment of animals or human subjects.
We declare that we have no conflicts of interest.
§ SEGMENTATION “FAILURES" AND “ERRORS"
§ WEIGHTAGE OF AVU LOSS
The table below shows the weights used for the AvU loss, which were fine-tuned on the validation datasets of the head-and-neck CT and prostate MR. The final weighting was chosen by identifying the inflection point beyond which the ROC-AUC and PRC-AUC drop precipitously. Given that the AvU loss is a log term, its values are inherently small (≤ 1.0). It is added to the cross-entropy term, which is a sum of logs (Eqn (3)) over all the voxels (=N) and all the classes (=C). Thus, we used a balancing term in the range of 10^1 to 10^4.
§ HYPERPARAMETER SELECTION
In the tables shown below, we report results for different hyperparameters of the different model classes. If the DICE of a hyperparameter setting is 10.0 points lower than the class maximum, we ignore it. We also ignore models with large drops in ECE or AvU-AUC compared to other models in their class. To be chosen as the best, a hyperparameter setting has to perform best in four out of the five metrics; otherwise, we chose the middlemost hyperparameter.
§ VISUAL RESULTS
Visual results in fig:results_visual_hn and fig:results_visual_pros show pairs of consecutive CT/MR slices to better understand the 3D nature of the output uncertainty across all models. We show examples with both high and low DICE to investigate the presence and absence of uncertainty in different regions of the model prediction.
§.§ Head-And-Neck CT
The first two rows of fig:results_visual_hn_rtog and fig:results_visual_hn_structseg show the mandible (i.e. lower jaw bone) with only the Bayes+AvU model having overall low uncertainty in accurate regions and high uncertainty in (or close to) inaccurate regions.
In the next set of rows for the head-and-neck CTs, we observe the parotid gland, a salivary organ, with (fig:results_visual_hn_rtog - Case 2) and without (fig:results_visual_hn_structseg - Case 2, Case 3) a dental scattering issue. In both cases, while the Det model shows low uncertainty, the baseline Bayes model shows high uncertainty in accurate regions. Using the AvU loss lowers uncertainty in these regions, while still exhibiting uncertainty in the erroneous regions, e.g., the medial (i.e., inner) portion of the organ in fig:results_visual_hn_rtog (Case 2).
Moving on to our last case, we see the submandibular gland, another salivary gland, in fig:results_visual_hn_rtog (Case 3). The Ensemble, Focal, SVLS and MbLS models all display high uncertainty in the core of the organ, which is accurately predicted. The AvU loss, on the other hand, minimizes this uncertainty and instead shows uncertainty in the erroneous region on the second slice.
§.§ Prostate MR
For the prostate datasets, we see two cases with high DICE in fig:results_visual_pros_prmeddec (Case 1) and fig:results_visual_pros_pr12 (Case 2) where the use of the AvU loss reduces uncertainty for the baseline Bayes model.
We also see cases with low DICE in fig:results_visual_pros_prmeddec (Case 2) and fig:results_visual_pros_pr12 (Case 1). Due to the low DICE, all models display high uncertainty, but the Bayes+AvU model shows a high overlap between its uncertain and erroneous regions. The same is observed in fig:results_visual_pros_pr12 (Case 3).
Finally, in fig:results_visual_pros_prmeddec (Case 3), we do not see any clear benefit of using the AvU loss on the Bayes model.
§ BAYESH MODEL
CHAPTER: STAR FORMATION
[1] Rajika Kuruwita
[2] Łukasz Tychoniec
[3] Christoph Federrath
[1] Heidelberg Institute for Theoretical Studies, Stellar Evolution Theory, Schloß-Wolfsbrunnenweg 35, 69118 Heidelberg, Germany
[2] Leiden University, Leiden Observatory, PO Box 9513, 2300RA, Leiden, The Netherlands
[3] Australian National University, Research School of Astronomy and Astrophysics, ACT 2611, Australia
[Glossary]
Molecular cloud: a region of space primarily composed of hydrogen in its molecular state (H_2).
Pre-stellar core: an over-density of gas that does not have a star, but will collapse to form a star.
[Nomenclature]
MHD Magnetohydrodynamics
ISM Interstellar medium
AU Astronomical units
YSO Young stellar object
PDF Probability density function
SED Spectral energy distribution
§ ABSTRACT
In this chapter, we will cover how stars form from the stellar nurseries that are giant molecular clouds. We will first review the physical processes that compete to regulate star formation. We then review star formation in turbulent, magnetized molecular clouds and the associated statistics giving rise to the star formation rate and the initial mass function of stars. We then present the protostellar stages in detail from an observational perspective. We will primarily discuss low-mass (<1.5 M_⊙) stars. Finally, we examine how multiplicity complicates the single-star formation picture. This chapter will focus on star formation at redshift 0.
Key words: Astrophysical processes: astrophysical magnetism – gravitation; Interstellar medium: interstellar dynamics, molecular clouds; Protostars: young stellar objects.
§ LEARNING OBJECTIVES
* Understand the role of gravity, hydrodynamics, turbulence, radiation, and magnetic fields in star formation.
* Look at how the star-forming environments set the initial conditions for star formation and the rate of star formation.
* Review what observations of young stellar objects (YSOs) tell us about the star formation process.
* Explore how the picture of isolated star formation is complicated by the fact that most stars are born with siblings.
§ INTRODUCTION TO THE PHYSICS OF STAR FORMATION
The most abundant element in the universe is hydrogen (see `Big Bang Nucleosynthesis'), which typically exists in an ionized state as protons in low-density, hot regions (temperature T>10^4 K), or as atomic hydrogen with a proton and electron in higher-density, cooler (100 K ≲ T < 10^4 K) regions <cit.>. However, when the density is high enough and the temperature is sufficiently low <cit.>, hydrogen atoms can form the more stable H_2 molecule. These regions of higher density, where molecular hydrogen exists, are called `molecular clouds'. Stars form from over-densities of gas within these molecular clouds.
§.§ Gravity, gas dynamics and the formation of the first hydrostatic core
When a local over-density forms in a molecular cloud, it is typically called a `pre-stellar core'. At this stage, a star has not formed, but the pre-stellar core will collapse under its own gravity to start the star-formation process. Assuming that the pre-stellar core is mostly spherical, with no rotation, gravity is the only force acting on it. Also assuming that the gas is pressure-less, a star should form within a free-fall time, t_ff. The free-fall time for a spherical cloud of uniform density ρ is defined as
t_ff = √(3π/(32Gρ)),
where G is the gravitational constant. Observed pre-stellar cores have number densities of n_H2 ≃ 10^5 cm^-3 <cit.>, which translates to ρ ≃ 3.8 × 10^-19 g cm^-3. At this density, the free-fall time of a typical pre-stellar core is ≈ 0.1 Myr.
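As a quick check of these numbers, the free-fall time can be evaluated directly in cgs units; the snippet below is purely illustrative.

```python
import numpy as np

G = 6.674e-8                  # gravitational constant [cm^3 g^-1 s^-2]
rho = 3.8e-19                 # pre-stellar core density [g cm^-3]
t_ff = np.sqrt(3 * np.pi / (32 * G * rho))   # free-fall time [s]
print(t_ff / 3.156e13)        # in Myr  ->  roughly 0.1 Myr
```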
However, gas is not a pressure-less fluid, and as the pre-stellar core collapses, the outward force of the pressure gradient will counteract the inward gravitational force. When the collapse reaches gas densities of ≃10^-10 g cm^-3, the pressure force and the gravitational force become comparable and the fluid approaches hydrostatic equilibrium. This is defined as
dP(r)/dr = -G M(r) ρ(r)/r^2,
where P(r), ρ(r) and M(r) are the pressure, density, and enclosed mass at radius r. The hydrostatic object that forms from the pre-stellar collapse is called the `first hydrostatic core' <cit.>. The first hydrostatic core is not yet a star, because its radius is ∼4 AU. To describe the further collapse of the first hydrostatic core into a star, we must now understand how radiation plays a role.
§.§ Radiation and the formation of the second hydrostatic core
Electromagnetic radiation can be produced via two pathways: 1. converting mass into radiative energy through stellar nucleosynthesis/nuclear fusion (using Einstein's E=mc^2; see also `Evolution and final fates of low- and intermediate-mass stars', `Evolution and final fates of massive stars'), or 2. converting a different type of energy into radiative energy.
Conversion between energy types is an essential physical process. During the pre-stellar core collapse described in <Ref>, the gas has gravitational potential energy that is converted into thermal energy. This thermal energy manifests as gas pressure, which eventually slows down the pre-stellar collapse to form the first hydrostatic core. The question then is, how do we continue the collapse beyond the first hydrostatic core to form a star?
Thermal energy can be removed from the system via radiation. During the initial stages of pre-stellar collapse, the thermal energy from the gravitational collapse can be radiated away efficiently because the gas has a low optical depth (τ). This efficient radiative cooling means that the gas remains approximately isothermal until number densities of ∼10^10 cm^-3 are reached.
The optical depth is a measure of how easily photons can pass through a medium, telling us if a medium is opaque (or `optically thick'; τ≫1) or transparent (or `optically thin'; τ≪1). The optical depth is defined as
τ_λ = ∫_0^L nσ_λ dl,
where σ_λ is the cross-section of interaction of the particles in the medium with photons of wavelength λ, and n is the number density of particles. The optical depth is calculated by integrating these values over some line of sight L.
The interactions quantified by σ_λ are absorption and scattering events. An atom or molecule is more likely to absorb a photon if its energy matches a difference in electron shell energies, because this can excite electrons into higher energy states; this process produces absorption-line spectra. Scattering events depend on the size of the particles (r_p) and the wavelength. In the Rayleigh scattering regime, longer-wavelength photons can penetrate the medium further before an interaction than shorter-wavelength photons. For a hydrogen molecule, which has a size of 1.2×10^-8 cm, any wavelength from visible light (∼5×10^-5 cm) to radio waves (>10^2 cm) falls into the Rayleigh scattering regime.
In the optically thin regime, a photon of radiation can pass through gas unhindered, allowing for efficient radiative cooling. As stated previously, the temperature of molecular hydrogen in molecular clouds is quite cool, around 10-40 K, which according to Wien's law corresponds to a blackbody spectrum that peaks in the infrared (∼10^-2 cm). During the initial protostellar collapse, the number density of molecular hydrogen is low enough that the infrared radiation is optically thin, leading to efficient cooling.
In the optically thick regime, a photon has many interactions with particles in the medium being absorbed, re-emitted, and scattered. The trapping of radiation makes cooling very inefficient, and radiation is more likely to be converted to thermal energy increasing the temperature of the medium. This is what happens when the first hydrostatic core forms: the medium transitions from optically thin to optically thick.
The first hydrostatic core is still accreting mass from the surrounding pre-stellar core, which disturbs the hydrostatic equilibrium. With the increasing mass, the hydrostatic core will continue to contract under its own gravity. However, because the gas is optically thick, making radiative cooling inefficient, the temperature of the core increases during contraction.
Once the temperature at the center of the first hydrostatic core exceeds 2000 K, the H_2 molecules separate into individual atoms. When this dissociation happens, thermal energy is used to break the chemical bonds of the H_2 molecules to create atomic hydrogen. Because of this efficient use of thermal energy for H_2 dissociation, the system returns to a near-isothermal state and a second isothermal collapse is triggered. As with the first isothermal collapse, the second isothermal collapse will halt when the internal pressure increases and a new hydrostatic equilibrium is established.
The second hydrostatic core is a few solar radii large and is embedded in the first hydrostatic core, which the second core accretes from. Further accretion makes the second hydrostatic core undergo another adiabatic contraction, which increases the central density and temperature, ionizing the atomic hydrogen. When the central temperature exceeds 10^4 K the hydrogen is fully ionized. The higher density and temperature make the second core opaque to the radiation being produced in the center. Convection is triggered within the second hydrostatic core because of the high optical depth.
When most of the mass from the pre-stellar core has been accreted by the second core, we essentially have a protostar. This protostar will continue to contract, following the Hayashi track, decreasing in luminosity but maintaining the same surface temperature of ∼4000 K <cit.>. Low-mass protostars (M_⋆ < 0.6 M_⊙) remain fully convective and continue contracting along the Hayashi track until hydrogen fusion is triggered and the star reaches the main sequence. For more massive protostars, the central temperature increases more during the Hayashi-track contraction, and the core becomes radiative. These stars continue to contract slowly, but their surface temperature increases, such that the protostar evolves at roughly constant luminosity <cit.>, eventually also triggering hydrogen fusion and joining the main sequence. For further details on the later stages of star formation and stellar evolution, we refer the reader to `Evolution and final fates of low- and intermediate-mass stars' and `Evolution and final fates of massive stars'.
§.§ Magnetic fields and angular momentum transport
In the previous section, we focused on the collapse of a non-rotating cloud. However, the pre-stellar cores that form in giant molecular clouds inherit angular momentum from the turbulence in the parent cloud, such that the cores are always rotating. Even though these clouds initially have very low rotation, as they collapse, they rotationally flatten and spin up due to the conservation of angular momentum <cit.>. The collapse halts when the radius of the cloud matches the centrifugal radius. This is defined as the radius where the rotation rate equals the Keplerian velocity and is given by
r_c = j^2/(GM),
where M and j are the mass and mass-specific angular momentum of the pre-stellar core, respectively. However, based on observed rotation rates, for a 1 M_⊙ pre-stellar core the centrifugal radius is 2.7×10^6 R_⊙, which is significantly larger than a star, i.e., ∼1 R_⊙ <cit.>. Therefore, approximately 99.99% of the angular momentum must be removed for the gas to collapse down to stellar radii. This angular momentum is now understood to be removed via jets and outflows, which are the result of magnetic fields.
Magnetic fields permeate the universe and play an important role in the star formation process. Charged particles preferentially travel along magnetic fields. As stated earlier, most of the gas in the ISM exists in an ionized state, which is essentially full of positively charged protons, and negatively charged electrons. Due to the charged nature of the ISM, it is expected that there is a strong coupling between gas motions and magnetic fields. However, even in regions where the ionization fraction is very low, such as in the star-forming, molecular phase of the ISM, the coupling between ions and neutrals ensures that even the neutral gas is subject to the same Lorentz force as the ions.
Magnetic field lines have tension, similar to a rubber band. If a magnetic field line is perturbed by charged gas moving perpendicular to the line, the magnetic tension makes the field line want to return to its unperturbed state. The magnetic field line, bouncing back after being perturbed, excites waves that move along the magnetic field. These waves are called Alfvén waves, and they can remove angular momentum and energy from a collapsing pre-stellar core. As pre-stellar cores rotate and collapse, the gas drags the magnetic field along with the motion, building magnetic pressure. Magnetic pressure is defined as
P_B = B^2/(2μ_0),
where B is the magnetic field strength and μ_0 is the vacuum permeability. The force from this magnetic pressure gradient is the actual restoring force for the magnetic field lines and acts as the `tension' mentioned previously. The pressure gradient force F_B opposes the gravitational force and causes the field lines to `snap' back, triggering Alfvén waves that travel along the magnetic field lines with velocity v_A and reducing the rotational speed of the pre-stellar core. The velocity of Alfvén waves is given by
v_A = B/√(μ_0ρ),
where ρ is the mass density of the gas.
The energy for the Alfvén waves is converted from the kinetic energy of the ionized material in the pre-stellar core, and this is how magnetism can remove energy and angular momentum during the initial stages of pre-stellar core collapse; however, at later stages, other magnetic mechanisms become more dominant.
While magnetism can remove energy from the moving ionized gas in a collapsing pre-stellar core, strong magnetic fields can slow down, or even halt, the collapse due to magnetic pressure. As described in <Ref>, the collapse of the pre-stellar core due to gravity is counteracted by the gas pressure; magnetic pressure also contributes to counteracting gravitational collapse. The mass-to-flux ratio (M/Φ), where Φ is the magnetic flux through the pre-stellar core, has historically been used to quantify whether a pre-stellar core will collapse, or whether the magnetic pressure is high enough to prevent collapse. This ratio essentially quantifies the balance of the gravitational force against the force due to magnetic pressure. The critical mass-to-flux ratio ((M/Φ)_crit) was calculated by <cit.> to be 487 g cm^-2 G^-1, where cores that are supercritical (i.e., (M/Φ)>(M/Φ)_crit) will collapse, while sub-critical cores ((M/Φ)<(M/Φ)_crit) will not.
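This criterion can be applied to a core in a few lines; in the sketch below the core mass, radius and mean field strength are illustrative values (our assumptions), not values taken from the text.

```python
import numpy as np

# Is a core magnetically supercritical?  Illustrative cgs values.
M_core = 1.989e33             # 1 solar mass [g]
R_core = 0.05 * 3.086e18      # 0.05 pc [cm]
B = 30e-6                     # mean magnetic field strength [G]

flux = np.pi * R_core**2 * B              # magnetic flux through the core [G cm^2]
mass_to_flux = M_core / flux              # [g cm^-2 G^-1]
print(mass_to_flux / 487.0)   # > 1: supercritical (collapses), < 1: sub-critical
```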
As supercritical pre-stellar cores collapse, the rotation leads to the formation of a disk due to the conservation of angular momentum, but the magnetic field is also dragged along with the gas and is wound up within the disk. <Ref> shows this magnetic field morphology produced from simulations of the collapse of a rotating supercritical core. The left and middle panels show gas density projections of this core perpendicular and parallel to the rotation axis of the core, and the blue streamlines trace the magnetic field. The right panel shows a three-dimensional rendering of the magnetic field, with the color indicating magnetic field strength. We see that the magnetic field morphology is pinched inwards in the disk, and within the disk, the magnetic field is coiled up. A natural consequence of the coiling of magnetic fields is the production of protostellar jets and outflows. Multiple mechanisms have been proposed for launching outflows including the disk wind model <cit.>, the magnetic tower <cit.> and the `X-wind' model <cit.>. All of these models are likely to be present during the star formation process and act in different regimes. <Ref> describes how the protostar and circumstellar disk are connected, and where different outflows are launched.
The disk wind model: <cit.> calculated that centrifugally-driven outflows can be launched from a disk with a coiled-up magnetic field if the angle of the magnetic field to the disk mid-plane is less than 60^∘. The velocity of the outflows reflects the rotation profile of the disk, with faster-velocity outflows being launched at smaller radii, and lower-velocity outflows being launched at larger radii. These outflows are often called `winds' and are launched from the disk surface, as shown in <Ref>.
The magnetic tower describes the launching of outflows via a magnetic pressure gradient. <cit.> describe these as highly coiled-up magnetic structures and the pinching of magnetic fields to produce strong pressure gradients away from the disk, producing a force that significantly overcomes the gravitational force. Many ideal MHD simulations of molecular core collapse and protostar formation find that this mechanism is what drives the initial jet launching <cit.>. The magnetic tower is also likely acting along with the magneto-centrifugally driven disk winds of <cit.>.
The X-wind model mainly concerns the regions in the inner disk where the magnetosphere of the star threads through the disk, as shown in <Ref>. Because of the heat from the protostar, the inner disk is fully ionized, and there is strong coupling between the gas and the magnetic field. Due to the conservation of angular momentum, the protostar would typically rotate quickly, but because of this strong coupling between the protostar and the more slowly rotating inner disk, (magnetic) tension builds up between these two regions. This tension leads to the launching of jets by the X-wind mechanism. These jets have velocities of a few 100 km/s, similar to the velocities observed in protostellar jets.
The modern consensus is that it is a combination of the disk wind and X-wind model working together to produce the outflow features observed in protostars, with the X-wind producing the highly collimated jet and the disk-wind producing the lower-velocity outflow. Overall, magnetic fields play an important role in the removal of angular momentum, aiding pre-stellar collapse, and allowing protostars to accrete from their circumstellar disks.
§ STELLAR NURSERIES – MOLECULAR CLOUDS
Stars form in cold, turbulent, molecular clouds. We know this from molecular-line observations with radio and sub-mm telescopes (see also 'The interstellar medium'). These clouds consist mainly of molecular hydrogen, H_2, with carbon monoxide, CO, being the second-most abundant molecule. CO is typically used to measure the turbulent velocities in molecular clouds because, at the low temperatures of about 10 K, H_2 cannot emit photons due to its missing permanent dipole moment, while CO is easily excited, and the Doppler shift of its rotational lines can be used to measure the line-of-sight (LOS) velocity of the gas <cit.>.
Given the typical temperatures of molecular clouds, implying sound speeds of the order of c_s ≈ 0.2 km/s, and measured velocity dispersions of σ_v ≈ 0.2-10 km/s, the clouds are governed by supersonic turbulent motions with sonic Mach numbers of ℳ = σ_v/c_s ≈ 1–50. When studied over different length scales, the velocity dispersion follows a power-law relation with scale, ℓ,
σ_v(ℓ) = σ_v(L) (ℓ/L)^p ≈ 1 km/s (ℓ/pc)^p,
with p≈0.4–0.5 based on observations <cit.>. This power-law form is indeed similar to the power law obtained in the famous Kolmogorov model of turbulence (p=1/3) <cit.>, however, which strictly only applies to incompressible turbulence. Instead, the Burgers model of turbulence <cit.> may be more applicable here, as it is based on an ensemble of discontinuities (shocks), which corresponds to p=1/2. Reality likely sits in between those extremes, not to mention the added complication of intermittency and magnetic fields, with active research exploring turbulence models for this complex regime of compressible plasma turbulence <cit.>.
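As a simple worked example, the relation above can be evaluated for a few scales; the exponent p = 0.5 and the sound speed of ≈0.2 km/s (appropriate for ~10 K gas) are assumptions within the quoted ranges.

```python
import numpy as np

# Velocity dispersion-size relation and the corresponding sonic Mach number.
ell = np.array([0.1, 1.0, 10.0])          # scales [pc]
sigma_v = 1.0 * ell**0.5                  # velocity dispersion [km/s], assuming p = 0.5
mach = sigma_v / 0.2                      # sonic Mach number for c_s ~ 0.2 km/s
print(sigma_v, mach)                      # supersonic on cloud scales
```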
§.§ Turbulence-regulated star formation
Motivated by the fact that all star-forming clouds observed so far exhibit high levels of compressible turbulence, many authors have investigated star formation in turbulent media <cit.>. The compressible nature of turbulence gives rise to a characteristic gas density probability distribution function (PDF), enabling analytic estimates of the star formation rate (SFR) and the initial mass function (IMF; See also `Stellar initial mass function') of stars.
§.§.§ The gas density PDF
Turbulent isothermal gas can be well approximated by a log-normal density PDF <cit.>,
p(s) = (2πσ_s^2)^-1/2 exp[-(s - s_0)^2/(2σ_s^2)],
with the dimensionless logarithmic density contrast s = ln(ρ/⟨ρ⟩), mean density ⟨ρ⟩, mean log-density s_0 = -σ_s^2/2 <cit.>, and log-density variance <cit.>,
σ_s^2 = ln[1 + b^2 ℳ^2 (1+β^-1)^-1],
where β is the plasma beta (ratio of thermal to magnetic pressure; note that β→∞ if the magnetic field is zero). A more recent modification of this relation accounts for strong magnetic guide fields <cit.>. The parameter b in <Ref> is the turbulence driving parameter, which is controlled by the mixture of solenoidal vs. compressive modes in the driving mechanism of the turbulence <cit.>. Purely solenoidal (divergence-free) driving has b∼1/3, while purely compressive (curl-free) driving has b∼1 <cit.>. Modifications of <Ref> can be made to account for non-isothermal gas conditions <cit.>, and where intermittency plays a role <cit.>.
§.§.§ The star formation rate
The modern theory of star formation is based on the turbulent density PDF. A key step in turbulence-regulated theories of the SFR and IMF is to estimate the fraction of dense gas that can form stars, and this is exactly what <Ref> and (<ref>) can provide. To derive a rate at which this dense gas turns into stars, we need to divide the dense gas fraction by the freefall time, which, using the definitions in Sec. <ref>, gives the basic expression for the SFR per average freefall time ⟨t_ff⟩, SFR_ff, for a cloud of mass M_c <cit.>,
SFR_ff = SFR ⟨t_ff⟩/M_c
       = (ε/φ_t) ∫_{s_crit}^∞ (ρ/⟨ρ⟩) (⟨t_ff⟩/t_ff(ρ)) p(s) ds
       = (ε/φ_t) ∫_{s_crit}^∞ exp(3s/2) p(s) ds
       = ε/(2φ_t) exp(3σ_s^2/8) [1 + erf((σ_s^2 - s_crit)/√(2σ_s^2))],
where t_ff(ρ) = √(3π/(32Gρ)) as defined in <Ref>, ε is the star-to-core mass ratio <cit.>, and φ_t ∼ 2 is a numerical correction factor, calibrated in simulations <cit.>. Finally, the critical (logarithmic) density for star formation, s_crit, is given by
s_crit = ln[(π^2/5) φ_x^2 α_vir ℳ^2 (1+β^-1)^-1],
which is obtained by comparing the Jeans length <cit.> with the turbulent sonic scale <cit.>, which marks the transition from supersonic turbulence on cloud scales to subsonic turbulence inside the dense star-forming cores and accretion disks. Thus, the critical density is a result of the competition between gravity and turbulence, which gives rise to the virial parameter α_vir = 2E_kin/E_grav in <Ref>, the ratio of twice the kinetic to the gravitational energy of the cloud. The numerical correction factor φ_x ∼ 0.2 accounts for a slight mismatch between the sonic and Jeans scales when forming s_crit, and can be determined by calibration with numerical simulations <cit.>.
A key prediction of this theory is that the SFR depends on 4 basic cloud parameters, namely the virial parameter (α_vir), the sonic Mach number (ℳ), the turbulence driving mode (b), and the magnetic plasma beta (β). For instance, keeping all parameters fixed at typical cloud values (α_vir ∼ 1, ℳ ∼ 10, β ∼ 0.3), except for the driving mode, <Ref> predicts an SFR that is a factor of ∼2.4 higher for compressive driving compared to solenoidal driving. With the associated reduction in α_vir for compressive driving, due to the stronger local compressions leading to a higher overall binding energy of a cloud, compressive driving can yield an order of magnitude higher SFR than solenoidal driving <cit.>.
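This prediction can be reproduced by evaluating the expression for SFR_ff directly. In the sketch below, the calibration factors ε, φ_t and φ_x are set to commonly used values (our assumption), and the cloud parameters follow the example above; with these assumptions, the compressive-to-solenoidal ratio comes out close to the factor quoted in the text.

```python
import numpy as np
from scipy.special import erf

def sfr_ff(b, mach=10.0, alpha_vir=1.0, beta=0.3, eps=0.5, phi_t=2.0, phi_x=0.2):
    """Evaluate the multi-freefall SFR per freefall time for given cloud parameters.

    b: turbulence driving parameter (~1/3 solenoidal, ~1 compressive).
    eps, phi_t, phi_x are calibration factors; the values here are assumptions.
    """
    mag = 1.0 / (1.0 + 1.0 / beta)                    # magnetic correction (1 + beta^-1)^-1
    sigma_s2 = np.log(1.0 + b**2 * mach**2 * mag)     # log-density variance
    s_crit = np.log(np.pi**2 / 5 * phi_x**2 * alpha_vir * mach**2 * mag)
    return (eps / (2 * phi_t) * np.exp(3.0 / 8.0 * sigma_s2)
            * (1 + erf((sigma_s2 - s_crit) / np.sqrt(2 * sigma_s2))))

print(sfr_ff(b=1.0) / sfr_ff(b=1/3))   # compressive vs. solenoidal: roughly a factor of 2-3
```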
§.§ The initial mass function of stars
The initial mass function (IMF) is the distribution of the birth mass of stars. `Stellar initial mass function' in this series provides more information, so we focus here on the link between turbulence and the IMF and present a brief summary of the physics primarily involved in controlling the IMF.
§.§.§ Basic characterization
The IMF is usually characterized by a power-law section for masses ≳1 M_⊙ <cit.>, and a log-normal (or several power-law sections that may be approximated by a log-normal) turnover toward smaller masses, into the brown-dwarf regime <cit.>, which is difficult to constrain exactly due to the uncertainties involved in observing low-mass (low-luminosity) stars. The peak (or characteristic mass) of the IMF is around a few tenths of a solar mass.
§.§.§ Physical processes
The IMF is controlled by a combination of physical processes. Gravity and turbulence play a central role <cit.>. However, magnetic fields and radiation <cit.> are also key ingredients, as they tend to reduce fragmentation. Moreover, magnetic fields produce powerful jets and outflows from the accretion disk around a newborn star, removing mass from the disk and core, and thereby significantly contributing to setting the final mass of a young star <cit.>.
In <Ref> we saw that the turbulent density distribution determines the amount of dense gas eligible for star formation, thereby determining the SFR. Similar considerations hold for the IMF, with most modern theories of the IMF relying on the same underlying physics that gives rise to the turbulent gas density distribution described by <Ref>. Here we highlight one aspect of this distribution, namely its width, which is crucially determined by the driving mode of the turbulence (the b parameter in <Ref>). Using numerical simulations, <cit.> showed that the IMF depends on the driving mode (solenoidal vs. compressive) of the turbulence. This is shown in <Ref>. We see that compressive driving produces substantially stronger density fluctuations than solenoidal driving. The IMFs resulting from several sets of simulations with different random seeds yield a total of 468 and 445 stars, with median stellar masses of (0.4±0.1) M_⊙ and (0.6±0.2) M_⊙ for compressive and solenoidal driving, respectively. This shows that turbulence is a key ingredient for the IMF, and variations in the driving mode of the turbulence may produce significant variations in the IMF.
§.§ Feedback processes
While the interplay of turbulence and gravity is a key controller of star formation, as discussed in the previous two subsections, stellar feedback processes also play a crucial role. We broadly distinguish mechanical and radiative forms of feedback.
Mechanical feedback is the redistribution of mass and momentum by jets and outflows from the accretion disk around protostars (see Sec. <ref>) or from supernova explosions. Jets and outflows are particularly relevant for the SFR and IMF, in that they limit the amount of material that can be accreted onto the protostar by about a factor of 2, therefore slowing down star formation <cit.>. Moreover, the mechanical nature of this type of feedback can cause coherent accretion streams to break, thereby inducing additional fragmentation, which, together with the direct limiting effect on accretion, leads to an overall reduction of the average stellar mass by a factor of ∼3 <cit.>.
Radiative feedback describes the heating, ionization, and/or radiation pressure induced by stars. This form of feedback, in particular direct radiation pressure and reprocessed ionizing radiation from massive stars, also causes a mechanical effect in that the radiation force can push on the dust <cit.>, forming expanding shells around HII regions and sculpting dense structures such as the Pillars of Creation. Evolved stars drive winds throughout most of their lifetime, re-injecting material (in particular metals), momentum and energy into the ISM. While the aforementioned radiative feedback processes are primarily relevant for massive stars, heating feedback is crucial for all young stars, including low-mass stars. Accretion causes a local (≲0.1 pc) heating effect around young stars, which limits fragmentation of the surrounding gas, thereby significantly controlling the low-mass end of the IMF <cit.>.
Finally, all the mechanical feedback types, as well as the radiative ones that cause a mechanical effect can drive turbulence <cit.>, thereby closing a feedback loop, in which turbulence is responsible for regulating star formation as described in Sec. <ref> and <ref> above.
§ OBSERVATIONAL VIEW OF THE STAR FORMATION PROCESS
In this section, we describe the observational constraints on the formation of a single Solar-like stellar system. Due to the embedded nature of protostellar sources, their studies are mostly conducted at infrared and longer wavelengths, as the young protostars are often in the densest part of the cores from which they form, where extinction inhibits observations at shorter wavelengths. The focus of this section is the protostellar stages (i.e., Class 0 and Class I objects). The evolution of later stages (pre-main sequence stars), protoplanetary disks, as well as planet formation, is covered in `Protoplanetary disk origins and free-floating exoplanets', `Protoplanetary disk chemistry and structure', and `Planet formation mechanisms'.
§.§ Protostellar evolutionary path and classifications
The protostellar evolution is divided into classes based on observed properties. The different observational properties used for protostellar evolutionary characterization are summarized in Table <ref>. These empirical classes of evolution were first introduced based on observations of a near-infrared spectral index between 2 and 20 μm, defined as
α _IR= d log (λ F_λ)/d log (λ),
for flux F_λ at wavelength λ <cit.>. The spectral index changes as the protostar gains mass, disperses the envelope and forms a protostellar disk. The youngest protostars show a redder (positive) spectral index, with increasing brightness towards longer wavelengths, and more evolved sources have a negative spectral index as the stellar spectral energy distribution (SED) approaches that of a main-sequence star.
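As a minimal illustration, the spectral index can be estimated from two photometric points; the wavelengths, fluxes and class boundaries below are illustrative assumptions following common conventions rather than values from the text.

```python
import numpy as np

# Estimate the infrared spectral index from two photometric points (made-up fluxes).
lam = np.array([2.2, 24.0]) * 1e-6          # wavelengths [m]
F_lam = np.array([1e-14, 5e-13])            # flux densities F_lambda [W m^-2 m^-1]
alpha_ir = np.diff(np.log10(lam * F_lam)) / np.diff(np.log10(lam))
print(alpha_ir)   # > 0.3: Class I, -0.3 to 0.3: flat spectrum, < -0.3: Class II/III (common convention)
```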
However, this characteristic does not account for the existence of even younger objects, Class 0 protostars, which are often too cold and visually extinct to emit in the near-IR regime <cit.>. Flat-spectrum sources were also later distinguished as a transition between Class I and Class II sources <cit.>.
Protostars have also been categorized by their bolometric temperature T_bol, which is defined as the temperature of a blackbody with the same mean wavelength as the SED of the protostar <cit.>. Based on that classification, the Class 0 stage can be distinguished as having T_bol ≤ 70 K. Both of the described methods, however, rely on observable properties of the system, where, for example, the inclination of the protostellar disk can alter the measured infrared spectral index <cit.>. Another method compares the contribution of the sub-millimeter luminosity to the bolometric luminosity of the source, where the most embedded sources emit at least 0.5% of their luminosity at wavelengths longer than 350 μm <cit.>.
Another way to classify protostellar sources is by their physical parameters instead of observed properties, which provide more descriptive characteristics of the state of the system <cit.>. These physical classifications are illustrated in <Ref>. In Class 0, most of the system's mass is still in the envelope; Class I marks the transition where disk mass is comparable to or greater than the mass of the envelope, while most of the system's mass is already in the central star; Class II sources have a negligible envelope, with the gaseous disk still present, but its mass is much lower than the mass of the central star; by Class III the star is a pre-main sequence object and the disk is gas-less and of negligible mass.
A protostellar system comprises different physical components that can be observationally characterized using various molecular tracers. These tracers are summarised in <Ref>. In the following sections, we describe the key characteristics and evolutionary trends in each of those components.
§.§ Protostellar envelope
Sub-millimeter single-dish and interferometric continuum observations, sensitive to cold grains, are widely used tools to recover the dust structure of envelopes. Observations find that protostellar envelopes have radii of several 1000 AU. Inferred from dust emission, the density profiles of protostellar envelopes often follow a radial density profile close to ρ ∝ r^-2 <cit.>, which is consistent with the theoretical predictions described in <cit.>, but the shallower ρ ∝ r^-3/2 profile of inside-out collapse <cit.> is also observed <cit.>. Dust properties are typically similar to those of the interstellar medium; however, in the inner envelope, signatures of grain growth can be observed <cit.>.
Gas in protostellar envelopes is traced almost exclusively at (sub-)millimeter wavelengths due to the very low temperatures of the order of 10-20 K. Velocity-resolved observations of emission lines can trace the infall and rotation of the envelope and can be used to constrain the angular momentum <cit.>. At densities of 10^4-10^5 cm^-3, the freeze-out timescales of the gas become shorter than the envelope lifetime, and certain gas species freeze out onto the dust grains. For example, the freeze-out temperature of carbon monoxide (CO) is 20-25 K, while water (H_2O) freezes at temperatures below 100 K. The depletion of CO and H_2O from the gas to the ice phase causes a rise in the emission of molecules that are otherwise efficiently destroyed through gas-phase reactions with these species. Such tracers are DCO^+ and N_2H^+, which trace CO freeze-out, and H^13CO^+, which traces H_2O freeze-out <cit.>. The frozen molecules cannot be traced with emission spectroscopy, but they absorb light, especially in the infrared regime. Combined with laboratory characterization of ice mixtures, a detailed composition of the ice mantles in envelopes around protostars can be obtained <cit.>.
The envelope dissipates during protostellar evolution as material is delivered to the disk and star. At the same time, protostellar outflows and jets open up outflow cavities and expel a large amount of material from the system. On the other hand, streamers of gas from larger cloud scales can replenish the envelope with material at various stages of evolution <cit.>.
§.§ Outflows and jets
Outflows are one of the first signs of a new star being born, as they expel gas away from the deeply embedded protostar. As the outflow propagates at supersonic velocities, it creates shocks with the surrounding medium. Therefore, high-temperature tracers such as H_2 rotational transitions are commonly used to study the shocked gas. Shocks disrupt dust grain mantles and cores, releasing material that would, in quiescent ISM conditions, remain in the solid phase. Therefore, SiO molecular gas, or atomic and ionized emission from Si, Fe, and Ni, is observed in shocked gas.
Observationally, outflows are typically divided into a high-velocity (>30 km/s), highly collimated component, often called jets, and a low-velocity (<30 km/s), wide-angle component, sometimes called winds. The low-velocity component is expected to trace envelope material entrained by the faster component, or the disk wind, which is gas released directly from the protoplanetary disk. The low-velocity outflow is traced by rotational transitions of CO. In some young outflows, the outflow can also be traced by more complex species such as CH_3OH and H_2CO, which trace the sputtering of grains in low-velocity shocks at the outflow cavity walls.
Detailed studies of jet kinematics can inform us about their precise physical origin and launching mechanism (see <ref>). The chemical content of jets undergoes evolution. Molecules such as CO, SiO, and SO are mostly detected in very young Class 0 sources. This is likely because high number densities, of the order of 10^6 cm^-3, are required for efficient gas-phase formation of molecules from initially atomic material. Further into the evolution, the neutral ([O I], [Ni I], [Cl I], [S I]) and ionized ([Fe II], [Ne II], [Ar II]) components of the jet become dominant <cit.>. The prominent refractory content of the jet material suggests that jets either launch from the inner regions of the disk, or that dust grains are launched and efficiently destroyed in the jet.
Apart from chemical evolution, jets and outflows also significantly change their energetic and mass output during protostellar life. The youngest protostars drive the most energetic outflows, and the total outflow force is found to be correlated with protostellar luminosity, indicating a strong relation between accretion and ejection activity of the protostar <cit.>. This correlation between outflow and accretion rate is used to design simulation sub-resolution models of jets and outflows <cit.>.
Since outflows are expected to remove angular momentum, detecting their rotational signature is a crucial observational test. Rotation of the jet and wind has been observed <cit.>, indicating that angular momentum is indeed removed with the outflow.
Jets launched from the inner regions of the protoplanetary disks often form internal shocks, which are characterized by high densities, where molecules can efficiently form. Those shocks are caused by internal variations of the jet velocity, which occur due to accretion variability. Because of that, jets are fossil records of the accretion process, revealing that the protostellar accretion process is highly variable in nature <cit.>.
§.§ Protostellar accretion
Most of the stellar mass is assembled during the early stages of evolution (Class 0/I). Direct observations of the protostar remain a challenge since protostars are deeply embedded. Nevertheless, in recent years significant progress has been made to extract stellar properties <cit.>. Hydrogen recombination lines, which are tracers of high-density and high-temperature gas, are used to probe the accretion onto the protostar. With a combination of bolometric luminosity estimates and infrared photometry, stellar properties can be constrained.
Measured accretion rates are often lower than expected, considering the duration of accretion and the final masses of stars from the initial mass function <cit.>. This discrepancy between the observed accretion rate of young stars being significantly lower than expected from models is called the Luminosity Problem. A solution to this problem is that protostars accrete a significant portion of their mass during periods of high accretion, such as outbursts or in the initial stages of protostar formation. Protostellar accretion is, therefore, a highly variable process that evolves dramatically during protostellar life <cit.>.
§.§ Embedded disk
In the inner regions of the envelope, the velocity profile changes as the forming circumstellar accretion disk follows Keplerian rotation. Several young disks have their Keplerian rotation characterized in observations. However, it remains a challenge as most of the dust disks are small, with radii ≤50 au <cit.>. With different tracers such as formaldehyde (H_2CO) or optically-thin isotopologues of CO, it is possible to study the temperature of the disk <cit.>.
Dust masses of the young Class 0 and Class I disks are fundamental to estimating the total budget of building blocks of planets. However, they are difficult to constrain as the young disks are optically thick and hard to discern from the surrounding envelope. Observations at longer wavelengths (∼1 cm) can mitigate those issues and have been used to constrain masses of the order of 50 to 150 Earth masses <cit.>.
This is a factor of 5 to 20 more than typical masses of Class II disks <cit.>.
The available mass budget, grain growth observed in Class I systems, and substructures omnipresent in Class II disks suggest that planet formation should already begin early. These structures, such as gaps and spirals, are rarely observed in Class 0, while they appear to be more common in the Class I stage, suggesting an evolution of disks potentially shaped by planets <cit.>. For further details on protoplanetary disks and planet formation, we refer to `Protoplanetary disk chemistry and structure' and `Planet formation mechanisms'.
§ MULTIPLICITY AND THE FORMATION OF BINARY/MULTIPLE STAR FORMATION
<Ref> and <ref> have focused on the formation of a single star; however, many stars exist in binary or multiple star systems (<cit.>; see also `Observing binary stars'). <Ref> compiles observational surveys of main sequence stars, with the left panel showing the fraction of stars of mass M that are in binary or higher (thick crosses), or triple or higher (thin crosses) star systems. The right panel shows the companion frequency as a function of star mass. We see that many stars can exist in binary or multiple-star systems, with more massive stars being more likely to have companions. The actual fraction of all stars that are in multiple star systems is sensitive to the initial mass function (see <Ref>); however, it is accepted that a significant number of stars are in multiple star systems, and their formation must not be ignored when understanding star and planet formation.
We also find that most stars are born with a companion, with multiplicity being highest in the protostellar Class 0 (see <Ref>), decreasing as we look at more evolved protostars. This means that many of the stars in these Class 0 multiple-star systems will interact and get ejected as they evolve towards the main sequence, or maybe even merge to form more massive stars <cit.>. These interactions early on can affect the disks around the protostars and affect the sites of planet formation. Many stars that are single on the main sequence may have begun their life in a multiple-star system and were ejected through complex orbital dynamics.
Observations of separations in young binary and multiple star systems in star-forming regions find a bimodal distribution with one peak at ∼100 au and another at ∼3000 au, as seen in <Ref>. When this bimodal distribution was first observed, the origin of the two peaks was attributed to two formation pathways for multiple star systems: 1. pre-stellar core fragmentation, and 2. circumstellar disk fragmentation.
§.§ Core fragmentation
Core fragmentation was used to explain the separation peak at ∼3000 au because this formation pathway acts on larger scales of 100s to 1000s of au. As stated throughout this chapter, molecular clouds are turbulent, and this turbulence can create over-densities that make pre-stellar cores collapse to form a star. However, pre-stellar cores also have sub-sonic turbulence, which may seed further over-densities that can fragment to form stars. The description of pre-stellar collapse in <Ref> starts with a spherical cloud; however, turbulence in the ISM can seed the formation of filaments <cit.>, from which most pre-stellar cores fragment. This is seen in <Ref>, with an observed filament (leftmost panel) versus a modeled filament and cores (rightmost panel). This elongation adds asymmetry, which can seed fragmentation, along with the turbulent nature of the cores.
Fragmentation in hubs, where filaments intersect, is also observed. This is seen in <Ref>, where at low resolution (center panel) fragmentation is observed, and at higher resolution, further hierarchical fragmentation is also observed. The filaments that feed these hubs inject turbulent energy, which can lead to fragmentation.
The fragments that form along a filament can dynamically fall towards each other because the relative velocity of the fragments to each other is low, and stars that form from core fragmentation in hubs will likely also experience complex dynamical interactions. While the larger peak in the separation distribution at ∼3000 au is attributed to core fragmentation, many multiple star systems that form via this pathway often inspiral to smaller separations, even down to <100 au <cit.>.
§.§ Disk fragmentation
As described in <Ref>, circumstellar disks are a natural consequence of pre-stellar core collapse, and under the right conditions, fragmentation can occur within these disks to form new stars. The separation peak at ∼100 au has been attributed to disk fragmentation because this formation pathway acts on disk scales. Circumstellar disks can extend up to ∼600 au, with mean disk sizes in the Class 0/I stage being around ∼75 au <cit.>.
The Toomre Q <cit.> is a quantity that is often used to measure the stability of a disk, defined as
Q = c_s Ω/(π G Σ),
where c_s is the sound speed, Ω is the angular frequency, and Σ is the gas surface density of the disk. A parcel of gas is considered to be stable if Q≫1, and unstable and prone to collapse if Q≪1. The Toomre Q essentially measures a ratio of how pressure and rotationally supported the disk is against its own gravity. A disk that is Toomre unstable can become stable by either rotating faster (increasing Ω), reducing its surface density, or being hotter, i.e., having a higher c_s. This definition ignores other forms of support against fragmentation, specifically any type of non-thermal pressure. Sources of non-thermal pressure include magnetic pressure, turbulent pressure, and radiation pressure. The latter two in particular, as well as magnetic tension, however, can have significant anisotropic effects, so simply adding them as an isotropic pressure contribution to Q may be too simplistic. A Toomre Q that has magnetic pressure added has been derived from MHD simulations <cit.>, and is defined by multiplying the Toomre Q by a scaling factor that arises from adding the thermal and magnetic pressures <cit.>,
Q_B = Q√(1 + β^-1),
where β is plasma beta, as defined in <Ref>.
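As a quick numerical illustration of these criteria, the sketch below evaluates Q and the magnetized Q_B for a made-up disk annulus; all input values are illustrative assumptions and are not taken from the text.

```python
import numpy as np

G = 6.674e-8            # cm^3 g^-1 s^-2
k_B = 1.381e-16         # erg K^-1
m_H = 1.673e-24         # g

def toomre_q(T, Sigma, Omega, mu=2.3, beta=None):
    """Q = c_s * Omega / (pi * G * Sigma); if beta is given, return the
    magnetized variant Q_B = Q * sqrt(1 + 1/beta)."""
    c_s = np.sqrt(k_B * T / (mu * m_H))          # isothermal sound speed, cm/s
    Q = c_s * Omega / (np.pi * G * Sigma)
    return Q if beta is None else Q * np.sqrt(1.0 + 1.0 / beta)

# Illustrative disk annulus: T = 20 K, Sigma = 30 g cm^-2, Omega = 2e-10 s^-1.
print(toomre_q(20.0, 30.0, 2e-10))               # ~0.85 -> marginally unstable
print(toomre_q(20.0, 30.0, 2e-10, beta=10.0))    # ~0.89 -> mild magnetic support
```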
The ideal disks for fragmentation are massive, cold disks, which are relatively rare. Radiation feedback from the central star is expected to provide thermal support against disk fragmentation <cit.>. However, stars accrete episodically (see <Ref>); therefore, circumstellar disks can go through cycles of heating and cooling, and if the time between accretion events is sufficiently long, disks may cool temporarily to become unstable <cit.>.
§.§ Evolution of young multiple star systems
After fragmentation into binary or multiple-star systems, these stars also interact. Simulations of the formation of eccentric binaries find that accretion bursts can be triggered at periastron (the closest separation) because the companion star disrupts the circumstellar disk <cit.>. Accretion bursts can also be triggered by the flyby of unbound stars <cit.>, which is not uncommon in clustered star-forming environments. Most binaries have orbital periods that are significantly longer than a human lifetime; therefore, it has been difficult to directly observe companion-triggered accretion, but there are a handful of short-period young binaries where companion-triggered accretion is observed over multiple orbits <cit.>.
Interactions between stars can also truncate circumstellar disks. Simulations find that the radius of circumstellar disks is truncated to approximately a third of the binary separation <cit.>. Thus, truncation can shorten the lifetime of the disk and potentially hinder planet formation.
While circumstellar disks (the disks around individual stars) can be truncated or destroyed by binary-star interactions, simulations find that the formation of circumbinary disks is ubiquitous. Circumbinary disks can form either via the inspiral of binaries formed through core-fragmentation <cit.>, or through disk fragmentation <cit.>. Observations find that many of the largest protostellar disks are circumbinary disks <cit.>, and some are unusually old (>10 Myr), for example, AK Sco (18±1 Myr; ), HD 98800 B (10±5 Myr; ) and V4046 Sgr (12–23 Myr; ). The size and persistence of circumbinary disks may provide an ideal environment for planet formation. For details on accretion from circumbinary disks, we refer the reader to `Circumbinary Disk Accretion'.
Young multiple-star systems can experience complex orbital dynamics such as higher-order systems ejecting companions, and new multiple-star systems forming through dynamical capture. Approximately one-third of binaries are estimated to not have been born together based on observations <cit.> and simulations <cit.>. Dynamical capture is likely to be easier in star-forming environments because these young stars are actively accreting from their gaseous environments, which produce dynamical drag. Simulations of binaries that formed via core fragmentation also find that in-spiraling halts when the binary is no longer embedded in a dense gaseous environment <cit.>, highlighting that early stellar dynamics are strongly influenced by gas dynamics. Once these young multiple-star systems have accreted mass and are no longer embedded, they are not expected to evolve much dynamically. For details on the evolution of multiple star systems after their initial formation, we refer readers to `Evolution of binary stars'.
§ CONCLUSIONS
Stars form in turbulent environments with a complex interplay of different physics. At the beginning of this chapter, we reviewed the role of gravity, hydrodynamics, radiation, and magnetism in the collapse of a pre-stellar core into a star. We find that the collapse of a core by gravity is counteracted by gas pressure, radiation feedback, and magnetic pressure on different scales, but efficient radiative cooling, and angular momentum removal by magnetic fields, jets, and outflows can also aid pre-stellar collapse. We provided a summary of the physics of molecular clouds in which stars form, and how the interplay of gravity, turbulence, magnetic fields, and stellar feedback in the form of jets/outflows and radiation controls the star formation rate (SFR) and the initial mass function (IMF) of stars. We then reviewed what observations can tell us about the star formation processes. Sub-millimeter and infrared studies reveal a complex interplay of different components of the protostellar systems. Observations of gas kinematics can constrain theoretical predictions on the origin of protostellar jets and outflows, while thermal dust continuum observations deliver constraints on the onset of planet formation. Finally, we highlighted how the formation of multiple star systems complicates our single-star picture of star formation. We emphasize that most stars are born with companions and why and how this may affect planet formation.
[Acknowledgments]
R.L.K. acknowledges funding from the Klaus Tschira Foundation.
C.F. acknowledges funding provided by the Australian Research Council (Discovery Project DP230102280), and the Australia-Germany Joint Research Cooperation Scheme (UA-DAAD). L.T. is supported by the Netherlands Research School for Astronomy (NOVA).
|
http://arxiv.org/abs/2409.02312v1 | 20240903215055 | Investigating Mixed Reality for Communication Between Humans and Mobile Manipulators | [
"Mohamad Shaaban",
"Simone Macci`o",
"Alessandro Carf`ı",
"Fulvio Mastrogiovanni"
] | cs.RO | [
"cs.RO"
] |
Topological characterization of modified Kane-Mele-Rashba models via local spin Chern marker
Tarik P. Cysne
September 9, 2024 – Version 1.0
============================================================================================
§ ABSTRACT
This article investigates mixed reality (MR) to enhance human-robot collaboration (HRC). The proposed solution adopts MR as a communication layer to convey a mobile manipulator's intentions and upcoming actions to the humans with whom it interacts, thus improving their collaboration. A user study involving 20 participants demonstrated the effectiveness of this MR-focused approach in facilitating collaborative tasks, with a positive effect on overall collaboration performance and human satisfaction.
Human-Robot Collaboration, Mixed Reality, Software Architecture.
§ INTRODUCTION
Human-robot collaboration (HRC) is becoming increasingly important in the era of human-centred production and smart factories, both from scientific and industrial standpoints.
In HRC, humans and robots share physical space and duties <cit.>, combining the benefits of human cognitive ability with machine efficiency and speed.
However, this new paradigm introduces several technical challenges, from ensuring human teammate's safety throughout the collaboration process to developing efficient communication interfaces for hybrid human-robot teams. Efforts to enhance human safety in human-robot collaboration (HRC) have led to various solutions. These include strategies to minimize collision risks by predicting human space occupancy <cit.>, as well as techniques for detecting and mitigating contact <cit.>. Communication also plays a vital role in ensuring smooth interactions. Sharing information about the robot's intents is important and enables humans to anticipate and react accordingly. Researchers have investigated several methods to provide humans with feedback about the robot's internal state either through visual <cit.> or vocal <cit.> feedback during HRC processes.
In this work, we explore how Mixed Reality (MR), as a communication medium that projects holographic representations of data into the real environment in a coherent, contextual way, can enhance human-robot collaboration (HRC). MR has been explored as a tool for effective communication between humans and robots, particularly using head-mounted displays (HMDs). These devices hold the potential to make the robot's internal state intuitively understandable to human collaborators by projecting its planned actions as holograms. By understanding a robot's intended actions, humans can better coordinate their own, leading to smoother and safer interactions. This clarity reduces the risk of accidents and ensures both parties can work more efficiently together.
In a previous article, we introduced the MR-HRC architecture, which leverages the MR layer to display a robot manipulator's intentions, defined as upcoming planned actions, through holographic cues <cit.>. With a user study in a collaborative scenario, we proved how such a form of holographic communication benefited the overall performance of the human-robot team. Nevertheless, the scheme proposed in <cit.> was limited to only rendering intentions for fixed manipulator robots. On the contrary, the present work aims to overcome these limitations by extending the developed framework into an expanded architecture named MR-HRC v2, which permits the communication of mobile manipulators' intentions over the MR layer. Although our work focuses on the communication aspect brought by MR, we also observe a contribution to the safety problem, derived from the additional safety induced by an intuitive, holographic communication of robot motions throughout an HRC process.
To validate the effectiveness of the expanded architecture, we carried out an experimental campaign with 20 participants in a complex scenario of mobile collaboration, where a human and a robot were required to interact while also carrying out independent, concurrent activities in a scenario simulating a logistic centre. The user study proved the architecture's effectiveness in driving collaboration. Furthermore, we observed a positive effect of MR-induced communication on task quality and team efficiency.
§ RELATED WORK
Effective communication between humans and robots is crucial for fluent collaboration, enabling agents to coordinate, synchronize, and achieve seamless interactions. HRC can require explicit and implicit communication and leverage visual, auditory, and tactile channels.
Commonly adopted solutions are based on gestural interaction, speech, and haptic feedback <cit.>.
Verbal communication offers an intuitive and expressive solution to convey robots' intentions to the operator <cit.>. However, it can be easily affected by background noise, which is frequent in industrial settings.
Various approaches using visual stimuli have been proposed in the literature.
Blinking lights and flashing cues <cit.>, or screens located in the workspace, may require high levels of effort to be fully understood by humans.
On the contrary, 2D projections on relevant objects are more intuitive but require structured environments <cit.>.
Augmented reality handheld devices enable the overlay of holograms to the environment, allowing an additional communication modality. However, since operators must hold the device, it compromises their ability to work freely <cit.>.
Conversely, by exploiting MR-based HMD devices, researchers could project static and dynamic holograms in the user's first-person view while ensuring the operator's hands are free <cit.>.
The effectiveness of HMD to display in advance the robot's planned motion has been demonstrated in studies of collaborative interactions between humans and a dual-arm manipulation system, such as Baxter from Rethink Robotics <cit.>. Greenfield et al. (2020) extended the research on intent communication through mixed reality by introducing volume visualization alongside path visualization to indicate the space the robot will occupy. They used colour gradients to represent proximity, allowing users to understand safe zones <cit.>. They validated this solution through an experimental study using a KUKA LBR iiwa 7 R800 manipulator in a collaborative setting. The results showed higher perceived safety among participants, enabling them to remain close to the robot without altering its operations.
Additionally, in a more recent study, Newbury et al., 2021 used an HMD to convey the robot's intent and perception pipeline output for synchronizing human-robot interactions during handovers <cit.>. Notably, the study introduced an innovative method utilizing holographic overlays to display the estimated object pose and intended grasp pose of a Franka Emika Panda manipulator, demonstrating its effectiveness in enhancing interaction safety, trustworthiness, fluency, and predictability.
These studies have significantly advanced the field of intent communication in HRC using mixed reality. However, their primary focus was fixed manipulators such as Rethink Robotics Baxter and KUKA LBR iiwa 7 R800. These studies showcase the potential of MR and AR technologies in enhancing safety, efficiency, and understanding between humans and robots in a shared workspace. However, mobile robots introduce different challenges and opportunities for intent communication, such as dynamic environment navigation and spatial awareness. Although HMDs have received significant attention regarding mobile robots' teleoperation and programming <cit.>, few works address the intent communication aspect. While Walker et al., 2018 proposed an MR framework designed to communicate drone motion intent, improving user understanding of robot goals
<cit.>, Zu et al., 2018 presented an MR interface that visualizes a robot's sensory data (e.g., laser scanner and cost map) alongside static planned paths and handover positions <cit.>. This article seeks to contribute to this research by extending an existing architecture <cit.> to suit the mobile manipulator's scenario by dynamically displaying both navigation and manipulation intentions. This approach enhances the human ability to supervise robots' activities and improves safety by visualizing the future robot's state. Additionally, we conduct a preliminary experimental study to assess the effectiveness of MR intent communication for mobile manipulators.
§ SYSTEM'S ARCHITECTURE
The MR-HRC v2 architecture, shown in Fig. <ref>, comprises two primary components: the Mixed Reality (MR) Application, highlighted in blue, and the robot application core, represented in green. This setup facilitates interaction between a human operator and a mobile manipulator by using an MR headset to convey the robot’s intentions to the human, focusing on a direct, intuitive communication method to represent the state of collaboration and the robot's actions.
The part of the architecture handling robot operations follows the classical sense-reason-act paradigm.
In this context, localization and recognition of objects (see Fig. <ref>) and robot self-localization in the environment are handled online by the Perception and SLAM modules using onboard sensors.
Then the High-Level Task Planner module plans the sequence of actions (e.g., pick a bottle or navigate to a shelf) based on the perceived objects and the final goal.
Actions resulting from the High-Level Task Planner are handled by the Motion Planner when they involve manipulations or by the Navigation Planner when they are navigation tasks.
The Mixed Reality Application communicates to the human the intentions of the robot using dynamic holograms that depict its future state (see Fig. <ref> and <ref>).
This approach was first introduced by Macciò et al., 2022 <cit.>, and it is made possible by applying a temporal delay (Δ t) to the robot commands generated by the Navigation Planner and the Motion Planner.
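A minimal sketch of this delay mechanism is shown below; it is a hypothetical illustration rather than the actual MR-HRC v2 implementation, and the class, method, and message names are invented. Commands are forwarded immediately to the hologram renderer and re-emitted to the physical robot only once Δ t has elapsed.

```python
from collections import deque
import time

class IntentDelayBuffer:
    """Buffer robot commands so that holograms preview them Delta-t seconds
    before the physical robot executes them (values are illustrative)."""

    def __init__(self, delay_s=5.0):
        self.delay_s = delay_s
        self._queue = deque()                 # (timestamp, command) pairs

    def push(self, command, now=None):
        """Store a command and return it immediately for the hologram renderer."""
        now = time.time() if now is None else now
        self._queue.append((now, command))
        return command

    def pop_due(self, now=None):
        """Return the commands whose delay has expired, ready for the robot."""
        now = time.time() if now is None else now
        due = []
        while self._queue and now - self._queue[0][0] >= self.delay_s:
            due.append(self._queue.popleft()[1])
        return due

# Example: the hologram previews the command at t=0, the robot receives it at t>=5 s.
buf = IntentDelayBuffer(delay_s=5.0)
buf.push({"type": "navigate", "goal": "shelf_1"}, now=0.0)
print(buf.pop_due(now=2.0))   # [] -> robot still idle, hologram already moving
print(buf.pop_due(now=5.0))   # [{'type': 'navigate', 'goal': 'shelf_1'}]
```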
§ IMPLEMENTATION, FRAMEWORKS, AND EQUIPMENT
We implemented the MR-HRC v2 architecture, presented in the previous section, by embracing the open-source paradigm and making tailored contributions to relevant projects as needed. We chose the Robot Operating System (ROS) framework as our implementation platform and, for validation purposes, integrated with TIAGo++, a robot from Pal Robotics. Here, we describe the implementation details for each module in the architecture, highlighting our contributions.
Robot operations are controlled with already available software modules.
The robot's arm motion planning is performed via MoveIt, whereas autonomous localization and navigation are handled by the ROS Navigation Stack.
We extended TIAGo's original perception capabilities to detect and localize objects in the environment.
On top of the robot's head, we mounted a ZED2 camera, calibrated with respect to the robot's reference frame and providing a wide-angle field of view (FOV).
Whenever a new frame from the ZED2's left camera is captured, it is processed with YOLOv5 to recognize and localize objects in 2D space.
The 3D pose of an object is estimated by projecting its 2D bounding box on the corresponding depth map, see Fig. <ref>.
The result is an array of 3D bounding boxes
that are used for planning pick actions.
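The sketch below shows one plausible way to lift a 2D detection to a 3D point using the depth map and a pinhole camera model; the median-depth heuristic, the function signature, and the camera intrinsics are assumptions for illustration, not the exact implementation of the Perception Module.

```python
import numpy as np

def bbox_to_3d(bbox, depth_map, fx, fy, cx, cy):
    """Lift a 2D detection (u_min, v_min, u_max, v_max) in pixels to a 3D point
    in the camera frame, using the median depth inside the box (pinhole model)."""
    u_min, v_min, u_max, v_max = [int(b) for b in bbox]
    patch = depth_map[v_min:v_max, u_min:u_max]
    z = np.nanmedian(patch[np.isfinite(patch)])      # robust to missing depth pixels
    u_c, v_c = 0.5 * (u_min + u_max), 0.5 * (v_min + v_max)
    x = (u_c - cx) * z / fx
    y = (v_c - cy) * z / fy
    return np.array([x, y, z])                       # later transformed to the robot frame

# Toy example with a synthetic depth map and made-up intrinsics.
depth = np.full((720, 1280), 1.5)                    # meters
print(bbox_to_3d((600, 300, 680, 420), depth, fx=700.0, fy=700.0, cx=640.0, cy=360.0))
```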
The Perception Module runs on an NVIDIA JETSON TX2, with 256 CUDA cores, mounted on the robot.
At this stage, the High-Level Task Planner holds a predefined sequential set of actions for the robot to perform, i.e., pick, transport, and place actions.
At run time, the predefined actions are grounded with values from the perception system, i.e., the object positions.
The high-level task planner also describes how the human and the robot interact. In particular, when an object is out of the robot's reachable workspace, the High-Level Task Planner activates the "handover mode". In this process, the robot executes a half-turn rotation around its base before assuming a predetermined handover pose (see Fig. <ref>). It then extends its right manipulator, opens its gripper, and waits for the human to hand over the desired object.
The Mixed Reality Application, developed using Unity and deployed to Hololens 2, a Microsoft state-of-the-art MR-HMD device, runs at 30Hz with the native Hololens resolution.
On top of the 3D engine, we use two SDKs, namely Vuforia, which is currently used to extract the robot position in the MR device reference frame using a 25h9 AprilTag attached to the robot base, and Microsoft Mixed Reality Toolkit (MRTK), responsible for overlaying 3D holograms on top of the real world.
In this implementation of the MR communication, the delays Δ t between the hologram and the robot motions have been set to 5 seconds,
a value empirically determined to allow a reasonable separation between holographic and subsequent robot movements without significantly affecting task pace.
Official Unity support is provided for interfacing with ROS[<github.com/Unity-Technologies/ROS-TCP-Endpoint>], enabling proper communication with the rest of the architecture.
The MR application's source code is openly available to other researchers through GitHub[<github.com/TheEngineRoom-UniGe/MR-Tiago>].
§ EXPERIMENTAL SETUP
The experiments detailed in this section aim to evaluate the impact of using Mixed Reality (MR) technology for communicating robot intentions in human-robot collaboration (HRC). Previous research has demonstrated the positive influence on HRC when MR is used to convey the intentions of static manipulators <cit.> and drones <cit.>. Building on this foundation, our study extends the investigation to a more complex scenario involving mobile manipulators.
Therefore, in the context of this work, we hypothesized that the adoption of MR technologies in HRC with a mobile manipulator could lead to:
H1 Reduced overall task completion time - We expect MR to streamline the human-robot interaction, leading to faster completion times.
H2 Enhanced team fluency - MR can foster smoother collaboration by:
H2.1 Increasing proactive human interventions to assist the robot - By providing real-time information and guidance through MR, humans can anticipate the robot's needs and offer assistance proactively.
H2.2 Decreasing the number of failed interactions - Clearer communication and shared situational awareness through MR should minimize misunderstandings and lead to fewer failed interactions.
H2.3 Decreasing the times in which human operator should pause their task to observe the robot - MR can keep humans informed about the robot's progress without requiring them to constantly monitor its actions.
§.§ Collaborative Scenario
To test MR-HRC v2 and evaluate our hypotheses, we created a warehouse setting where humans share their working environment with a robot[<https://youtu.be/Ni-a2fXTb7o>].
The workspace is a room with three shelves and two crates, arranged to force humans and robots to cross paths unintentionally throughout the experiment, see Fig. <ref>.
The human teammate is supposed to restock Shelf 3 with bottles taken from Crate A while the robot prepares an order by picking objects from Shelf 1 and Shelf 2 and placing them into Crate B.
For the human task, twelve bottles with random numeric labels are inside Crate A, shuffled before the beginning of each experiment.
The human operator should pick one item at a time from the crate and place it on Shelf 3, rearranging the bottles by labels in descending order.
Four bottles are distributed between Shelf 1 and 2, with one bottle intentionally placed out of the robot's reach.
Bottle positions on the shelves are not predetermined, and the robot randomly chooses which bottle to reach and pick next.
Robot actions may fail (e.g., if an object slips from the gripper), or grasping may be infeasible (e.g., when the bottle is outside the robot's workspace).
In these cases, the human should understand the occurrence of the critical situation
and supervise the behaviour of the robotic teammate, possibly assisting it
by correcting its grasp or collecting the unreachable bottle and handing it over.
While the robot is designed to be autonomous, the confined and intricate nature of the workspace can occasionally lead to situations where the robot and human unintentionally obstruct each other's movements, disrupting their tasks. To mitigate these issues, the human worker acts as a supervisor for the robot, monitoring its actions and intervening when necessary.
It is worth noting that, given the presence of one bottle outside the robot workspace, the human is forced to perform a handover with the robot once during the experiment.
In these cases, the robot is instructed to reach a predefined handover pose (see Fig. <ref>) and wait for the human to bring the object.
The experiment is complete when the human has arranged all bottles on Shelf 3, and the robot has filled Crate B with the other four bottles.
§.§ User Study
We carried out an experimental campaign with N=20 participants (15 males and 5 females), all aged between 21-33 (Avg =23.7, StdDev = 2.49).
All participants had little or no prior experience with MR-HMDs and interaction with robotic platforms.
The experiment has been approved by the Ethical Committee for research at the University of Genoa through protocol n. 2021/65, issued on November 18, 2021.
Participants were asked to complete the joint activity with TIAGo++
(C1) without MR communication, or
(C2) with MR communication.
Participants were randomly assigned to one of the two conditions.
Before the experiment, candidates were instructed on their tasks and how to interact with the robot.
For participants performing under C2, a very brief overview of how to navigate the holographic menus of the HMD was provided, along with instructions for the initial calibration of the MR application.
After that, the experiment could start.
Given the preliminary nature of the study and the complexity of the experimental scenario, we chose to focus our analysis exclusively on the evaluation of H1 and H2, which only considers metrics associated with team fluency.
The two current hypotheses have been tested independently, using quantities measured during the experimental campaign. In particular, for
H1, we manually timed each trial and measured how long it took for the human operator and the robot to complete their respective tasks.
Each experiment was video recorded and subsequently analyzed offline by a coder to extract the following metrics related respectively to H2.1, H2.2 and H2.3:
M1 Proactive human interventions - This metric measures the number of times a participant proactively assisted the robot during task execution. Examples include correcting a bottle's position for a more stable grasp or placement in the delivery box.
M2 Number of failed interactions - This metric captures instances where human-robot collaboration deviated from its intended course. Here is a breakdown of the categories that contribute to this metric:
∙ Path Interference and Positioning Errors - This occurs when the human and robot physically block each other's movements (path interference). This disrupts the robot's navigation, causing it to deviate from its planned path and potentially leading to positioning errors (failing to reach the intended location for picking up or delivering a bottle).
∙ Human Intervention Failure - These are situations where the robot is failing a task and the human does not intervene to correct it.
M3 Time Spent Monitoring the Robot - This metric measures the total occurrences when participants paused their task to evaluate the robot's behaviour. It reflects the participant's uncertainty about the robot's intention.
§ RESULTS
The results of the user study are hereby reported, with a focus on the two aforementioned hypotheses.
As for H1, Fig. <ref> and <ref>, respectively, report the results related to completion time for the overall collaboration and completion time for the human restocking task only.
It is possible to note in Fig. <ref> that the time required to complete the collaborative task remained comparable in both experimental conditions, typically ranging between 355 and 387 seconds.
However, Fig. <ref> shows that participants under C2 completed their restocking task, on average, in around 250 seconds, whereas the average measured time in condition C1 was around 330 seconds. While these numbers may appear quite large for such a simple restocking task, it is to be noted that participants were also simultaneously required to supervise the robot's actions and intervene when necessary.
As such, an interpretation of the results conveyed by Fig. <ref> and <ref> is given as follows:
participants always completed their tasks before the robot, and the total completion time depended only on the robot's performance, which was comparable in the two conditions.
Nevertheless, participants in condition C2, aware of the robot's upcoming intentions thanks to the holographic interface, managed to plan their movements and actions synchronously with those of their robot teammate, resulting in fewer mutual obstructions and faster completion times on the human's side.
To further validate such results, we performed a t-test on the distributions depicted in Fig. <ref> and <ref>, which could be assumed normal through the Shapiro-Wilk test <cit.> (p-value > 0.05 for all distributions).
For the data in Fig. <ref>, the t-test returned p-value > 0.4, confirming that no significant difference could be observed in the total completion time in conditions C1 or C2.
Conversely, the t-test performed on the distributions in Fig. <ref> yielded p-value < 0.01, thus corroborating the significant difference between times measured on completion of the restocking task under C1 or C2.
To evaluate hypothesis H2, we adopted the various metrics illustrated in the previous Section and compared them under the two experimental conditions.
Fig. <ref> reports the results related to the amount of human assistance offered to the robot (M1).
The histograms specifically show the distribution of participants across different numbers of proactive interventions to assist the robot in completing the collaboration task.
For example, in condition C1, 60% of participants concluded their experiment without helping the robot. The remaining 40% only intervened once.
In contrast, participants in condition C2, who received anticipatory holographic communication, exhibited greater proactiveness. This is reflected in the distribution: 40% intervened at least once, another 40% intervened twice, and the remaining participants intervened three times.
Since data distribution in Fig. <ref> could not be assumed normal, we adopted the non-parametric Wilcoxon signed-rank test <cit.>.
The test yielded a statistic W = 36, which should be compared with the critical value W_c = 60 extracted from <cit.>, fixing the significance level α = 0.05 and the sample size N=20.
Since W < W_c, we could reject the null hypothesis and confirm the significant difference between the degree of human assistance in conditions C1 and C2. It is important to note that the software and hardware controlling the robot's behaviour were identical in both conditions (C1 and C2). This ensures that the observed differences in human intervention rates stem from the communication method (anticipatory holographic vs. no) and not from potential performance issues with the robot in C2.
Similar considerations can be made by observing Fig. <ref>, which depicts data related to failed interactions during the experiments (M2).
Under condition C1, participants lacked an intuitive communication channel with the robot. This resulted in a higher number of failed interactions: 40% failed three times, 40% failed twice, 10% failed once, and only 10% avoided any failure.
In contrast, under condition C2 participants were more aware and responsive due to the improved communication method. This led to a significant decrease in failed interactions: 40% had no failure, 50% failed only once, and the remaining 10% failed twice.
Again, we employed the Wilcoxon test to evaluate the significance of such results.
In this case, the test yielded statistic W = 4, and by comparing this value with the critical one (W_c) mentioned before, we could confirm the statistical difference between the two experimental conditions.
Fig. <ref> illustrates the number of times participants interrupted their restocking task to observe the robot (M3).
In condition C1, participants interrupted their restocking task an average of four times per trial. This frequent interruption suggests that participants found it difficult to infer the robot's intention without a holographic communication channel. In contrast, when the holographic interface is introduced (condition C2) interruptions significantly reduce, 75% of participants paused their task less than three times. This aligns with the shorter task time observed in Fig. <ref>.
We assessed the significance of these results through a t-test, carried out after ensuring that distributions could be assumed normal (Shapiro-Wilk test yielded p-values > 0.2 for both cases).
The t-test returned a p-value < 0.01, corroborating the statistical difference between C1 and C2.
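The statistical procedure used above (a Shapiro-Wilk normality check followed by either a t-test or the Wilcoxon signed-rank test) can be sketched with SciPy as follows; the arrays are randomly generated placeholders rather than the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder samples standing in for the per-participant measurements
# (10 participants per condition); not the study's actual data.
c1 = rng.normal(330, 30, size=10)   # e.g., restocking time without MR (s)
c2 = rng.normal(250, 30, size=10)   # e.g., restocking time with MR (s)

# 1) Check normality of each distribution (Shapiro-Wilk).
normal = all(stats.shapiro(x).pvalue > 0.05 for x in (c1, c2))

if normal:
    # 2a) Parametric comparison: two-sample t-test.
    stat, p = stats.ttest_ind(c1, c2)
else:
    # 2b) Non-parametric alternative used in the paper for non-normal metrics
    #     (Wilcoxon signed-rank; pairs equal-length samples).
    stat, p = stats.wilcoxon(c1, c2)

print(f"statistic={stat:.3f}, p-value={p:.4f}")
```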
§ CONCLUSIONS
We presented MR-HRC v2, a software architecture that utilizes Mixed Reality to facilitate smooth and intuitive collaboration between humans and robots. MR serves as a communication layer that intuitively conveys the robot's intentions and forthcoming actions to the human collaborator. This approach, which builds on anticipatory communication, was originally proposed by Macciò et al., 2022 <cit.>, and has been expanded to include scenarios with mobile manipulators, enabling the conveyance of navigation intentions through moving holograms. The architecture was evaluated in a user study involving human interaction with TIAGo++, a state-of-the-art dual-arm manipulator from PAL Robotics, within a collaborative environment. Such an experimental campaign allowed us to evaluate the effectiveness of MR as a medium to communicate the robot's intentions, comparing two experimental conditions. In particular, we found that participants experiencing the whole holographic communication (condition C2) accomplished a smoother collaboration with the robot. This resulted in fewer mutual hindrances and improved awareness of the teammate's actions, which may lead to safer collaborative conditions.
Future studies in this direction should also assess the subjective perception of humans regarding the interaction with the robot and the holographic communication interface.
§ ACKNOWLEDGEMENT
This research was partially supported by the Italian government under the National Recovery and Resilience Plan (NRRP), Mission 4, Component 2 Investment 1.5, funded from the European Union NextGenerationEU and awarded by the Italian Ministry of University and Research. Furthermore, it has been also partially supported by the NVIDIA Academic Hardware Grant Program.
|
http://arxiv.org/abs/2409.02371v1 | 20240904014109 | Unfolding Videos Dynamics via Taylor Expansion | [
"Siyi Chen",
"Minkyu Choi",
"Zesen Zhao",
"Kuan Han",
"Qing Qu",
"Zhongming Liu"
] | cs.CV | [
"cs.CV"
] |
|
http://arxiv.org/abs/2409.03196v1 | 20240905023903 | End-to-End Lyapunov-Based Eclipse-Feasible Low-Thrust Transfer Trajectories to NRHO | [
"Nicholas P. Nurre",
"Ehsan Taheri"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.IM",
"math.OC"
] |
End-to-End Lyapunov-Based Eclipse-Feasible Low-Thrust Transfer Trajectories to NRHO
Nicholas P. Nurre (Graduate Student, Department of Aerospace Engineering, Auburn University, 141 Engineering Dr, Auburn, AL 36849) and
Ehsan Taheri (Assistant Professor, Department of Aerospace Engineering, Auburn University, 141 Engineering Dr, Auburn, AL 36849)
September 5, 2024
========================================================================================================================================================================================================================================================================
§ ABSTRACT
Generating low-thrust transfer trajectories between Earth and the Near Rectilinear Halo Orbit (NRHO), that is selected for NASA's Gateway, can be challenging due to the low control authority available from the propulsion system and the important operational constraint that the duration of all eclipses has to be less than a prescribed 90-minute threshold. We present a method for generating eclipse-feasible, minimum-time solutions to the aforementioned trajectory design problem using a Lyapunov control law. Coasting is enforced during solar eclipses due to both the Earth and Moon. We used particle swarm optimization to optimize the NRHO insertion date, time of flight, and control law parameters according to a cost function that prioritizes 1) convergence to the target orbit, 2) satisfaction of eclipse-duration constraints, and 3) minimization of time of flight. Trajectories can serve as initial guesses for NASA's high-fidelity trajectory design tools such as Copernicus and GMAT.
§ INTRODUCTION
Design of low-thrust transfers to the vicinity of the Moon is of interest, with the selection of a Near-Rectilinear Halo Orbit (NRHO) by NASA as a staging platform for exploration of the Moon and beyond <cit.>. Designing low-thrust trajectories can be quite difficult due to the combination of the very low control authority available from low-thrust propulsion systems and the highly nonlinear dynamical environment of the cislunar region. Since existing low-thrust spacecraft are powered with solar arrays, it is essential that no solar eclipses last longer than a certain time interval. For instance, for the transfer of the Co-Manifested Vehicle (CMV) of Gateway, all eclipse durations need to be less than 90 minutes <cit.>. Extended operation of spacecraft within eclipses can deplete the batteries and lead to a complete loss of the vehicle.
Due to low propulsive accelerations on the order of 1.0×10^-4 m/s^2, transfer trajectories can require times of flight on the order of a year or longer. Furthermore, low-thrust trajectories consist of different phases, where the primary gravitational influence shifts from Earth to the Moon. Therefore, transfer trajectories are typically solved in multiple subphases. Ref. <cit.> gives an overview of NASA's third and most recent Design Reference Mission (DRM) for the transfer of the CMV, which was designed in four subphases using Copernicus <cit.>. Indirect optimization methods are also used for designing low-thrust trajectories to quasi-periodic, near-rectilinear Halo orbits that leverage ephemeris-driven, “invariant manifold analogs” as long-duration asymptotic terminal coast arcs <cit.>. All discontinuous events (such as entry into and exit from Earth eclipses and throttle switches) are made smooth through the powerful and novel Composite Smooth Control (CSC) framework <cit.>. Ref. <cit.> solves a similar transfer problem in two subphases with an indirect method to optimize a powered Earth-spiral subphase that is then heuristically patched into a second ballistic subphase. Numerical continuation and homotopy methods are fundamental to the convergence of the Hamiltonian two-point boundary-value problems associated with indirect methods <cit.>.
To consolidate the design approach, we solved a similar problem in one phase (i.e., an Earth-centered perturbed two-body model is used with perturbations due to the Moon, Sun, and Earth's second zonal harmonic subject to Earth eclipses) with a multiple-shooting indirect method <cit.>. Both minimum-time and minimum-fuel solutions were achieved by starting with a high level of spacecraft acceleration and performing numerical continuation to gradually reduce the value of acceleration.
Another approach for designing low-thrust many-revolution trajectories is to use Lyapunov-based approaches. Lyapunov control (LC) is based on the Lyapunov stability theory, using which an LC law is obtained by finding the expression for control which makes the time derivative of a control-Lyapunov function (CLF) for the system negative <cit.>.
The CLF is positive in terms of the states of the system and should become 0 at the equilibrium (corresponding to a desired “goal” or target state). The method can be likened to converting the second-order trajectory optimization problem into a first-order stabilization problem. We note that LC has been used extensively for low-thrust trajectory optimization <cit.>. For instance, Ref. <cit.> applies Q-law <cit.> to solve Earth-Moon transfers in two subphases. The results are shown to serve as high-quality initial guesses for GPOPS-II <cit.> – a pseudospectral direct method solver. The authors leverage the computational efficiency of LC to perform an extensive trade study over potential epochs and departure orbits, allowing for an a posteriori analysis of the eclipses. Ref. <cit.> proposes a hybrid LC based methodology for designing Earth-Moon transfers in a full-ephemeris model. A study on the sensitivity of the LC law to missed-thrust events is also performed to demonstrate the robustness of the control law.
In this paper, we consider a single-phase design approach similar to what we considered in Ref. <cit.>; however, a Lyapunov control (LC) law is used instead of an indirect method to solve the trajectory optimization problem.
Further, convergence of LC laws is asymptotic and depends on the rate at which the Lyapunov function value is decreased. Finite-time convergence to the goal can be achieved by parameterizing the CLF and optimizing these new parameters with respect to a cost function to obtain near-optimal solutions (for example, with respect to flight time or fuel consumption). In addition, convergence tolerances can be set that define when the propagated state is “close enough” to the goal/target state. LC laws are also straightforward to design and implement, and more importantly, are closed-loop in nature (i.e., they only depend on the current state) <cit.>. Thus, a motivation for this work is to rapidly solve low-thrust transfer problems. These solutions, in turn, can be used as initial guesses for other high-fidelity trajectory optimization tools that will provide more optimal solutions that precisely satisfy boundary conditions like, for instance, the indirect method in Ref. <cit.> or Copernicus.
An important operational constraint, for low-thrust trajectories to the Gateway, is that the duration of all eclipses has to be less than a prescribed 90-minute threshold. Mission design strategies for ensuring all solar eclipse durations are less than the prescribed time often entail generating a large set of reference trajectories for a range of departure epochs, as was done for Artemis I <cit.>, to have a variety of options. However, many departure windows may not be feasible. Ref. <cit.> reports that 18% of all launch days were infeasible for Artemis I. The eclipse-duration constraint can pose more challenges for extremely low-thrust propulsion systems that require significantly longer times of flight. For low-thrust transfers departing from a Geostationary Transfer Orbit (GTO) to the Moon, Earth eclipse events highly depend on the departure epoch and GTO orientation. Adjustments can be made to these values to mitigate eclipse durations. However, even with an analyst's extensive experience, this post-processing approach can take many iterations to identify feasible launch opportunities. Long intermittent Earth and Moon eclipses occurring in cislunar space, when the spacecraft's relative velocity is much slower, are possible and not as easily preventable.
Incorporating eclipse-duration constraints within the trajectory optimization could increase the number of feasible departure windows and improve optimality by allowing the trajectory optimization to be “aware” of such constraints within the optimization process. Eclipse-duration constraints are difficult to enforce due to the fact that 1) the number of eclipses is not known a priori and 2) the number and duration of eclipses can change within the trajectory optimization process.
The contributions of the paper are as follows. A LC law is derived and used to solve transfer trajectory optimization problems similar to those in Refs. <cit.>. The LC law can only be used to transfer the spacecraft starting from a fully defined state into an orbit, i.e., in its standard formulation LC cannot be used for rendezvous type transfers unless modifications are applied to the problem formulation <cit.>. Therefore, always starting from a point on the NRHO, the control law is applied backward in time for a departure from a GTO to an insertion at NRHO and forward in time for a departure from NRHO to an insertion at GTO. We only consider transfer maneuvers from GTO to NRHO. The GTO departure time and true anomaly are free and the NRHO insertion time is fixed. However, we consider the NRHO insertion time to be a design parameter. Because the ephemeris-propagated and ephemeris-corrected NRHO provided in Ref. <cit.> is considered, the entire state on the NRHO can be defined by the time (i.e., epoch). While numerically integrating the spacecraft equations of motion, event detection is used to determine if the solutions converge, if the spacecraft intersects the surface of the Earth or Moon, and when eclipses due to the Earth and Moon occur. Coasting is enforced during eclipses and the duration of each eclipse is calculated. Particle swarm optimization <cit.> is used to optimize the NRHO insertion date, time of flight, and parameters of the CLF with respect to a hierarchical cost function. The cost function prioritizes 1) convergence to the target orbit, 2) satisfaction of the maximum-eclipse-duration constraint, and 3) minimization of the time of flight.
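A schematic sketch of such a hierarchical cost is given below; the weights, tolerance, and function signature are illustrative assumptions rather than the values used in this work, and the particle swarm simply evaluates this scalar for each candidate set of design parameters (NRHO insertion date, time of flight, and CLF parameters).

```python
def transfer_cost(miss_norm, eclipse_durations_min, tof_days,
                  w_miss=1.0e4, w_ecl=1.0e2, max_ecl_min=90.0, tol=1.0e-3):
    """Hierarchical cost evaluated for each particle:
    1) convergence to the target orbit (scaled terminal-state miss),
    2) eclipse-duration feasibility (90-minute threshold),
    3) time of flight.
    Weights and tolerances here are illustrative, not the values used in the paper."""
    cost = w_miss * max(miss_norm - tol, 0.0)
    cost += w_ecl * sum(max(d - max_ecl_min, 0.0) for d in eclipse_durations_min)
    cost += tof_days
    return cost

# Example: converged trajectory, one 95-minute eclipse, 250-day transfer.
print(transfer_cost(5.0e-4, [42.0, 95.0, 10.0], 250.0))   # 750.0 = 100*5 + 250
```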
§ DYNAMICAL MODEL
The entire transfer problem is solved in the J2000 Earth-centered inertial (ECI) frame.
All accelerations are expressed with respect to this frame. The spacecraft's motion is modeled with position, r=[x, y, z]^⊤, and velocity, v=[v_x, v_y, v_z]^⊤, vectors. The spacecraft's state vector is x=[r^⊤, v^⊤]^⊤ and the equations of motion are defined as,
ẋ = f(t,x,α̂) = [ v; a_kep + a_3rd + a_J_2 + α̂ a_scδ_ecl ],
where t denotes time, a_kep is the two-body (Keplerian) acceleration due to the Earth, a_3rd is the collection of third-body gravitational perturbations, and a_J_2 is the acceleration due to Earth's J_2 gravitational perturbation. In the last acceleration term, α̂ a_scδ_ecl, which denotes the acceleration produced by the propulsion system, α̂ is the thrust steering unit vector and δ_ecl∈{0,1} is the eclipse-triggered throttle factor. Since the contribution and emphasis of the work is on satisfying the maximum-eclipse-duration constraint, a constant spacecraft acceleration is assumed with its value set to a_sc=1.0×10^-4 m/s^2. This value is chosen to match the transfer problems in Refs. <cit.>. Propellant-mass considerations belong to our future work. The thrust steering unit vector can freely orient in space, but is constrained to a unit vector, i.e.,
α̂^⊤α̂ = 1.
In this work, the change in spacecraft mass is not taken into account. However, our future work will investigate implementing an LC law coasting mechanism to obtain suboptimal minimum-fuel solutions, like the one that is introduced with Q-law in Ref. <cit.>. Earth's two-body acceleration can be written as,
a_kep = -μ_Earth/r^3r,
where r=‖r‖ and μ_Earth is the gravitational parameter of the Earth. Perturbing accelerations due to the gravity of the Moon, Sun, and Jupiter are considered and written as <cit.>,
a_3rd = -∑_k∈ Kμ_k(r + F(q_k)r_k)/‖r - r_k‖^3,
where
F(q_k) = q_k(3 + 3q_k + q_k^2)/(1+(1+q_k)^3/2), q_k = r^⊤(r-2r_k)/(r_k^⊤r_k),
with K∈{Moon, Sun, Jupiter}, and r_k denotes the position of the k-th body with respect to the Earth. Note that this formulation avoids any numerical error due to cancellations when terms are of significantly different values <cit.>. The acceleration vector due to Earth's J_2 gravitational perturbation is written as <cit.>,
a_J_2 = (3J_2μ_Earth R_Earth^2)/(2r^4)[ x/r(5z^2/r^2-1), y/r(5z^2/r^2-1), z/r(5z^2/r^2-3) ]^⊤,
where R_Earth is the mean radius of the Earth and J_2=1082.63×10^-6.
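A compact sketch of the resulting gravitational acceleration model (two-body term, third-body terms in Battin's F(q) form, and the J_2 term) is given below; the function signature and the dictionary-based interface for the third bodies are illustrative choices rather than the implementation used here.

```python
import numpy as np

def accel_inertial(r, bodies, mu_e, J2, R_e):
    """Total gravitational acceleration in the ECI frame (no thrust term).
    `bodies` maps each third body to (mu_k, r_k), with r_k its ECI position."""
    rn = np.linalg.norm(r)
    a = -mu_e * r / rn**3                                   # two-body term

    for mu_k, r_k in bodies.values():                       # Battin's F(q) formulation
        q = r @ (r - 2.0 * r_k) / (r_k @ r_k)
        F = q * (3.0 + 3.0 * q + q**2) / (1.0 + (1.0 + q)**1.5)
        a += -mu_k * (r + F * r_k) / np.linalg.norm(r - r_k)**3

    x, y, z = r
    c = 1.5 * J2 * mu_e * R_e**2 / rn**4                    # J2 oblateness term
    a += c * np.array([x / rn * (5.0 * z**2 / rn**2 - 1.0),
                       y / rn * (5.0 * z**2 / rn**2 - 1.0),
                       z / rn * (5.0 * z**2 / rn**2 - 3.0)])
    return a

# Example with illustrative numbers (km, km^3/s^2): Earth plus the Moon only.
r_sc = np.array([7000.0, 0.0, 0.0])
moon = (4902.800, np.array([384400.0, 0.0, 0.0]))
print(accel_inertial(r_sc, {"moon": moon}, mu_e=398600.4418,
                     J2=1082.63e-6, R_e=6378.14))
```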
A canonical distance unit (DU) is defined by one Earth radius, R_Earth, and a canonical time unit (TU) is defined such that the scaled value of Earth's gravitational parameter is 1 DU^3/TU^2. These canonical distance and time units are then used to scale all states and parameters of the dynamical model. Future work could include investigating more sophisticated scaling methods and even time regularization methods such as Ref. <cit.> that might make numerical integration of Eq. (<ref>) more efficient. All planetary ephemerides and constants are obtained using NASA's SPICE toolkit <cit.> and the generic kernel files[<https://naif.jpl.nasa.gov/pub/naif/generic_kernels/>]
, , , and .
It is computationally inefficient to call SPICE routines during numerical integration. Thus, all ephemerides obtained positions, i.e., r_k appearing in Eq. (<ref>) and in the eclipse model presented in the next section, are fitted by a spline function, which has proved to be more computationally efficient. The interpolation is performed to an accuracy on the order of 0.1 m.
§ ECLIPSE MODEL
In this work, eclipses due to the Earth and Moon are considered. The cylindrical eclipse model from Ref. <cit.> is used. The eclipse model assumes the Earth, Moon, and Sun to be perfect spheres and the spacecraft to be a point mass. Eclipse coasting is enforced during both umbra (total eclipse) and penumbra (partial eclipse). Thus, only the penumbra exits and entries are calculated, since umbra occurs inside penumbra. Figure <ref> illustrates the Sun-Earth shadow geometry. Note that Figure <ref> is greatly exaggerated and not drawn to scale. The same geometry is also used for modeling Moon eclipses.
Let r_Sun=‖r_Sun‖ be the distance between the Earth and the Sun and let η∈ [0,1]. From the geometrical proportion of the penumbral cone, we have
(1-η)r_Sun/(2R_Earth)=η r_Sun/(2 R_Sun),
where the value η can be expressed as,
η = R_Sun/R_Earth+R_Sun.
Therefore, the angle that the penumbral cone makes with respect to r_Sun can be expressed as,
θ_p = sin^-1(R_Earth/((1-η)r_Sun)) = sin^-1((R_Earth + R_Sun)/r_Sun).
The position of the spacecraft projected onto r_Sun is defined as,
r̅ = (r^⊤r̂_Sun)r̂_Sun,
where r̂_Sun=r_Sun/‖r_Sun‖. Earth shadows only occur on the side of the Earth opposite the Sun, i.e., the spacecraft can only encounter a shadow when r^⊤r̂_Sun<0. Let the distance between the spacecraft and the center of the penumbral cone be defined as,
γ_Earth = ‖r - r̅‖,
and let the distance between the penumbral terminator and the center of the penumbral cone at the projected spacecraft location be defined as,
κ_Earth = ((1-η)r_Sun + ‖r̅‖)tan(θ_p).
Therefore, it can be stated that the spacecraft is in the Earth shadow when γ_Earth < κ_Earth and r^⊤r̂_Sun<0 and not in a shadow otherwise.
Let γ_Moon and κ_Moon denote the same definitions as Eqs. (<ref>) and (<ref>), respectively, but for the Moon-Sun eclipse model. Also, let r_sc/Moon=r - r_Moon be the position of the spacecraft with respect to the Moon and r̂_Sun/Moon = (r_Sun - r_Moon)/‖r_Sun - r_Moon‖ be the unit vector pointing towards the Sun with respect to the Moon. The following switching functions can then be defined
S_ecl,Earth,1 = γ_Earth - κ_Earth , S_ecl,Earth,2 = r^⊤r̂_Sun,
S_ecl,Moon,1 = γ_Moon - κ_Moon , S_ecl,Moon,2 = r_sc/Moon^⊤r̂_Sun/Moon.
The eclipse-triggered throttle factor in Eq. (<ref>), δ_ecl, can be defined as a multiplication of two factors as,
δ_ecl = δ_ecl,Earth×δ_ecl,Moon,
where
δ_ecl,Earth = 0, S_ecl,Earth,1<0 and S_ecl,Earth,2<0,
1, else,
and
δ_ecl,Moon = 0, S_ecl,Moon,1<0 and S_ecl,Moon,2<0,
1, else.
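A compact numerical sketch of the shadow test and the resulting throttle factor is given below (Python/NumPy; a sketch under the stated assumptions rather than the exact implementation, with all positions expressed relative to the occulting body):

import numpy as np

def shadow_throttle(r, r_sun, R_body, R_sun):
    # r: spacecraft position relative to the occulting body; r_sun: Sun position
    # relative to the same body. Returns 0 inside the penumbra, 1 otherwise.
    r_sun_n = np.linalg.norm(r_sun)
    s_hat = r_sun / r_sun_n
    proj = np.dot(r, s_hat)                   # S_ecl,2-type check: shadow only if negative
    if proj >= 0.0:
        return 1
    eta = R_sun / (R_body + R_sun)
    theta_p = np.arcsin((R_body + R_sun) / r_sun_n)
    r_bar = proj * s_hat                      # projection of r onto the Sun line
    gamma = np.linalg.norm(r - r_bar)         # distance to the shadow-cone axis
    kappa = ((1.0 - eta) * r_sun_n + np.linalg.norm(r_bar)) * np.tan(theta_p)
    return 0 if gamma < kappa else 1          # S_ecl,1 = gamma - kappa

def delta_ecl(r, r_sun, r_moon, R_earth, R_moon, R_sun):
    # Product of the Earth- and Moon-shadow factors, as in the definition above.
    d_earth = shadow_throttle(r, r_sun, R_earth, R_sun)
    d_moon  = shadow_throttle(r - r_moon, r_sun - r_moon, R_moon, R_sun)
    return d_earth * d_moon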
During numerical integration of Eq. (<ref>), an event-detection algorithm is used to determine the exact time of each eclipse entry, t_ecl,n^-, and exit, t_ecl,n^+, for all N eclipses, n=1,…,N. The duration of each eclipse, denoted as d_ecl,n = t_ecl,n^+ - t_ecl,n^-, is also calculated. Note that the total number of Earth and Moon eclipses, N, is not known in advance. The eclipse constraint, that all eclipses be less than 90 minutes <cit.>, is considered in this work and can be expressed formally as,
d_ecl,n≤ 90 minutes, ∀ n = 1,…,N.
Unlike the eclipse model used by the authors in Ref. <cit.>, the eclipse model from Ref. <cit.> is also defined in the interior of the occulting bodies. This was the main reason that we adopted this eclipse model. While optimizing all the parameters of the transfer problem, event detection is used to stop integration when the spacecraft intersects the Earth or Moon. This logic works most of the time; however, sometimes the dynamics become significantly nonlinear around the Moon and the event-detection algorithm misses the intersection event. When the model from Ref. <cit.> is used, undefined (non-numeric) values are returned since that model is not defined for the domain inside the occulting body, which breaks the optimization routine. A similar problem is reported with the eclipse model used in <cit.> along with a strategy for overcoming it. The previously presented eclipse model circumvents this problem altogether.
§ TRAJECTORY OPTIMIZATION PROBLEM
The transfer problem, departing from a GTO at a date t_0 and inserting into NRHO at a later date t_f, is solved backwards in time over the time horizon
t∈[t_f,t_0], t_f > t_0.
The GTO orbital parameters are taken from Ref. <cit.> with apogee and perigee altitudes of 33,900 km and 350 km, respectively, and an inclination of 28.5 deg. Because it is stated in Ref. <cit.> that the right ascension of the ascending node (RAAN) and argument of perigee (ARGP) of the GTO are unrestricted for the initial analysis, only the specific angular momentum, eccentricity, and inclination are considered in the boundary conditions for the GTO, leaving the RAAN and ARGP as free parameters. True anomaly is also free; however, this is inherent to the LC law that will be used to solve the trajectory optimization problem.
The boundary conditions on the GTO are therefore defined as
h(x(t_0)) = h_GTO, e(x(t_0)) = e_GTO, i(x(t_0)) = i_GTO,
where subscript `GTO' denotes values of the GTO and the specific angular momentum, h, eccentricity, e, and inclination, i, are defined as <cit.>,
h = r×v, e = [(v^2-μ_Earth/r)r - (r^⊤v)v]/μ_Earth, i = cos^-1(h_z/‖h‖),
where v=‖v‖ and h_z denotes the z component of the specific angular momentum vector in the J2000 ECI frame.
The spacecraft's osculating orbit with respect to the Earth is expected to become hyperbolic close to the Moon. Thus, the angular momentum was selected as opposed to, for example, the semimajor axis. The semiparameter, defined as p=h^2/μ_Earth <cit.>, could also be an acceptable element to target. Ultimately, the goal is to target only the size, shape, and inclination of the GTO. A variety of other boundary conditions could be formulated, some of which could lead to a better CLF and, subsequently, a better control law; this will be investigated in our future work.
The NRHO ephemeris is obtained from the kernel file[<https://naif.jpl.nasa.gov/pub/naif/misc/MORE_PROJECTS/DSG/>]
described in Ref. <cit.>. The boundary condition at t_f is defined as
x(t_f) = x_NRHO(t_f).
The minimum-time constant-acceleration transfer trajectory optimization problem can be stated as,
min_α̂,t_0,t_f J = t_f-t_0,
s.t., Eqs. (<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>).
A parameterized LC law based on the goal defined by Eq. (<ref>) will be derived and substituted into Eq. (<ref>). Eq. (<ref>) can then be solved as a parameter optimization problem using a heuristic algorithm in which Eq. (<ref>) is integrated over the time horizon given by Eq. (<ref>) with the initial condition given by Eq. (<ref>).
We note that Eq. (<ref>) is quite challenging to enforce directly since the number of eclipses, N, is not known a priori and can also change during the iterations of the optimization process. Instead, Eq. (<ref>) is enforced as a soft penalty along with a penalty to further promote satisfaction of Eq. (<ref>) since it is not guaranteed. These two penalties and the time of flight are encoded into a single cost function that the heuristic algorithm minimizes. This cost function will be explained in detail after the control law is derived.
§ LYAPUNOV CONTROL LAW
The control-Lyapunov function (CLF) is defined as,
V(x) = 1/2w(x)^⊤Kw(x),
where the constraint vector, w(x), is defined as,
w(x) = [ h(x) - h_GTO, e(x) - e_GTO, i(x) - i_GTO ]^⊤.
Note that no scaling is performed on Eq. (<ref>) because the canonical scaling method ensures that h has the same order of magnitude as e and i.
Instead of using a diagonal parameter matrix K, a full parameter matrix is used to consider a larger family of CLFs, as proposed in <cit.>. The 3×3 positive-definite matrix K is defined through a novel eigendecomposition method as,
K= QΛQ^⊤,
where the column vectors of Q make up the eigenvectors of K and Λ is a diagonal matrix of the eigenvalues of K. This parameterization is based on Ref. <cit.> and allows an efficient (i.e., minimal number of parameters) representation of a full matrix that is guaranteed to be positive-definite subject only to bounds on its parameters. The eigenvalue matrix, Λ, is simple to construct, i.e.,
Λ = [ k_1 0 0; 0 k_2 0; 0 0 k_3 ],
where the parameters k_1, k_2, and k_3 are constrained to be real and positive. The matrix Q can be generated as a rotation matrix, and Ref. <cit.> outlines a generalized way to generate rotation matrices in n dimensions. Because Q is 3×3, in this work, the method in Ref. <cit.> reduces to any standard Euclidean rotation matrix parameterized by 3 angle-like parameters, k_4, k_5, and k_6.
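As an illustration, the positive-definite matrix can be assembled from the six parameters as follows (Python/NumPy sketch; the particular z-y-z Euler-angle sequence is an assumption, since any three-angle parameterization of a proper rotation serves the same purpose):

import numpy as np

def rotation_zyz(k4, k5, k6):
    # z-y-z Euler angles; bounds 0 <= k4 <= pi, 0 <= k5, k6 <= 2*pi cover all rotations.
    Rz1 = np.array([[np.cos(k5), -np.sin(k5), 0.0],
                    [np.sin(k5),  np.cos(k5), 0.0],
                    [0.0, 0.0, 1.0]])
    Ry  = np.array([[ np.cos(k4), 0.0, np.sin(k4)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(k4), 0.0, np.cos(k4)]])
    Rz2 = np.array([[np.cos(k6), -np.sin(k6), 0.0],
                    [np.sin(k6),  np.cos(k6), 0.0],
                    [0.0, 0.0, 1.0]])
    return Rz1 @ Ry @ Rz2

def build_K(k1, k2, k3, k4, k5, k6):
    Lam = np.diag([k1, k2, k3])        # positive eigenvalues
    Q = rotation_zyz(k4, k5, k6)       # orthonormal eigenvectors
    return Q @ Lam @ Q.T               # symmetric positive-definite by construction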
A control law is derived by making the time-derivative of Eq. (<ref>), dV/dt=V̇, as negative as possible subject to Eq. (<ref>), i.e., we have
α̂^* = arg min_‖α̂‖=1V̇.
In Eq. (<ref>), V̇ can be found through the chain rule as,
V̇ = ∂ V/∂x∂x/∂ t + ∂ V/∂ t.
Since V does not explicitly depend on time, Eq. (<ref>) reduces to
V̇ = ∂ V/∂rv + ∂ V/∂v(a_kep + a_3rd + a_J_2 + α̂a_scδ_ecl).
It can be shown that Eq. (<ref>) is pointwise minimized by selecting the thrust steering vector as,
α̂^* = -(∂ V/∂v/‖∂ V/∂v‖)^⊤.
Because the transfer problem is being solved backwards in time, the sign of Eq. (<ref>) should be reversed to ensure V approaches 0 at the GTO departure time t_0. The resulting control law is
α̂^* = (∂ V/∂v/‖∂ V/∂v‖)^⊤.
Eq. (<ref>) is calculated in CasADi <cit.>, a symbolic framework that uses automatic differentiation. Note that if the transfer problem that departs at NRHO and arrives at GTO were solved, then Eq. (<ref>) would be used.
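A numerical stand-in for this control law is sketched below (Python/NumPy; the gradient ∂V/∂v is approximated by central finite differences purely for illustration, whereas the implementation in this work obtains it exactly through CasADi's automatic differentiation):

import numpy as np

def constraint_vector(r, v, mu, target):
    # target = [h_GTO, e_GTO, i_GTO]
    h_vec = np.cross(r, v)
    h = np.linalg.norm(h_vec)
    e_vec = ((np.dot(v, v) - mu / np.linalg.norm(r)) * r - np.dot(r, v) * v) / mu
    inc = np.arccos(h_vec[2] / h)
    return np.array([h, np.linalg.norm(e_vec), inc]) - target   # w(x)

def clf(r, v, mu, target, K):
    w = constraint_vector(r, v, mu, target)
    return 0.5 * w @ K @ w                                      # V(x)

def steering(r, v, mu, target, K, backward=True, eps=1e-7):
    grad = np.zeros(3)
    for i in range(3):
        dv = np.zeros(3); dv[i] = eps
        grad[i] = (clf(r, v + dv, mu, target, K) -
                   clf(r, v - dv, mu, target, K)) / (2.0 * eps)
    alpha = grad / np.linalg.norm(grad)
    return alpha if backward else -alpha    # sign flipped when propagating backwards in time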
Due to the very low control authority available from the low-thrust propulsion system (compared to the highly-perturbed dynamical model), the CLF time derivative, Eq. (<ref>), may become positive over one (or more) finite intervals, thereby not guaranteeing convergence of the system as stated by Lyapunov stability theory. However, this does not guarantee nonconvergence either. The results of Ref. <cit.> show that converged solutions can still be found as long as the CLF time derivative is negative almost everywhere except on a finite number of small intervals. This property is also observed to hold in our numerical results.
§ PARAMETER OPTIMIZATION PROBLEM
After deriving the LC law, the trajectory optimization problem in Eq. (<ref>) can now be stated as a parameter optimization problem (POP). The parameters considered are the NRHO insertion date, t_f, bounded by
t_f,lb≤ t_f ≤ t_f,ub,
the time of flight, Δ t, bounded by
Δ t_lb≤Δ t ≤Δ t_ub,
and finally the 6 CLF parameters in Eq. (<ref>), bounded by
0 < k_i ≤ k_ub, for i=1,2,3, 0 ≤ k_4 ≤π, 0 ≤ k_j ≤ 2π, for j=5,6.
These parameters are optimized using MATLAB's particle swarm optimization (PSO), a stochastic optimization algorithm that optimizes a scalar cost function subject only to bounds on the design variables. Under this parametrization, the time horizon in Eq. (<ref>) can be expressed as,
t ∈ [t_f,t_f - Δ t].
An important step in the resulting POP is to accurately solve the initial value problem (IVP) given by the set of ordinary differential equations (ODEs) in Eq. (<ref>) with the control law in Eq. (<ref>) and boundary condition given by Eq. (<ref>) over the time horizon in Eq. (<ref>), i.e.,
ẋ = f(t,x,α̂^*;K), x(t_f) = x_NRHO(t_f), t ∈[t_f,t_f-Δ t].
MATLAB's variable-step, variable-order nonstiff ODE integrator is used with absolute and relative tolerances of 1.0×10^-10. Our extensive numerical studies indicate that this integrator performed most efficiently at the prescribed tolerances compared with the rest of MATLAB's ODE integrators.
The integrator's event-detection capability is used extensively while solving Eq. (<ref>). The method works by monitoring the signs of M scalar event functions, e_m for m=1,…,M. When the m-th function changes sign, a regula falsi algorithm is used to find the precise location of the zero of e_m, and, if all the corresponding termination conditions are met, integration stops. In this paper, there are M=7 event functions, defined as follows,
e_1 = r - R_Earth - 200 km,
e_2 = ‖r - r_Moon‖ - R_Moon - 200 km,
e_3 = |h - h_GTO| - ϵ,
e_4 = |e - e_GTO| - ϵ,
e_5 = |i - i_GTO| - ϵ,
e_6 = S_ecl,Earth,1,
e_7 = S_ecl,Moon,1.
Eqs. (<ref>) and (<ref>) monitor whether the spacecraft has intersected a 200-km altitude shell above the surface of the Earth or Moon. If either of their signs changes, then integration stops and the cost function is appropriately updated and returned to PSO. Eqs. (<ref>), (<ref>), and (<ref>) monitor whether the solution has converged (i.e., whether orbit insertion has been achieved). If any one of their signs becomes negative while the other two are also negative, then integration stops and the cost function is appropriately updated and returned to PSO. The tolerance ϵ=1.0×10^-3 was chosen as it provides a balance between convergence speed and accuracy; however, future work should investigate using different convergence criteria.
Eqs. (<ref>) and (<ref>) are from Eqs. (<ref>) and (<ref>), respectively, and determine if the spacecraft is in an eclipse or not. If Eq. (<ref>) (resp. Eq. (<ref>)) changes sign while S_ecl,Earth,2 (resp. S_ecl,Moon,2) is negative, then integration is terminated. However, integration is then restarted from the same time and state. This logic is followed so that the discrete function in Eq. (<ref>) is modeled as accurately as possible.
In formulating the cost function, let t_end denote the final time returned by Eq. (<ref>) under the event-detection logic, i.e., t_end always satisfies t_f-Δ t≤ t_end≤ t_f. The first priority of the cost function is to ensure the solution converges. If a solution to Eq. (<ref>) does not satisfy the constraint below,
|w(x(t_end))|<ϵ,
then, the value of cost, J_1, defined as,
J_1=‖w(x(t_end))‖,
is returned. Because of the highly nonlinear dynamics in the vicinity of the Moon, it was found that solutions commonly intersect the surface of the Moon. Thus, if integration was stopped due to Eq. (<ref>) becoming negative, then,
J_2 = 1000‖w(x(t_end))‖,
is returned as the cost function to PSO as a penalization.
If Eq. (<ref>) is satisfied for a solution, but Eq. (<ref>) is not, then
J_3 = -(∑_n = 1^N max(d_ecl,n-90 minutes,0))^-1,
is returned as the cost function to PSO. Note that there appears to be a possibility of division by zero in Eq. (<ref>); however, the expression in Eq. (<ref>) is not evaluated if Eq. (<ref>) is satisfied, which eliminates this possibility. Also, Eq. (<ref>) is made negative to differentiate it from Eqs. (<ref>) and (<ref>), but inverted so that the violated eclipse durations are still minimized.
Finally, if Eqs. (<ref>) and (<ref>) are satisfied for a solution, then
J_4 = -(t_f-t_end)^-1,
is returned as the cost function to PSO. This cost is also made negative to differentiate it from Eqs. (<ref>) and (<ref>) and inverted so that time of flight is minimized. However, to ensure it is differentiated from Eq. (<ref>), it has units of 1/[years] while Eq. (<ref>) has units of 1/[seconds], so that they are on different orders of magnitude. Further, to ensure that minuscule eclipse violations are not interpreted as extremely low times of flight, if J_3<J_4 occurs for a converged solution, then Eq. (<ref>) is returned instead of Eq. (<ref>). While this allows for solutions with eclipses longer than 90 minutes to be deemed feasible, these solutions will only be infeasible by a duration on the order of a second. Note that the cost function used in this work is not continuous due to the logic involved; however, stochastic optimization algorithms, such as PSO, can deal with discontinuous cost functions.
Because LC laws are prone to extreme oscillations/chattering at the end of the maneuver, the step size of variable-step integrators can become minuscule and halt progress <cit.>. To overcome this issue, integration is stopped when a certain number of function evaluations is reached. A simple check is implemented inside the integration routine and, if triggered, Eq. (<ref>) is simply returned as the cost function to PSO. In this paper, 1.0×10^5 function evaluations were arbitrarily chosen and found to provide acceptable results; however, different values may further benefit the algorithm given that the number of iterations is a problem-dependent number. The event-detection logic and cost function values are summarized in Algorithm <ref>.
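The logic of the four cost values can be summarized in the following sketch (Python; a simplified paraphrase of Algorithm <ref>, in which the convergence test uses the norm of w rather than component-wise checks):

import numpy as np

SEC_PER_YEAR = 365.25 * 86400.0
ECL_MAX = 90.0 * 60.0                 # 90-minute eclipse limit, in seconds

def pso_cost(t_f, t_end, w_end, ecl_durations, hit_body, eps=1e-3):
    # w_end: constraint vector w(x(t_end)); ecl_durations: eclipse durations [s];
    # hit_body: True if an Earth/Moon-intersection event terminated integration.
    w_norm = np.linalg.norm(w_end)
    if hit_body:
        return 1000.0 * w_norm                       # J2: crashed, heavily penalized
    if w_norm >= eps:
        return w_norm                                # J1: did not converge to the GTO
    violation = sum(max(d - ECL_MAX, 0.0) for d in ecl_durations)
    J4 = -1.0 / ((t_f - t_end) / SEC_PER_YEAR)       # J4: feasible, minimize TOF (1/years)
    if violation > 0.0:
        J3 = -1.0 / violation                        # J3: eclipse limit exceeded (1/seconds)
        return J4 if J3 < J4 else J3                 # near-zero violations treated as feasible
    return J4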
§ RESULTS
The results are presented for a transfer problem with a fixed NRHO insertion date and then with a variable NRHO insertion date. These two cases are considered to demonstrate the impact of the insertion date and the types of solutions that can be achieved. The fixed NRHO insertion date was arbitrarily selected as t_f = 2026 DEC 06 00:00:00 UTC. This coincides with a state that is roughly at apolune on the NRHO. When t_f is allowed to vary, it is bounded between t_f,lb= 2026 NOV 06 00:00:00 UTC and t_f,ub= 2027 JAN 05 00:00:00 UTC, or, t_f,lb=t_f - 30 days and t_f,ub=t_f+30 days. The value of 30 days was selected because it is approximately equal to 1 period of the Moon's orbit around the Earth, giving a variety of phasing possibilities to the solution space. The time of flight for all cases was bounded by Δ t_lb=200 days and Δ t_ub=400 days. Simulations were performed on a 2023 MacBook Pro with the Apple M2 Pro chip, which allows for 10 “workers” in MATLAB to run PSO in parallel.
§.§ Fixed NRHO Insertion Date
For this transfer problem, PSO was run 5 times with a swarm size of 500 and a maximum time limit of 1 hour. The best run provided an eclipse-feasible solution, with a time of flight of Δ t= 321.17 days. The trajectory for this solution, in the J2000 ECI frame, is shown in Figure <ref>. The trajectory is also shown in the Earth-centered Sun-Earth rotating frame in Figure <ref> and in the Moon-centered Earth-Moon rotating frame in Figure <ref>. These frames are denoted by the unit vectors {x̂_ECR, ŷ_ECR, ẑ_ECR} and {x̂_MCR, ŷ_MCR, ẑ_MCR}, respectively, where the subscript `ECR' denotes Earth-centered rotating and `MCR' denotes Moon-centered rotating. Note that the legend in Figure <ref> also applies to Figure <ref> and Figure <ref>.
Figures <ref>, <ref>, and <ref> show the time histories of the specific angular momentum, eccentricity, and inclination, respectively. The time histories are plotted in the “forward sense of time,” i.e., the x-axis is 0 when the spacecraft is at the GTO and is Δ t when the spacecraft is at the NRHO. Figures <ref> and <ref> show the CLF value (Eq. (<ref>)) and the CLF time derivative value (Eq. (<ref>)), respectively, and are also plotted in the forward sense of time. Note that because this problem is being solved backwards in time, the sign of the control law was reversed, so, ideally, the function in Fig. <ref> is positive. It can be observed, though, that the CLF time derivative, in fact, becomes negative over multiple intervals. However, the trajectory still converges to the target orbit.
Figures <ref> and <ref> show the eclipse-triggered throttle factor and eclipse switching functions for the solution. Figure <ref> shows the duration of each eclipse. While included in the model, no eclipses due to the Moon occur in this solution. The first feasible solution from PSO in the same run was obtained in about 15 minutes and had a time of flight of Δ t= 332.72 days. Figure <ref> shows the duration of each of its eclipses. Comparing Figure <ref> and Figure <ref>, it can be interpreted for this particular case that PSO improves the time of flight by increasing the eclipse durations as much as possible.
§.§ Variable NRHO Insertion Date
For this transfer problem, PSO was run once with an increased swarm size of 1000 to account for the extra parameter, t_f, being optimized. The first eclipse-feasible solution was obtained after about 1 hour and 45 minutes. The NRHO insertion date is t_f= 2026 NOV 06 00:01:35 UTC with a time of flight of Δ t= 314.03 days. The best eclipse-feasible solution was obtained after about 3 hours and 30 minutes and had an NRHO insertion date of t_f= 2026 NOV 06 00:00:00 UTC and a time of flight of Δ t= 303.23 days. The NRHO insertion date of many of the eclipse-feasible solutions tends towards t_f,lb, suggesting that shifting t_f,lb and t_f,ub may provide better solutions. Figure <ref> shows the trajectory for the best solution in the J2000 ECI frame. Figure <ref> shows the duration of each of the eclipses for the solution. An interesting aspect of the solution is that Moon eclipses occur in this transfer, with one Moon eclipse, occurring very close to the NRHO insertion, having the limiting eclipse duration.
It is hypothesized that it took longer to reach convergence because of the sensitivity introduced by making t_f a decision variable. Small changes in t_f can potentially cause large changes in the initial conditions, x(t_f), depending on how close x(t_f) is to perilune. This means that changes to t_f itself would require changes to the other parameters for the solution to converge. However, in the current formulation, all parameters are being optimized at the same level. It is hard to fully characterize this hypothesis without performing many runs of PSO, since PSO is a stochastic optimization algorithm. One direction for future work is to investigate optimizing t_f using a bi-level optimization algorithm. Nonetheless, considering t_f as a design variable shows that it is possible to obtain solutions with a better time of flight within a new departure window. A potential use of our proposed formulation is to make the bounds t_f,lb and t_f,ub large to find a departure window over a wide range of time, or the bounds can be made small to improve optimality once an initial solution is obtained.
§ CONCLUSION
We presented a methodology for efficiently finding low-thrust spacecraft transfer trajectories under constant acceleration from a geostationary transfer orbit (GTO) to the near-rectilinear halo orbit (NRHO) earmarked for NASA's Gateway. The method is based on a closed-loop control law derived from a novel parameterization of quadratic control-Lyapunov functions. This control law is applied in a backward-in-time sense to generate solutions departing from the GTO and inserting into the NRHO. Solutions may also be obtained that depart from the NRHO and insert into the GTO.
To solve the resulting trajectory optimization problems, the parameters of the control law, time of flight, and NRHO insertion date are all optimized simultaneously with a stochastic optimization algorithm – particle swarm optimization (PSO). The cost function is designed to prioritize 1) convergence to the target orbit, 2) satisfaction of the constraint that all eclipse durations be less than 90 minutes, and 3) minimization of the time of flight. Results indicate that eclipse-feasible solutions can be obtained within tens of minutes with the processing power of a personal laptop computer. Solutions obtained can serve as high-quality initial guesses to NASA's high-fidelity trajectory optimization tools such as Copernicus and GMAT.
§ ACKNOWLEDGMENT
We would like to acknowledge Saeid Tafazzol for his useful suggestions and discussions on efficiently parameterizing n-dimensional positive-definite matrices.
|
http://arxiv.org/abs/2409.03133v1 | 20240904235556 | Explicit Asymptotic Solutions of $ν_e + e^-$ Neutrino Networks for Large Sets of Partial Differential Equations in Core-Collapse Supernovae | [
"Raghav Chari"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.SR",
"hep-ph"
] |
|
http://arxiv.org/abs/2409.03055v1 | 20240904200453 | SymPAC: Scalable Symbolic Music Generation With Prompts And Constraints | [
"Haonan Chen",
"Jordan B. L. Smith",
"Bochen Li",
"Ju-Chiang Wang",
"Janne Spijkervet",
"Pei Zou",
"Xingjian Du",
"Qiuqiang Kong"
] | cs.SD | [
"cs.SD",
"eess.AS"
] |
SymPAC: Scalable Symbolic Music Generation With Prompts And Constraints
Haonan Chen, Jordan B. L. Smith, Bochen Li, Ju-Chiang Wang, Janne Spijkervet, Pei Zou, Xingjian Du, Qiuqiang Kong
========================================================================
§ ABSTRACT
Progress in the task of symbolic music generation may be lagging behind other tasks like audio and text generation, in part because of the scarcity of symbolic training data.
In this paper, we
leverage the greater scale of audio music data
by applying pre-trained MIR models (for transcription, beat tracking, structure analysis, etc.) to extract symbolic events and encode them into token sequences.
To the best of our knowledge, this work is the first to demonstrate the feasibility of training symbolic generation models solely from auto-transcribed audio data.
Furthermore, to enhance the controllability of the trained model, we introduce SymPAC (Symbolic Music Language Model with Prompting And Constrained Generation), which is
distinguished by using
(a) prompt bars in encoding and (b) a technique called Constrained Generation via Finite State Machines (FSMs) during inference time.
We show the flexibility and controllability
of this approach,
which may be critical in making music AI useful to creators and users.
§ INTRODUCTION
The success of language models — especially large ones — has demonstrated that with
more data and larger models,
using a simple language model objective can endow a model with powerful natural language generation capabilities.
On the other hand, although symbolic music and natural language share many similarities,
no music model has yet seemed to match the capabilities of generative text models.
One reason for this gap is the insufficient amount of symbolic music data.
To address this, previous efforts in symbolic music generation have involved combining limited manually annotated data with data obtained by automatic transcription <cit.>, or collecting private symbolic training datasets <cit.>.
By contrast, in this work, we demonstrate that a high-quality, multi-track symbolic music generation model can be trained just using results from running Music Information Retrieval (MIR) models on audio music data.
In this way,
our framework eliminates the need for manually annotated symbolic music data, allowing for expansion purely through audio datasets.
On the other hand,
there has been a recent surge of efforts
that directly generate the auditory modality of music <cit.>.
This is useful for some applications, but typically precludes fine-grained control and
editing the outcome,
which is crucial for composers who wish to shape their musical ideas precisely.
In contrast, outputting symbolic data gives composers the ability to interactively shape and modify their musical ideas.
Considering such advantages, the problem of how to integrate user input to control the generation of symbolic music has been a popular research topic. In previous works, two methods for incorporating control signals are usually used. The first approach is based on a Variational Autoencoder (VAE) <cit.>, wherein the control is exerted within the VAE's latent space.
The second approach is to embed control information directly into the encoding of symbolic music and implant control inputs during inference <cit.>.
In this work, we introduce the SymPAC framework (Symbolic Music Language Model with Prompting And Constrained Generation),
designed to work with decoder-only language models to enable user input controls.
The SymPAC framework consists of the following two parts.
First, inspired by the prompting mechanism used in the natural language domain <cit.>, we introduce prompt bars in our symbolic music encoding, which consolidates all control signals into a separate prompt section before encoding the actual musical notes.
This design is essential for a decoder-only language model to have the full context of control signals during the generation of music.
Second,
in the controlled symbolic music generation setting, the generated tokens should not only comply with the encoding grammar but also adhere to user inputs.
Thus we propose to use Constrained Generation via Finite State Machines (FSMs), which constrains the sampling of tokens at each time step to a subspace.
We will discuss the advantages of SymPAC over previous methods in Section <ref>, and provide more details of how SymPAC can be used for various types of user inputs in Sections <ref> and <ref>.
We collected roughly
one million in-house audio samples and extracted MIR information for each, using
pre-trained models for beat tracking <cit.>, chord detection <cit.>, section detection <cit.>, multi-track transcription <cit.>, and music tagging <cit.>.
The MIR results were transformed into various tokens, and then integrated into an extended REMI <cit.> encoding to train a language model based on Llama <cit.> architecture.
To summarize, our main contributions are:
Scalability: We demonstrate that a high-quality symbolic music generation model can be trained solely with transcribed data, without the need of manually annotated symbolic music, and can be scaled by amassing more audios.
Controllability: We propose the SymPAC framework, which enables flexible user input controls on a decoder-only language model while retaining good quality.
§ RELATED WORK
§.§ Training Data For Symbolic Music
In Table <ref>, we summarize some popular music datasets in the symbolic and audio domains, together with our in-house audio dataset, and compare their sizes.
The Lakh MIDI Dataset <cit.> is one of the biggest public datasets, containing 170K multitrack pieces in MIDI format.
Many researchers use publicly available symbolic music datasets for training, but some collect and use large-scale ones that are not disclosed; e.g., MusicBERT <cit.> was trained on the Million-MIDI Dataset (MMD).
Although the combined size of the public datasets in Table <ref> is large, combining them is not straightforward since they vary in format.
For example, the Maestro dataset consists of transcriptions of piano performances where note timings reflect actual performance timings, whereas datasets like Lakh are quantized to metrical time with alignment to beats. The inclusion of instrument tracks and additional information (e.g., chords, sections) also differs between datasets.
To expand the scale of training data by combining these datasets, it is necessary to unify their formats first, which may be tedious and introduce errors.
On the other hand, publicly available audio datasets are much larger in scale. The Million Song Dataset (MSD) <cit.>, for example, contains 1M songs, or 709M notes in total after being run through a 5-track transcription model <cit.>.
The recently published DISCO-10M <cit.> is of an even larger scale.
Furthermore, by using a single set of MIR models to annotate all the audio data,
we do not need to be concerned about the issue of inconsistent data formats.
This makes it easier to scale up the training dataset.
§.§ Encoding For Symbolic Music
Since the introduction of the Music Transformer <cit.>, language models based on the transformer architecture have become a popular choice for symbolic music generation.
One of the most critical research questions has been how to encode symbolic music that is amenable to processing by such a model, which, in the context of language models, involves converting the piece into a sequence of tokens.
Early transformer-based models for symbolic music predominantly employed a MIDI-like encoding scheme, by treating MIDI event sequences almost identically as input token sequences <cit.>.
Later, the Revamped MIDI (REMI) encoding <cit.> was proposed, which modified the MIDI encoding by replacing time shift events with duration events for each note and introducing bar and beat concepts to adopt metrical time instead of absolute time. These modifications facilitated the model's learning of rhythmic patterns within the music,
improving the quality of the output.
Building upon REMI, several extensions have been proposed to support encoding multitrack <cit.> and various control tokens <cit.>.
Our work is based on the multitrack REMI encoding, and given the MIR models we have,
it incorporates
control tokens such as genre, chord, and section tokens to the encoding.
§.§ Controllable Symbolic Music Generation
Previous methods for controlling symbolic music generation have typically fallen into two categories.
The first is based on Variational Autoencoders (VAEs) <cit.>.
VAEs aim to find a latent space for representing music that encodes distinct musical attributes in independent dimensions. This disentanglement allows for specific attributes of generated music (e.g., rhythm, genre, or timbre) to be individually manipulated by altering corresponding dimensions in the latent space without affecting other attributes, thereby enhancing the controllability of music generation.
The second approach is to include control tokens in the encoding of symbolic music. For example, MMM <cit.> includes instruments and note density tokens in the encoding, which can be specified at inference. Similarly, FIGARO <cit.> uses “expert descriptions”
indicating time signature, note density, mean pitch, mean velocity and mean duration as well as instruments and chords.
It then uses an encoder-decoder model to learn a mapping from descriptions to sequences of a piece of music.
Driven by the development of Large Language Models (LLMs), recent work has also explored using natural language to control symbolic music generation <cit.>. Natural language text can also be treated as control tokens, with the key distinction that it usually requires pre-training the LLM on text.
In our work, the proposed SymPAC framework is designed to work with a decoder-only language model.
In a controlled generation setting, prompt bars that conform with user input control signals are generated first. The generation of musical part comes after that, in which the model will have full context of control signals from prompt bars. These two generation stages are both controlled by an FSM, which takes into consideration the grammar of the encoding and user inputs.
There are two main
differences between SymPAC and previous works:
* We encode control signals as tokens and use FSM to enforce input control signals during inference.
In contrast, for VAE-based control methods, control signals are converted into latent embeddings, and the model is not guaranteed to follow these control signals.
* Since we use a decoder-only language model, the tokens in prompt bars are also learned simultaneously.
Consequently, the user is only required to input a portion of the control information, with the model being able to automatically generate missing controls.
In contrast, an encoder-decoder framework like the one described in <cit.> would require a complete encoder input during inference, which lacks flexibility.
§ METHOD
§.§ Symbolic Music Encoding And Prompt Bars
Our data representation is based on the REMI+ <cit.> representation, an extension of REMI <cit.> that supports multitrack data.
An illustration of our encoding is shown in Fig. <ref>.
The fundamental unit of our encoding is a bar, of which there are two types: prompt bar and song bar.
The token sequence of a song bar can be divided into four parts:
* The meta part includes four tokens for the , , (for section type name), and (which indicates the tempo range).
* The chord part consists of alternating and tokens.
* Each instrument track part consists of a token, followed by one or more groups of , and tokens.
* The drum track part consists of a token,
followed by one or more groups of and (drum MIDI) tokens.
Here are further explanations of , and tokens[Details of all token types are provided in supplementary materials]:
* : Represents the starting position of subsequent , or token within a bar. Each bar is divided into 16 steps, so that position ranges from 0/16 to 15/16.
* : Ranges from the minimum time division of 1/16 bar to a maximum of 2 bars, or 32/16.
* : A track token will only exist if there is at least one note in the bar for the corresponding instrument.
This allows the user to control which instruments are used within a bar.
Prompt bars contain a subset of tokens in song bars, retaining only tokens that represent control signals.
In our case, these include genre, section, tempo, chords and tracks.
As future work, this encoding could be extended to include more control signals (e.g. note density for a track).
The encoding of a full piece of music will consist of: all prompt bars in the piece; then, a special token; then, all song bars in the piece; and finally a special token.
During the training stage, the model is trained to predict tokens in prompt bars as well,
not distinguishing them from
tokens in song bars.
As mentioned previously, this design enables the user to input partial control signals (or no input at all), and the model is able to infer the missing ones.
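The literal token names are not legible in this version of the text, so the following sketch uses hypothetical token names purely to illustrate the layout of prompt bars, the separator, song bars, and the end token:

# Hypothetical token names; only the overall layout follows the encoding described above.
prompt_bar = ["Genre_Rock", "Section_Verse", "Tempo_120-130",
              "Chord_Pos_0/16", "Chord_C:maj", "Chord_Pos_8/16", "Chord_G:maj",
              "Track_Bass", "Track_Drums"]
song_bar = ["Genre_Rock", "Section_Verse", "Tempo_120-130",
            "Chord_Pos_0/16", "Chord_C:maj",
            "Track_Bass", "Pos_0/16", "Pitch_36", "Dur_4/16",
            "Track_Drums", "Pos_0/16", "DrumPitch_36"]
sequence = prompt_bar + ["<song_start>"] + song_bar + ["<end>"]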
§.§ Constrained Generation via FSM
In the controlled symbolic music generation setting, there are two types of constraints:
Grammar constraint: The encoding of symbolic music follows a specific format. For example, for our proposed encoding shown in Fig. <ref>, a token will always be followed by a token.
User input constraint:
The generated token sequence should conform to the user inputs.
For example, if the user wants to generate “rock” style music, the token can only be .
Since we are already aware of these constraints in advance,
there is no need to sample from the entire vocabulary space during inference.
Instead, we can sample from a subspace that is in accordance with the constraints.
To achieve this, we employ a Finite State Machine (FSM) to interact with the language model ℳ during inference. Let x_t denote the token generated by ℳ at time step t. The FSM takes x_t, the current state q_t and the predetermined rule set ℛ, and outputs a subset of the vocabulary 𝒱_t+1, from which the language model ℳ can sample at time t+1. We call this procedure Constrained Generation via FSM, which is formally defined in Algorithm <ref>.
This algorithm is analogous to regular expression matching, where it checks if a given input string conforms to a specified pattern.
Here the pattern and input string are equivalent to rule set ℛ and token sequence s_t respectively.
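A minimal sketch of this procedure is given below (Python/NumPy; the model and fsm interfaces are assumed placeholders, and sampling is shown with a plain softmax over the masked logits):

import numpy as np

def constrained_generate(model, fsm, max_len):
    # At each step the FSM returns the subset of the vocabulary consistent with
    # the encoding grammar and the user inputs; the model's logits are masked to
    # that subset before sampling.
    tokens, state = [], fsm.initial_state()
    for _ in range(max_len):
        allowed = fsm.allowed_tokens(state)          # V_{t+1}, a set of token ids
        logits = np.asarray(model.next_token_logits(tokens), dtype=float)
        mask = np.full_like(logits, -np.inf)
        mask[list(allowed)] = 0.0
        shifted = logits + mask - np.max(logits + mask)
        probs = np.exp(shifted)
        probs /= probs.sum()
        x = int(np.random.choice(len(probs), p=probs))   # sample within the subset
        tokens.append(x)
        state = fsm.next_state(state, x)             # advance the FSM with the new token
        if fsm.is_terminal(state):
            break
    return tokens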
§ EXPERIMENTS AND RESULTS
To validate our contributions, we conduct experiments to assess whether the system is scalable (i.e., improves when scaling up training data) and controllable (i.e., there is consistency between generation output and user inputs).
In Sec. <ref>, we conduct a quantitative analysis to compare models trained on different amounts of training data, in order to assess scalability.
In Sec. <ref>, we examine two common types of control inputs: chord progression and section structure.
The impact of these control inputs is tested through both quantitative metrics and qualitative examples.
Lastly, in Sec. <ref>, we compare our models trained on different datasets with other baseline symbolic music generation systems through subjective evaluation.
§.§ Datasets
We use three datasets in our experiments. We always use each dataset individually; i.e., we never merge the datasets to train a single model. The datasets are:
Lakh MIDI Dataset (LMD) <cit.>. A dataset in MIDI format, containing around 170K songs.
We use this to compare with models trained on transcribed audio data.
Million Song Dataset (MSD) <cit.>. A public dataset used extensively by MIR researchers. We use the
30–60s preview audio clips, representing the highlight of the song.
In-House Dataset (IHD). We use a licensed internal collection with about 1M
full songs in audio format, covering a wide range of Western modern genres.
§.§ Training Settings
We train a decoder language model with the Llama <cit.> architecture. We set the number of layers, number of attention heads and embedding dimensions to be 12, 12 and 768 respectively, resulting in a model with about 86M trainable parameters.
We concatenate token sequences of all pieces into a 1-D array, and randomly pick a window of size 10,240 as one training sample. As the average sequence lengths of LMD, MSD and IHD are 900, 1500 and 8000 respectively, this window size would contain 11.4, 6.8 and 1.3 pieces on average for each dataset.
When training data are limited, data augmentation and data filtering (to ensure that unusual data do not pollute the training) are commonly used.
However, we adopt neither approach, for two reasons.
First, since we have a large dataset of audio samples, the training data are likely to cover a broad spectrum of examples already, reducing the need to filter out unusual data points.
Second, augmentation may alter the training data in unwanted ways.
For example, a common augmentation approach is to transpose all the pitches in a piece <cit.>.
However, this may distort the pitch ranges of each instrument: e.g., if the input bass parts are transposed up and down, the model will not learn the correct range of realistic bass notes.
§.§ Unconditioned Generation
Intuitively, increasing the amount of data should enhance the performance of the model.
In this experiment, we use objective metrics to validate this.
Designing objective metrics to evaluate symbolic music remains an open question.
A common approach is to prepare a reference dataset,
calculate embeddings or metrics
of the generated samples and reference set,
and then compare these using distance metrics such as the Fréchet Distance or Kullback-Leibler Divergence (KLD).
For a detailed review on evaluation methods for symbolic music, see <cit.>.
In our experiment, we prepare a held-out validation set with 3000 samples.
We use a range of metrics that can be categorized into the following classes: chord, structure, instrument note (including vocals, guitar, piano and bass) and drum note.
Detailed definitions are provided in supplements. In general, the metrics in each class are as follows:
* Chord: chord label, chord root, chord transition;
* Structure: section label, section label bigram, instrument labels per bar;
* Instrument Note: note pitch, note duration, pitch class, min/max pitch per bar, max number of notes per bar, uniformity of number of notes per bar;
* Drum Note: drum key, max number of notes per bar, uniformity of number of notes per bar, and unique drums per bar.
We compare models trained on 100%, 10% and 1% of the IHD data,
and do generation in an unconditioned setting.
For each model, we generate 800 samples to compute metric distributions.
KLD values are then computed between distributions of generated samples and distribution of the validation set for each metric.
Lower KLD indicates that two distributions are closer, suggesting the generated samples sound more similar to the validation set.
We report the average KLD values for the same class, and provide a full list of KLDs for each metric in supplements.
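For reference, the KLD between the generated and validation distributions of a single scalar metric can be estimated from histograms roughly as follows (a sketch; the binning, smoothing constant, and direction of the divergence are assumptions, since the exact recipe is given in the supplements):

import numpy as np

def kld_from_samples(generated, reference, bins):
    # KL divergence D(P_gen || P_ref) between histogram estimates of one metric.
    # A small epsilon keeps empty reference bins from producing infinities.
    eps = 1e-6
    p, _ = np.histogram(generated, bins=bins)
    q, _ = np.histogram(reference, bins=bins)
    p = p.astype(float) + eps
    q = q.astype(float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))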
The results are shown in Table <ref>. We can see that the model trained with 100% IHD data has the lowest KLD against the validation set on 6 out of 7 classes, and the model trained on only 1% data has the highest KLD on 6 out of 7 classes.
The results confirm that a model trained on more data can generate samples closer to the training data.
Furthermore, we observed that
the benefit of using more data is greater for the 'Note' metrics than for the 'Chord' or 'Structure' ones.
This is likely because note tokens are more numerous and have more complex distributions, which require a larger amount of data to learn.
Counterintuitively, the KLD for 'Structure' was better when using 10% of the data instead of 100%. We speculate that since the structure tokens are scarcest, this could be the result of a lucky alignment between the validation set and the 10% of the data used, but this deserves more study.
§.§ Controlled Generation
The SymPAC framework aims to give users flexible control over the music generation process. However, we need to verify that this control is effective: do the notes generated agree with the control inputs?
To this end, we conduct controlled generation experiments on two input scenarios: chord progression inputs and section structure inputs.
Chord Progression Inputs.
In this experiment, we randomly pick 20 top trending chord progressions from HookTheory[https://www.hooktheory.com] as the chord progression inputs.
We only include major and minor triad chords.
We then let the model generate 64 bars of music by looping the chord progressions.
To evaluate the match between the input chord progression and the output, we apply a symbolic chord detection method to the generated samples. Details of the method are provided in the supplementary materials.
The accuracy of the detected chords with respect to the input chord progression is shown in Table <ref>.
As shown in the result, the models trained on MSD, IHD 100% and IHD 10% all have similar overall accuracy, with MSD slightly out-performing the others. But the model trained on IHD 1% (just 10K songs) is much worse than the other three. This suggests that a dataset at the scale of 100K songs is sufficient to model low-level control signals like chord, given the model and encoding we are using here.
We also provide examples in supplementary audios of outputs when given unusual chord progressions.
Section Structure Inputs.
In this experiment, we take 10 typical section sequences as inputs (listed in supplements), ranging in length from 4 to 13 sections (16 to 68 bars), and use each model to generate 100 outputs per prompt.
We compare the same 4 models from the previous section.
For each generated output,
we leverage a Music Structure Analysis (MSA) algorithm <cit.> to predict its structure, and compare this to the input structure.
The MSA algorithm's predictions may be inaccurate, but we still expect that a greater match between the intended and estimated structure indicates more success at controlling the structure.
We use Foote's algorithm <cit.> for segmentation and the 2D-Fourier magnitude algorithm <cit.>
for section labeling, with a beat-wise feature embedding that averages the pitch-wise MIDI piano rolls within a beat interval. We evaluate the results using <cit.>, and report three metrics: boundary prediction f-measure with a 3-second tolerance (HR3F); pairwise clustering f-measure (PWF); and the normalized entropy score f-measure (Sf). To test directly how similar the repeated sections are, we also report PWF and Sf when the ground-truth segmentation is used.
We find that all metrics are worse (lower) when the system is trained on MSD or IHD 1%, and improve substantially when at least 10% of the data are used (Table <ref>). This is expected, since the audio clips in MSD are only excerpts and thus not instructive for modelling full-song structure.
Fig. <ref> shows the piano roll
of a typical output, where the match between the intended and predicted structure was average (Sf = 0.508).
Even so,
the match between the intended and realized structure is evident in the piano roll: the chorus sections are similar but not identical to each other, and so are the verse sections.
In both controlled generation experiments, the gap between 100% and 10% IHD is very small, indicating that 10% IHD data combined with SymPAC is sufficient for achieving good adherence to control inputs. However, it is important to remember that the metrics of these two experiments only reflect whether the control signals are well-followed, not the overall quality of the generated pieces.
§.§ Subjective Evaluation
The models tested so far were all trained on transcribed audio data, so it is worth comparing
with models trained directly on MIDI data.
In this experiment, we compare our model trained on LMD, MSD and IHD, and also two baselines, FIGARO <cit.> and MMT <cit.>, in a subjective listening test.
We recruited 12 participants with the background of MIR researchers or music producers.
Similar to <cit.>, we asked each participant to rate 10 audio samples generated by each model on a 5-point Likert scale on five criteria: coherence, richness, arrangement, structure and overall [These criteria are described as: (1) Coherence: The rhythm is stable; The chord progression develops logically; Dissonant notes are not excessive. (2) Richness: The melody and acccompaniment are interesting and diverse.
(3) Arrangement: Collaboration among multiple instruments is harmonious and natural; Arrangements for different instruments are diverse and reasonable.
(4) Structure: The piece includes a clear and engaging structure with appropriate repetitions and variations; The piece has obvious connections and reasonable developments between sections.
(5) Overall: I like this piece in general.
].
The result is summarized in Table <ref>. All of our
proposed models outperform the baselines in all dimensions.
Our model trained on IHD has higher performance than the other two training data setups,
which attests to the viability of leveraging audio data by running MIR models at scale.
The result using LMD was better than MSD, despite having fewer songs;
this could be due to LMD having more notes than MSD (see Table <ref>), or due to it containing full songs instead of only excerpts.
We only compare FIGARO and Ours (LMD) with a statistical test, since these were trained on the same dataset. Mann-Whitney U tests found significant differences in Richness (p = .005), Structure (p = .0005), and Overall (p = .027) ratings, but not in Coherence (p = .85) or Arrangement (p = .122).
§ CONCLUSIONS AND FUTURE WORK
We trained a language model for symbolic music generation leveraging audio data and pre-trained MIR models.
We proposed the SymPAC framework, which includes prompt bars in encoding and Constrained Generation via FSM during inference time.
We showed how combining these two components enables a user to control the generation process, and we evaluated the results through quantitative and qualitative analysis.
Future work could improve at least two aspects of this system:
(1) We quantized position and duration to 1/16 of a bar, which does not support 3/4 or 6/8 time signatures well. Also, the chord detection model we used only supports 12 major and minor chords, limiting the user input options. We can expand the encoding to support finer-grained quantization and more advanced chords.
(2) Our token sequences are long: 8,000 tokens on average for samples in IHD.
We could use tokenization methods such as Byte Pair Encoding <cit.> or use compound word tokens <cit.> to compress sequences and improve training efficiency.
|
http://arxiv.org/abs/2409.02275v1 | 20240903202038 | Laser cooling a centimeter-scale torsion pendulum | [
"Dong-Chel Shin",
"Tina M. Hayward",
"Dylan Fife",
"Rajesh Menon",
"Vivishek Sudhir"
] | quant-ph | [
"quant-ph",
"physics.atom-ph",
"physics.optics"
] |
§ ABSTRACT
Torsion pendula have long been pivotal in the study of classical gravitational forces. Experimental tests of gravity’s fundamental nature call for mechanical systems in the quantum regime while being sensitive to gravity.
We laser cool a centimeter-scale torsion pendulum to a temperature of 10 mK
(average occupancy of 6000 phonons) starting from room temperature (equivalent to 2·10^8 phonons).
This is achieved by optical radiation
pressure forces conditioned on a quantum-noise-limited optical measurement of the pendulum's angular displacement
with an imprecision 13 dB below that at the standard quantum limit (SQL).
The measurement sensitivity is the result of a novel `mirrored' optical lever that passively rejects extraneous
spatial-mode noise by 60 dB.
The high mechanical quality (10^7) and quantum-noise-limited sub-SQL measurement imprecision demonstrate the
necessary ingredients for realizing the quantum ground state of torsional motion — a pre-requisite for
mechanical tests of gravity's alleged quantum nature.
Laser cooling a centimeter-scale torsion pendulum
Vivishek Sudhir
September 9, 2024
=================================================
Introduction.
Torsion pendula have long been pivotal in the measurement of weak fundamental forces, most notably
in establishing the electrostatic inverse-square law <cit.>,
precision measurements of classical gravitational forces <cit.>,
tests of the equivalence principle <cit.>,
and the first observation of radiation pressure torque <cit.>.
In all these experiments, the torsion pendulum is employed as a sensor for a weak classical force.
Recent interest in observing gravity's alleged quantum nature calls for
experiments where gravitationally attracting macroscopic mechanical oscillators are simultaneously prepared in
quantum states of their motion <cit.>. Torsion pendula are particularly suited for
such experiments on account of the low thermal Brownian noise of torsional suspensions even
when mass-loaded <cit.>, and well-understood techniques for isolating gravitational interaction
between them even with masses as small as 100 mg <cit.>.
However, in contrast to the mature array of techniques available for quantum-limited measurement and control of
linear motion within the field of cavity optomechanics <cit.>,
experimental realization of similar techniques has remained
elusive for torsional motion.
In this letter we demonstrate laser cooling of a centimeter-scale high-quality torsion pendulum using a
novel “mirrored optical lever” whose quantum-noise-limited sensitivity of 10^-12 rad/√(Hz)
is 13 dB below the zero-point motion of the pendulum. Conditioned on this measurement, we apply optical radiation
pressure torque on the pendulum so as to cool its angular motion to 10 mK from room temperature, corresponding to an average phonon occupation of 5964±39 (starting from ∼ 2×10^8).
In the following, we describe the torsion pendulum, the mirrored optical lever used to measure its angular motion,
its performance in terms of classical noise cancellation, its calibration, and utility in measurement-based
feedback cooling of torsional motion.
High-Q centimeter-scale torsion pendulum.
A remarkable advantage of torsional pendula in studies of gravity is that their suspensions can be realized with
exceptionally high mechanical quality factor (Q) even when mass-loaded <cit.>.
The reason is two-fold <cit.>: in a doubly-clamped bifilar (or ribbon-shaped) torsional suspension,
tensile stress leads to dilution of the intrinsic mechanical dissipation, and the bifilar geometry
is naturally soft-clamped so that loss at the clamps, and in the suspended mass, is suppressed.
Thus the intrinsic quality factor, Q_int, is elevated to <cit.>
Q_0 = Q_intD_Q where the dissipation dilution factor is D_Q ≈ (σ/2E)(w/h)^2;
here E and σ are Young's modulus and tensile stress, and w and h are the width and
thickness of the ribbon.
This is in marked contrast to tensile-stressed mass-loaded linear oscillators, where bending curvature due to the
loaded mass <cit.> erases the advantage from dissipation dilution.
This capability positions a macroscopic high-Q torsion pendulum as a unique candidate for both reaching
its motional ground state and for gravitational experiments.
We fabricated a doubly-clamped 0.9-cm long thin-film (w=0.5mm, h=400nm)
tensile-stressed (σ = 0.8GPa) torsion pendulum made of
stoichiometric Si_3 N_4.
The device was fabricated starting from a double-sided Si_3N_4-on-Si
wafer followed by lithography and reactive ion
etching. A second aligned lithography and etch created the window for optical access from the back-side.
The device was released using a 24-hour etch in potassium hydroxide. The sample underwent meticulous cleaning with acetone, isopropyl alcohol, deionized water, and oxygen plasma (see SI for more details).
The specific device used in the current study has its fundamental torsional mode resonating at
Ω_0 = 2π·35.95kHz with quality factor Q = 1.4· 10^7 inferred by ringdown measurements
(in vacuum, at 6×10^-7 mbar) as shown in <ref>(c).
The measured Q is consistent with the expected dilution factor D_Q ≈ 2300.
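A quick consistency check of the quoted dilution factor, using the ribbon parameters above and an assumed Young's modulus for stoichiometric Si_3N_4 (not quoted in the text), is shown below (Python):

# D_Q ~ (sigma / 2E) * (w / h)^2 with the ribbon parameters quoted above.
sigma = 0.8e9      # Pa, tensile stress
E     = 270e9      # Pa, assumed typical value for stoichiometric Si3N4
w     = 0.5e-3     # m, ribbon width
h     = 400e-9     # m, ribbon thickness
D_Q = (sigma / (2.0 * E)) * (w / h) ** 2
print(D_Q)         # ~2.3e3, consistent with D_Q ~ 2300 and Q_0 = Q_int * D_Q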
Principle of mirrored optical lever. To achieve quantum-limited readout of angular motion,
we devised a `mirrored' optical lever. Its primary
advantage is passive rejection of classical noises arising from the laser beam's transverse displacement and tilt.
Suppose the output of a laser is predominantly in the fundamental Hermite-Gaussian mode with
amplitude a̅ = √(P/ħω_ℓ) (P the optical power, ω_ℓ =
2π c/λ the carrier frequency for a wavelength λ≈1064nm), then the optical field
can be expressed as <cit.> Ê_in(r,t) = (a̅ + a_00(t)) U_00(r)
+ ∑_(n,m)≠(0,0)a_nm(t) U_nm(r). Here U_nm(r) is the (n,m)-Hermite-Gaussian (HG_nm)
basis function (see SI); the operators {a_nm(t)} represent
fluctuations in the laser field, which in the ideal case, when the field is quantum-noise-limited,
model the quantum vacuum fluctuations in HG_nm mode.
If this incident field is subjected to a transverse displacement δr = (δ x,0,0) and angular tilt
δθ at its beam waist, the optical field is transformed to
Ê_out(r,t) ≈Ê_in(r,t) + a̅(δ x/w_0 +
ikw_0δθ/2)U_10(r),
where w_0 is the waist size.
That is, transverse or angular motion scatters light from the incident HG_00 mode to the
HG_10 mode in proportion to the motion.
When the beam waist is placed at the location of the torsion pendulum, its motion θ_sig(t)
is experienced twice by the reflected optical beam, i.e. 2θ_sig(t);
the incident beam may also have extraneous classical
noises in the transverse displacement (x_ext) and tilt (θ_ext).
When the reflected beam is detected by a split balanced photodetector (SPD) downstream, the resulting
photocurrent fluctuations are described by its (symmetrized single-sided) power spectral density
(see SI, we neglect correlations between the tilt and transverse displacement <cit.>)
S_I[Ω] = 2/π(2η R P k w_0 sinζ)^2
[S_θ^sig[Ω] + 1/4S_θ^ext[Ω]
+ (cotζ/(k w_0^2))^2 S_x^ext[Ω]
+ (π/2)/(2η (a̅k w_0)^2) csc^2ζ],
where R is the responsivity of the detector, η the detection efficiency,
and ζ is the Gouy phase shift from the torsion pendulum to the SPD.
The last three terms represent the imprecision in the measurement arising from extraneous tilt noise
(S_θ^ext), extraneous transverse displacement noise (S_x^ext), and the fundamental
imprecision from quantum vacuum fluctuations (from all odd-order HG modes that the SPD is sensitive to;
see SI).
Thus, quantum-noise-limited detection of torsional motion is only possible if extraneous
noise in the spatial mode of the laser beam is suppressed.
Our mirrored optical lever technique (shown in <ref>a,b) cancels classical spatial mode noise
to achieve quantum-noise-limited detection.
Specifically, the laser beam is split by a polarizing beam splitter (PBS) into two paths: one arm to the torsion pendulum, and the other to a retroreflector.
A corner cube retroreflector produces a mirror-image of the laser beam's transverse displacement and tilt,
i.e. δâ_10→ -δâ_10 after interacting with the retroreflector
(<ref>(b)(i) shows the effect of this on input noise), whereas the torsion pendulum induces
pure tilt (<ref>(b)(ii)).
The fields from the two arms, after double-passing quarter-wave plates, are recombined at the PBS and subsequently detected by the
SPD, with the Gouy phase controlled using a lens.
Careful balancing of the power and Gouy phase in the two arms ensures arbitrarily good cancellation
of extraneous noises (acquired by the laser beam before the PBS, <ref>(b)(iii)) and
transduction of torsional motion into transverse displacement at the SPD (<ref>(b)(iv)).
Thus, for the mirrored optical lever
S_I[Ω] = 2/π(2η R P k w_0 sinζ)^2 [S_θ^sig[Ω]
+ (π/2)/(η (a̅k w_0)^2) csc^2ζ],
immune to classical noises and thus quantum-noise-limited in its imprecision (see SI for more details).
The trade-off is double the quantum noise from the mirrored beam.
In contrast to interferometric detection of linear motion, our scheme is immune to laser phase noise;
yet, the mirrored optical lever is qualitatively similar to homodyne detection: the quantum-noise-limited imprecision
decreases inversely with optical power (a̅∝ P) and the Gouy phase plays the role of the
homodyne angle. We place the SPD at Gouy phase ζ = π/2 which maximizes the angular signal.
Performance and calibration of mirrored optical lever.
In order to investigate the sensitivity of the mirrored optical lever, we first operate it with a flat mirror,
instead of the torsion pendulum, in the signal arm.
Independent calibration of the SPD voltage into angular motion is crucial for further investigation.
(Note that the naive geometric estimate of the angular displacement from the measured optical lever arm does not
hold beyond the Rayleigh length.)
We perform direct calibration against a known frequency modulation. To wit,
an acousto-optic deflector (AOD) is placed at the input plane of the entrance lens of a 4f imaging system
(see <ref>), with the torsion pendulum (and retroreflector) at the output plane of the second lens.
In this configuration, the tilt of the laser beam at the input plane can be modulated by frequency-modulating the AOD drive as
Δθ_cal = (λ/v_c) Δ f_cal, where Δ f_cal is the frequency-modulation
depth, and v_c ≈5700m/s is the acoustic velocity of the AOD quartz crystal.
This known tilt change manifests as a voltage change (Δ V_cal) in the SPD signal.
By estimating the calibration factor, defined as α_cal=Δθ_cal/2Δ V_cal, the angular displacement spectrum is computed from the SPD spectrum: S_θ[Ω]=α_cal^2 S_V[Ω], where S_V[Ω] is the measured SPD spectrum (see SI for details).
<ref>(a) shows the voltage amplitude detected by the SPD for different frequency-modulation depths of the AOD drive. The beam tilt modulation using the AOD confirms a linear voltage-to-angle calibration, with the linear fit yielding an R-squared value of 0.99996.
<ref>(b) demonstrates the calibrated performance of the mirrored optical lever.
The angle-referred imprecision noise decreases as the reflected optical power increases, and
reaches 2.6e-12rad/√(Hz) with 5 mW incident power, consistent with the quantum-noise scaling
in <ref> corresponding to a detection efficiency η≈ 0.75.
The excess noise peaks between 200 Hz and 1 kHz are attributable to seismic noise-induced
fluctuations (see SI).
Without the mirrored arm, we observed extraneous low frequency tilt noise above 1 mW
of optical power that compromised the linearity of the SPD; with the mirrored arm, we characterized input tilt
noise suppression by ∼ 60 dB over 1 Hz (see SI).
Given that our lab temperature is stabilized to a precision around 20 mK <cit.>, we conjecture
that the low frequency drifts in input laser tilt are due to refractive index fluctuations from air currents.
Imprecision below the standard quantum limit.
Next, we place the torsion pendulum in the signal arm of the mirrored optical lever.
<Ref>(a) shows the power spectrum of the measured angular fluctuations of the fundamental torsional
mode as the power in the measurement field increases.
The observed angle fluctuations
θ_obs = θ_phys + θ_imp
consists of the physical motion of the pendulum θ_phys and the imprecision noise
θ_imp of the optical lever.
The physical motion, shown as the black line in <ref>(a), is predominantly due to the
thermal Brownian motion of the mode due to its n_th = k_B T/(ħΩ_0) ≈ 2· 10^8
average phonons at room temperature, and is described by the fluctuation-dissipation theorem <cit.>,
S_θ^phys[Ω] = 4ħ(n_th + 1/2) Im χ_0[Ω]; here χ_0[Ω] =
[I(-Ω^2 + Ω_0^2 + i ΩΓ_0[Ω])]^-1 is the susceptibility of the torsional mode
with moment of inertia I and damping rate Γ_0[Ω] = (Ω_0/Q)(Ω_0/Ω).
This model, shown as the black line in <ref>(a), is based on independent measurements of the frequency, mechanical Q, and calibrated mode temperature. The latter is inferred by calibrating each spectrum using our frequency-modulation technique and assuming that the torsional mode is in thermal equilibrium.
We verified (see SI) that the mode temperature is constant across all powers except the highest, which shows a
13% increase, presumably due to optical absorption.
The observed imprecision noise (<ref>(a)) is inversely proportional to the optical power,
consistent with quantum-noise-limited measurement.
The inset shows the measurement imprecision around resonance in
equivalent phonon units, n_imp = S_θ^imp/(2S_θ^zp[Ω_0]), where
S_θ^zp[Ω_0] = 4θ_zp^2/Γ_0 is the peak spectral density of the zero-point motion
θ_zp = √(ħ/(2I Ω_0)) of the torsional mode.
The overall low imprecision of the measurement, n_imp≈ 5· 10^-2, can be best
contextualized by comparing it against the standard quantum limit (SQL).
The SQL is the minimum total noise
in the measurement allowed by quantum mechanics without the use of any quantum enhancement:
it is the sum of the zero-point motion of the torsional mode, and the minimum trade-off between the measurement imprecision and the concomitant quantum back-action of the measurement
due to quantum fluctuations in the radiation torque of the measurement beam.
It can be shown (see SI) that for the measurement scheme we use here, the SQL is given by
S_θ^SQL = 2S_θ^zp[Ω_0](1+√(π))/2 ≈ 1.38· 2S_θ^zp[Ω_0], which is
the blue line in <ref>(a).
The imprecision in our most precise measurement is ∼ 13 dB below the SQL.
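These numbers can be reproduced with a few lines of arithmetic. The sketch below is illustrative only; it uses the moment of inertia and imprecision level reported in the SI, so small differences from the quoted ∼13 dB simply reflect rounding of the inputs.

```python
# Sketch: zero-point spectral density, phonon-equivalent imprecision, and SQL.
import math

hbar = 1.0546e-34
Omega0 = 2 * math.pi * 35.95e3   # fundamental torsional resonance [rad/s]
Q = 1.4e7
I = 5.54e-17                     # moment of inertia [kg m^2] (from the SI)
Gamma0 = Omega0 / Q              # damping rate at resonance [rad/s]

theta_zp = math.sqrt(hbar / (2 * I * Omega0))   # zero-point angle [rad]
S_zp = 4 * theta_zp**2 / Gamma0                 # peak zero-point PSD [rad^2/Hz]
S_imp = 1.06e-22                                # measured imprecision [rad^2/Hz] (SI)
n_imp = S_imp / (2 * S_zp)
S_SQL = 2 * S_zp * (1 + math.sqrt(math.pi)) / 2

print(f"n_imp ~ {n_imp:.3f}")                                   # ~0.05 phonons
print(f"S_imp / S_SQL = {10*math.log10(S_imp/S_SQL):.1f} dB")   # ~ -14 dB with these rounded inputs
```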
Laser cooling by radiation torque feedback.
The ability to measure with an imprecision below the SQL opens the door for laser cooling of torsional motion,
eventually into the ground state, paralleling the development of feedback cooling of linear
motion <cit.>.
We apply radiation pressure torque (δτ_fb) from a second laser beam, conditioned on the observed motion
δθ_obs, to actuate on the torsional mode.
The observed motion is used to synthesize a signal that drives an amplitude modulator
in the path of an actuation laser. This beam is focused to a 50μ m spot on the edge of the torsion pendulum
(at an angle with respect to the measurement beam so as to not cause scatter into the measurement beam, see <ref>(a)).
Torque actuation is exclusively driven by radiation pressure and constrained by imprecision noise at the SPD, ensuring that feedback cooling is governed primarily by the observed motion (see SI).
The resulting optical feedback torque, δτ_fb = -χ_fb^-1δθ_obs, can be engineered to
effect damping by adjusting the phase of the feedback filter χ_fb^-1 to be π/2 around resonance.
The low imprecision of the measurement guarantees that the damped motion is in fact cold.
The torsional mode is cooled by increasing the gain in the feedback loop. <Ref>(b) shows the spectrum S_θ^obs of
the observed motion as the gain is increased.
The effective damping rates (Γ_eff) are estimated by fitting the curves to a model of the apparent motion (see SI for details
of the model); solid lines show these fits.
Assuming that the torsional mode satisfies the equipartition principle, we estimate the phonon occupation from the model of the
physical angular motion, using parameters inferred from fits to the observed motion:
n_eff ≈ ∫ (S_θ^phys[Ω]/(2θ_zp^2)) dΩ/2π ≈ n_th Γ_0/Γ_eff + n_imp Γ_eff/Γ_0.
The inset in <ref>(b) shows the inferred phonon occupation as the effective damping rate is increased by the feedback gain.
Feedback cooling cools the torsional mode from an average phonon occupation of ∼ 2· 10^8 (at room temperature) to about 5964±39,
ultimately limited by heating by feedback of imprecision noise, consistent with <ref>.
This sets the limit n_eff≳ 2√(n_thn_imp).
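A one-line estimate of this limit (ours, using the round numbers quoted above) lands close to the observed occupation.

```python
# Sketch: feedback-cooling limit n_min ~ 2*sqrt(n_th * n_imp),
# reached at the optimal gain Gamma_eff = Gamma_0 * sqrt(n_th / n_imp).
import math

n_th, n_imp = 2e8, 5e-2              # thermal occupation and imprecision (quoted values)
print(2 * math.sqrt(n_th * n_imp))   # ~6300, consistent with the ~6000 phonons observed
```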
Conclusion. We demonstrated laser cooling of a centimeter-scale torsion pendulum using a measurement
whose imprecision is 13 dB below the standard quantum limit.
The performance of cooling is limited by the achievable strength of the measurement given the size of the
zero-point motion. This limitation can be overcome using lower frequency torsional oscillators laser-cooled
in a strong optical trap <cit.>, for which the requirements are much reduced <cit.>
compared to the current protocol.
Thus, our work opens the door for eventual quantum state control of macroscopic torsional pendula.
Combined with the pedigree and advantage of torsional pendula in the measurement of weak gravitational
interactions, this work establishes the necessary step towards mechanical experiments that explore the
fundamental nature of gravity.
Supplementary Information
§ TORSION PENDULUM
§.§ Fabrication
The torsion pendulum used in this work is fabricated starting
with a double-sided, 400 nm thick Si_3 N_4-on-Si wafer (WaferPro);
<ref> shows the essential steps in the process.
We coated both sides with 3μ m thick S1813 positive photoresist. Using mask lithography,
we patterned the top-side resist with the pendulum shape, while the bottom-side resist protected the wafer from
scratches during handling.
We transferred the pattern to the SiN film using a CF_4-based reactive ion etch. After removing the
remaining photoresist, we applied a fresh coat to both sides. We then patterned the backside with a square window,
carefully aligning it with the front pattern. After dry-etching the square window into the wafer, we applied a thick layer of photoresist to both sides for protection. We diced the wafer into individual samples. We removed the photoresist using
an acetone, iso-propyl alcohol (IPA), and de-ionized (DI) water rinse, followed by an oxygen plasma clean to eliminate any
lingering residue. We vertically placed each sample in a 74^∘ C potassium hydroxide bath for 24 hours.
After etching, we rinsed the samples in DI water, followed by IPA, and let them air-dry.
Finally, we gave the device a final oxygen plasma clean to remove any remaining residue.
§.§ Ringdown measurements
The mechanical Q of the fundamental torsional mode was estimated from ringdown measurements. The motion is
excited by a radiation pressure torque, and measured using the SPD. The measured signal is demodulated at the
mechanical frequency using a lock-in amplifier (Moku:Pro, Liquid Instruments).
<Ref> shows such measurements at a few different powers of the measurement beam.
§ NOISE IN AN OPTICAL LEVER
To better understand the quantum noises in optical lever detection, we consider here the decomposition of a
quantized optical field on an orthonormal Hermite-Gaussian (HG) basis.
The HG mode U_mn of m-th order in x-axis and n-th order in y-axis is defined as <cit.>
U_mn(x,y,z,t) = u_mn(x,y,z)e^iϕ_mn(x,y,z,t),
u_mn(x,y,z) = √(2/π) (1/w(z)) (1/√(2^m+n m! n!)) e^-(x^2+y^2)/w^2(z)
H_m(√(2)x/w(z)) H_n(√(2)y/w(z)),
ϕ_mn(x,y,z,t) = ω_ℓ t-kz-k(x^2+y^2)/2R(z)+(m+n+1)ζ(z);
here, ω_ℓ and k are the angular frequency and wavenumber of the laser beam;
w(z)=w_0√(1+(z/z_R)^2), R(z)=z(1+(z_R/z)^2), and ζ(z)=arctan(z/z_R) are the beam radius, radius of curvature, and Gouy phase, respectively;
w_0 is the beam width at the waist; z_R=kw_0^2/2 is the Rayleigh length; and
H_n is the n^th Hermite polynomial.
These bases satisfy the orthonormality relation
∫ dx dy U_mn^* U_m'n' = δ_mm'δ_nn'.
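A numerical sanity check of this normalization at the waist (illustrative only, with an arbitrary w_0 and physicists' Hermite polynomials) is sketched below.

```python
# Sketch: verify orthonormality of the HG basis at the waist (z = 0).
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def u(m, n, X, Y, w0):
    cm = np.zeros(m + 1); cm[m] = 1.0       # coefficient vector selecting H_m
    cn = np.zeros(n + 1); cn[n] = 1.0       # coefficient vector selecting H_n
    norm = sqrt(2 / pi) / (w0 * sqrt(2.0 ** (m + n) * factorial(m) * factorial(n)))
    return (norm * hermval(sqrt(2) * X / w0, cm) * hermval(sqrt(2) * Y / w0, cn)
            * np.exp(-(X**2 + Y**2) / w0**2))

w0 = 1.0
x = np.linspace(-8, 8, 1601)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2
ip = lambda a, b: np.sum(u(*a, X, Y, w0) * u(*b, X, Y, w0)) * dA
print(ip((0, 0), (0, 0)), ip((1, 0), (1, 0)), ip((1, 0), (0, 0)))  # ~1, ~1, ~0
```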
The quantized electric field (in an arbitrary, but static polarization direction) of a laser beam with
power P in the (fundamental) HG_00 mode — such as the emission of an ideal laser — can be modeled by
Ê(x,y,z,t) = (a̅+ a_00(t)) U_00+∑_(m,n)≠(0,0)δâ_mn(t)U_mn.
Here a̅=√(P/ħω_ℓ) is the mean photon flux amplitude and the operators
δâ_mn represent fluctuations in U_mn mode, that satisfy the
canonical commutation relations <cit.>:
[δâ_mn[Ω], δâ_m'n'[Ω']^†]
= 2πδ_mm'δ_nn'δ[Ω+Ω'].
In the ideal case, the fluctuations are given by vacuum fluctuations of the HG modes, a_mn^0, derived through the projection of the fluctuations of a plane wave into the HG modes.
To account for extraneous transverse displacement and tilt noises, we set
a_mn = a_mn^0 + δ a_mn^ext,
where δ a_mn^ext represent the extraneous fluctuations in each mode.
Let us consider a case where the laser beam at arbitrary point z=Z is transversely displaced by δ x and tilted by δθ in the xz plane, viz.,
Ê(x,y,Z,t) →Ê(x-δ x,y,Z,t)e^ikδθ x≈Ê(x,y,Z,t)+a̅(1/w(Z)δx̂+ikw(Z)/2δθ̂)
e^-iζ(Z)U_10,
where we assume δ x, δθ≪ 1, and have then represented these fluctuations by associating
them with operators.
<ref> shows that the laser beam's transverse displacement and tilt can be interpreted as the scattering of the fundamental mode into the first-order mode. For instance, at the beam waist (z=0), the displacement (tilt) fluctuations are encoded into the amplitude (phase) quadrature of the first-order mode, while in the far field (z ≫ z_R or z ≪ -z_R) the fluctuations are transposed to the opposite quadratures. Meanwhile, <ref> also explicitly shows how the first-order amplitude and phase quadratures contribute to the displacement and tilt fluctuations along the beam propagation:
x̂(Z) = (w(Z)/(2a̅))(δâ_10 e^iζ(Z)+δâ_10^† e^-iζ(Z))
θ̂(Z) = (1/(ia̅kw(Z)))(δâ_10 e^iζ(Z)-δâ_10^† e^-iζ(Z))
This represents the essence of the optical lever technique; the laser tilt produced by a torsion pendulum at the beam waist is converted into the transverse displacement at the far field.
§.§ Measurement imprecision in split photo-detected optical lever
Thus, a split photodetector (SPD) can be used in the far field to detect the transverse displacement of the laser beam,
and thereby infer the tilt that produced it <cit.>.
Specifically, the SPD takes the difference of the photocurrents from a pair of photodetectors placed next to each
other, which we take to be horizontal in the plane perpendicular to the propagation direction z.
The photocurrent operator Î(t;z) for the SPD located at a distance z can be computed from the photon number operator, n̂=â^†â, as
Î(t;z) = q_e [n̂_R(t)-n̂_L(t)]
=q_e∫_-∞^∞dx∫_-∞^∞dy sgn(x)Ê^†(x,y,z,t)Ê(x,y, z,t)
=2q_ea̅∑_(m,n)≠(0,0) 1/√(2^m+n m! n!) Re[δâ_mn(t)e^i(m+n)ζ(z)]∫_-∞^∞dx sgn(x) ∫_-∞^∞dy (2/(π w(z)^2)) H_m(√(2)x/w(z)) H_n(√(2)y/w(z)) e^-2(x^2+y^2)/w(z)^2
=(2/π)q_ea̅∑_(m,n)≠(0,0) 1/√(2^m+n m! n!) Re[δâ_mn(t)e^i(m+n)ζ(z)]∫_-∞^∞dx sgn(x) H_m(x)e^-x^2∫_-∞^∞dy H_n(y)e^-y^2
=2q_ea̅∑_m≠0 1/√(π 2^m m!) Re[δâ_m,0(t)e^imζ(z)]∫_0^∞dx (1-(-1)^m) H_m(x)e^-x^2
=4q_ea̅∑_k=0^∞ 1/√(π 2^2k+1(2k+1)!) Re[δâ_2k+1,0(t)e^i(2k+1)ζ(z)]∫_0^∞dx H_2k+1(x)e^-x^2
=4q_ea̅∑_k=0^∞ Re[δâ_2k+1,0(t)e^i(2k+1)ζ(z)] ((-1)^k/((2k+1)k!))√((2k+1)!/(π 2^2k+1))
=(2/√(π))q_ea̅∑_k=0^∞[δq̂_2k+1,0(t)cos((2k+1)ζ(z))-δp̂_2k+1,0(t)sin((2k+1)ζ(z))]D_k,
where q_e is the electron charge, δq̂_mn(t)=(δâ_mn+δâ_mn^†)/√(2) and δp̂_mn(t)=(δâ_mn-δâ_mn^†)/√(2)i are amplitude and phase quadrature operators, and the coefficient D_k is defined as D_k = ((-1)^k/((2k+1)k!))√((2k+1)!/2^2k). <ref> shows that split photodetection detects the quadratures of the odd-order HG modes δâ_2k+1,0, amplified by the amplitude a̅ of the fundamental HG mode and rotated by the Gouy phase acquired during propagation.
That is, in analogy with a balanced homodyne detection employed for measurement of longitudinal phase
fluctuations, the amplitude of fundamental mode a̅ and the Gouy phase shift act as the local
oscillator beam and the homodyne angle, respectively.
Suppose now that the angular fluctuations are produced by the motion of a torsion pendulum positioned at z=z_0.
The laser beam reflected off from the torsion pendulum, whose angular motion is denoted as θ_sig, attains a tilt of 2θ_sig. Given that the laser beam is subject to extraneous classical noises in the first-order HG mode, i.e. classical transverse displacement (x_ext) and tilt noise (θ_ext) referred to the position of the torsion pendulum, the optical field can be expressed as
Ê(r,t) ≈a̅U_00(r,t)+[ a̅/w(z_0)x_ext+ikw(z_0)/2(2θ_sig+θ_ext)]U_10(r,t)+ ∑_n,ma_nm(t) U_nm(r,t).
Then, the symmetrized single-sided spectral density of the photocurrent can be computed from <ref> as
S_I[Ω] = 2/π(2η R P k w(z_0) sinζ)^2[S_θ^sig[Ω] + 1/4S_θ^ext[Ω] + (cotζ/(k w(z_0)^2))^2 S_x^ext[Ω] + (π/2)/(2η (a̅k w(z_0))^2) csc^2ζ],
where R = q_e/(ħω_ℓ) and η are the responsivity and the quantum efficiency of the SPD, and ζ=ζ(z)-ζ(z_0) is the Gouy phase shift from the torsion pendulum to the SPD. In this calculation, the
single-sided spectral densities of the vacuum fluctuations are S_q^mn,0=1 and S_p^mn,0=1
(<ref>), and ∑_k=0^∞D_k^2=π/2.
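The identity ∑_k D_k^2 = π/2 can be checked numerically; the sketch below is illustrative only (the series converges slowly, like k^(-3/2)).

```python
# Sketch: partial sums of D_k^2 = C(2k,k) / ((2k+1) 4^k) approach pi/2.
import math

total, t = 0.0, 1.0             # t tracks C(2k,k)/4^k, starting at k = 0
for k in range(1_000_000):
    total += t / (2 * k + 1)    # this term is D_k^2
    t *= (2 * k + 1) / (2 * k + 2)

print(total, math.pi / 2)       # ~1.570 vs 1.5708
```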
Note that transduction of angular motion to SPD signal reaches its maximum value when the Gouy phase shift is set to
π/2. This condition can be achieved either by preparing a sufficient lever length, i.e., z-z_0 ≫ z_R, or by controlling the accumulated Gouy phase shift with a lens.
The last term in <ref> gives the imprecision noise due to quantum fluctuations in the higher-order modes,
S_θ^imp[Ω]
= (π/2)/(2η (a̅k w(z_0))^2) csc^2ζ,
whereas the second and third terms represent the imprecision noise from extraneous fluctuations in the laser.
§.§ Measurement back action in optical lever
We consider the quantum back-action torque induced by the laser beam in the optical lever. In the main manuscript, it is assumed that the laser beam is incident at the center of the torsion pendulum. Here, we address the scenario where the laser beam is offset by a distance x_0 from the center of rotation of the torsion pendulum. By linearizing the torque fluctuations, the back-action torque can be expressed as
δτ̂_ba[Ω] = Fδx̂[Ω] + δF̂[Ω] x_0,
where the first term represents the back-action torque due to the transverse displacement fluctuation, and the second term accounts for the force fluctuation. The spectral density of the back-action torque is given by
S_τ^ba[Ω] = F^2S_x[Ω] + x_0^2 S_F[Ω].
In general, these fluctuations can be decomposed into two contributions: extraneous classical noise and quantum noise. Therefore, we incorporate the classical components of the fluctuations, denoted as S_x^ext[Ω] and S_F^ext[Ω]. The quantum fluctuation of the transverse displacement can be derived from Eq.(<ref>) as S_x[Ω] = w^2(z_0)/(2a̅^2), while the quantum radiation pressure force fluctuation is given by S_F^ba[Ω] = 8a̅^2ħ^2k^2, with the mean radiation pressure force F = 2ħ k a̅^2. Consequently, the total back-action torque is expressed as
S_τ^ba[Ω] = (√(2)ħa̅kw(z_0))^2[1 + (2x_0/w(z_0))^2] + (2ħa̅^2k)^2S_x^ext + x_0^2 S_F^ext.
The first term arises from quantum fluctuations in the optical beam used for the measurement (with the term ∝ x_0 coming from any
potential mis-centering of the beam on the torsion pendulum), while the second and third terms arise from extraneous fluctuations in it.
§.§ Total measurement noise and the Standard Quantum Limit
The observed SPD signal, referred to angle, can be understood as the sum of the intrinsic
(i.e. thermal and zero-point) motion, back-action-driven motion, and measurement imprecision, i.e.
θ_obs = θ_int + θ_ba + θ_imp.
The intrinsic motion is given by the
fluctuation-dissipation theorem <cit.>
S_θ^int[Ω] = 4ħ(n_th[Ω]+1/2) Im χ_0 [Ω],
where n_th[Ω] = (e^ħΩ/k_B T-1)^-1 is the average thermal phonon occupation at temperature
T, and χ_0[Ω] = [I (-Ω^2 + Ω_0^2 - i ΩΓ_0[Ω])]^-1 is the mechanical
susceptibility for the torsion pendulum with moment of inertia I and
frequency-dependent structural damping rate Γ_0[Ω] = (Ω_0/Q)(Ω_0/Ω).
The back-action-driven motion is θ_ba = χ_0 δτ_ba, whose spectrum is
S_θ^ba = χ_0^2 S_τ^ba,
with S_τ^ba given by <ref>.
Note from <ref> that as the measurement imprecision due to quantum noise decreases (∝ 1/P),
quantum back-action increases (∝ P).
Thus the measured angular noise has a minimum
S_θ^obs
= S_θ^int + S_θ^ba + S_θ^imp≥ S_θ^int + 2√(S_θ^ba S_θ^imp),
whose ideal limit (zero temperature, perfect detection efficiency, beam perfectly centered, no extraneous
imprecision or back-action) on resonance defines the (resonant) standard quantum limit (SQL) of the measurement:
S_θ^SQL[Ω_0] = 2 ħ Im χ_0[Ω_0] (1+√(π))
= 2 S_θ^zp[Ω_0] (1+√(π))/2.
Here the spectral density of the zero-point motion of the torsional mode is
S_θ^zp[Ω_0] = S_θ^int[Ω_0]|_T=0 = 4θ_zp^2/Γ_0,
where θ_zp = √(ħ/(2I Ω_0)) is the zero-point motion of the mode.
In contrast to the SQL for the measurement of linear motion, this is larger than twice the resonant
zero-point motion of the mechanical oscillator by a factor of (1+√(π))/2 ≈ 1.386.
§ EXTRANEOUS NOISE CANCELLATION IN MIRRORED OPTICAL LEVER
Here we describe the noise spectral density in the mirrored optical lever. In this configuration (as shown in Fig. 1 in the main text), the input laser beam is prepared to have equal components for polarizations in S and P, so that the P-polarized component passes on to the torsion pendulum to sense the angular motion (signal arm), while the S-polarized component is diverted to a retroreflector (reference arm). A retroreflector is a 90^∘ corner cube that inverts the tilt and transverse displacement of the beam with respect to the normal reflection from a flat mirror, i.e., it alters the sign of the amplitude flux of the first-order HG mode as δâ_10(t) → -δâ_10(t). In each arm, a quarter-wave plate is positioned so that both beams are collected from the output port of the PBS.
The output optical field is written as Ê_out(r,t) = Ê_sig(r,t) e_S + Ê_ref(r,t) e_P, where e_S and e_P are the unit vectors for the S and P polarizations, respectively. The signal and reference fields are obtained from Eq.(<ref>) as
Ê_sig(r,t) = a̅U_00^sig(r,t)+[ a̅/w(z_0)x_ext+ikw(z_0)/2(2θ_sig+θ_ext)]U_10^sig(r,t)+ ∑_n,ma_nm(t) U_nm^sig(r,t),
Ê_ref(r,t) = a̅U_00^ref(r,t)+[ -a̅/w(z_0)x_ext-ikw(z_0)/2θ_ext]U_10^ref(r,t)+ ∑_n,ma_nm(t) U_nm^ref(r,t).
Here, U^sig(ref)_mn represents the notation for the HG modes coming from the signal (reference) arm.
We assume that the optical power and arm length for each arm are identical.
The combined field is then detected by an SPD, yielding the photocurrent density as
S_I[Ω] = 2/π(2η R P k w(z_0) sinζ)^2 [S_θ^sig[Ω]
+ (π/2)/(η (a̅k w(z_0))^2) csc^2ζ].
This indicates that the reference beam cancels out the classical displacement and tilt noises with the trade-off of the doubled shot noise. The orthogonal polarizations for the optical fields allow us to obviate the requirement for stabilizing laser frequency noise. The sensitivity for the angular motion detection is maximized when the Gouy phase shift ζ̃ is precisely set to π/2 as follows:
S_I[Ω] = 2/π(2η R P k w(z_0))^2 [S_θ^sig[Ω]
+ (π/2)/(η (a̅k w(z_0))^2)].
This result shows that, in principle, the mirrored optical lever with the use of a retroreflector is immune to the classical noises that the laser beam experiences before the PBS.
In our experiment, the output laser beam is split by a knife-edge right-angle mirror and then collected by a low noise balanced photodetector (HBPR-100M-60K-IN-FS, Femto). The difference in photocurrents between the two photodiodes of the BPD corresponds to the transverse displacement of the laser beam at the knife-edge mirror. The Gouy phase shift in Eq. (<ref>) is defined by the locations of the torsion pendulum and knife-edge mirror. We set the Gouy phase to be π/2 by manipulating a lens between the torsion pendulum and the knife-edge mirror.
To characterize the classical noise suppression of the mirrored optical lever, we measure the angular fluctuations over time with and without the mirrored arm. In this proof-of-concept experiment, the torsion pendulum is replaced with a flat mirror so that the performance is only limited by the measurement system. <ref>(a) shows the angle fluctuations of the laser beam referred to the location of the flat mirror over 300 s with a sampling rate of 100 Hz. We observe that the tilt fluctuation (blue curve) is suppressed when the mirrored image is simultaneously incident on the SPD, reducing the standard deviation from 620 nrad to 73 nrad.
Furthermore, we characterize the mirrored optical lever system in terms of its frequency response. To this end, we generate a periodic virtual tilt signal swept over 1 Hz – 100 kHz using the acousto-optic deflector. This virtual angular signal is detected by the SPD and subsequently lock-in amplified. As shown in <ref>(b), we confirm that the mirrored optical lever is capable of reducing the angle fluctuations by 50–60 dB over a broad frequency range up to 100 kHz. This residual noise may be due to imperfect polarization division, the finite extinction ratio of the PBS (∼3000), and other systematic errors.
In Fig. 2(a) of the main text, excess noise is observed in the low-frequency regime below 1 kHz Fourier frequency. Potential sources of this extraneous noise include seismic noise and residual spatial noise caused by air current-induced refractive index fluctuations. (Laser intensity noise is significantly suppressed by balanced photodetection at the SPD, and the optical lever is insensitive to the phase quadrature of the laser beam.) To elucidate the origins of this excess noise, we examine the coherence spectrum between the angular displacement noise S_θ[Ω] and seismic displacement noise S_x^sei[Ω]. For this purpose, seismic acceleration S_a^sei is measured and low-pass filtered to obtain S_x^sei[Ω]=S_a^sei[Ω]/Ω^4.
<Ref> shows the coherence spectrum, |S_θ x^sei[Ω]|^2/S_θ[Ω]S_x^sei[Ω], between angular displacement and seismic noise. A relatively high coherence is observed in the range of 300 Hz – 1 kHz, resembling the excess noise seen in Fig. 2(a) of the main text. Therefore, the noise peaks in this frequency range are likely attributable to seismic noise, while the long-term noise below 300 Hz may be caused by air current-induced refractive index fluctuations.
§ ANGLE CALIBRATION
§.§ Calibration of mirrored optical lever
In conventional optical lever techniques, the angular displacement of the torsion pendulum, δθ, is inferred from the geometry of the lever: a laser beam deflected with a tilt of 2δθ propagates over a distance z, thereby amplifying the transverse displacement by the lever arm, i.e., δ x = 2zδθ.
This suggests that extending the lever arm could infinitely amplify the SPD signal gain.
However, this naive calibration is only valid within the framework of ray optics; beyond the Rayleigh length of the laser beam, the gain of the SPD signal eventually saturates.
For accurate measurement of the intrinsic thermal motion of a high-Q torsion pendulum, calibration is often performed by fitting the theoretical model of thermal motion to the SPD voltage spectrum.
However, this method can lead to inaccuracies when extraneous noise, such as photothermal effects or external disturbances, contaminates the angular displacement spectrum.
To overcome this limitation, we employ a direct calibration method that relates the SPD signal to angular displacement using an acousto-optic deflector (AOD).
As described in the main text, an AOD (M1377-aQ80L-1, ISOMET) is positioned at the input plane of the entrance lens of a 4f imaging system, with the torsion pendulum (and retroreflector) located at the output plane of the second lens.
The 4f system, composed of two lenses with identical focal lengths, reproduces the angular tilt of the laser beam at the input plane at the output plane.
We use the first-order diffraction beam from the AOD, with its tilt angle precisely controlled by the relation Δθ_cal = (λ/v_c) Δ f_cal, where Δ f_cal is the drive frequency change and v_c ≈5700m/s is the acoustic velocity of the AOD quartz crystal.
Using this relation, we determine the calibration factor for converting the SPD voltage spectrum into an angular displacement spectrum.
Since the laser beam acquires an angular displacement that amounts to twice the torsional motion of the pendulum upon reflection, the calibration factor is defined as α_cal=Δθ_cal/2Δ V_cal, where Δ V_cal represents the voltage change in the SPD signal.
To precisely attain the calibration factor, we frequency modulate the AOD drive with a known depth (Δ f_cal), and observe the corresponding SPD voltage modulation amplitude (Δ V_cal) in the time domain.
In this process, we trigger and average the calibration tone 100 times to mitigate extraneous noises in the SPD signal.
Then, the calibration factor is used to convert the SPD spectrum to the angular displacement spectrum: S_θ[Ω]=α_cal^2 S_V[Ω], where S_V[Ω] is the measured SPD spectrum.
Our typical choice for the AOD calibration is a 10 kHz frequency-modulation tone with a 100 kHz frequency depth (corresponding to a tilt amplitude of 18.7 μrad), centered around 80 MHz (the AOD operation frequency).
Note that, although this calibration tone can be detected in the power spectral density (PSD), the measured voltage amplitude may be inaccurate or underestimated due to the broadening of the linewidth in the PSD analysis.
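For illustration, a short sketch of this calibration arithmetic is given below; the voltage amplitude Δ V_cal is a placeholder for the value read off the averaged SPD trace.

```python
# Sketch: AOD tilt calibration and conversion of the SPD spectrum to angle.
lam = 1064e-9        # laser wavelength [m]
v_c = 5700.0         # acoustic velocity in the AOD crystal [m/s]
df_cal = 100e3       # frequency-modulation depth [Hz]

dtheta_cal = (lam / v_c) * df_cal          # known beam tilt [rad]
print(f"{dtheta_cal * 1e6:.1f} urad")      # ~18.7 urad

dV_cal = 1.0                               # measured SPD voltage amplitude [V] (placeholder)
alpha_cal = dtheta_cal / (2 * dV_cal)      # factor 2: reflection doubles the pendulum angle
# S_theta[Omega] = alpha_cal**2 * S_V[Omega]
```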
<ref> illustrates the application of AOD-based calibration in our experiments.
We vary the frequency modulation depth of the AOD drive and monitor the SPD signal in the time domain. The triggering and averaging of the modulation signal is performed by an oscilloscope (Moku:Pro, Liquid Instruments).
The voltage amplitude is then estimated from these averaged data.
<ref>(b) displays the voltage amplitudes as a function of modulation depths, corresponding to beam tilt amplitudes, which provides the calibration factor for converting the SPD spectrum into the angular displacement spectrum.
The linear relationship is confirmed with a coefficient of determination R^2 = 0.99996.
Additionally, <ref>(c) demonstrates the linear dependence of voltage amplitude on incident optical power.
Figure 2(b) in the main text is calibrated from the SPD voltage spectrum to the angular displacement spectrum using the calibration method described above.
To verify that the imprecision noise in the spectrum is quantum-noise-limited, we measure the noise floor of the photocurrent density, referred back to the optical power, at a frequency of Ω = 2π·80kHz under varying optical power levels.
As shown in <ref>(a), at power levels below 20 µW, the dark noise is dominant, while shot noise starts to prevail beyond this threshold.
The curve fitting reveals that the quantum efficiency of the balanced photodetector used in this study is 0.81.
<ref>(b) shows the imprecision noise in the angular displacement obtained by implementing the calibration method.
The beam width at the position of the flat mirror is 587 μm, measured by a beam profiler (BC207VIS, Thorlabs).
The angular sensitivity improves as the optical power increases, and, at the maximum power of 5 mW, the imprecision noise reaches 2.56 prad/√(Hz).
This value is 16% higher than the shot noise level (2.21 prad/√(Hz)) calculated under the assumption of ideal detection (<ref>). Therefore, the detection efficiency is 0.75.
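These numbers follow directly from the mirrored-lever imprecision formula at ζ = π/2; the sketch below is illustrative only and reproduces both the ideal and the measured values from the quoted power, beam size, and efficiency.

```python
# Sketch: quantum-limited angular imprecision of the mirrored lever at 5 mW.
import math

h, c = 6.626e-34, 3.0e8
lam = 1064e-9
P = 5e-3             # optical power [W]
w = 587e-6           # beam radius at the flat mirror [m]
k = 2 * math.pi / lam
abar2 = P * lam / (h * c)          # photon flux |abar|^2 [1/s]

for eta in (1.0, 0.75):
    S_imp = (math.pi / 2) / (eta * abar2 * (k * w) ** 2)     # [rad^2/Hz]
    print(f"eta={eta}: {math.sqrt(S_imp)*1e12:.2f} prad/sqrt(Hz)")
# eta=1.0 -> ~2.21, eta=0.75 -> ~2.55, matching the quoted values
```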
§.§ Calibration of thermal noise spectra
When measuring the torsion pendulum, we reduced the beam size from 587 µm to 180 µm to minimize clipping loss of the measurement beam at the finite surface width of the pendulum (500 µm).
This reduction was achieved by focusing the beam onto the AOD at the input plane of the 4f system.
However, it was observed that the AOD calibration method became inaccurate as the beam was focused onto the AOD crystal: despite applying a pure tone frequency modulation, the SPD signal was significantly distorted with harmonics.
This appears to be due to the incoherent addition of diffraction angles as the beam is focused on the AOD crystal.
To address this, we adopted a two-step calibration process: (1) calibrating the thermal motion of the torsion pendulum using a reliable, larger beam size (w_0 ≈ 587 µm), and (2) using this thermal motion as a reference to calibrate the SPD spectrum obtained with the smaller beam size (w_0 ≈ 180 µm).
The second step was performed with 1 mW optical power, which kept the pendulum at room
temperature (290 K) (see <ref>).
The calibration factor was determined by fitting the spectra to the thermal noise at room temperature, resulting in a calibration factor of α_cal = 1.65×10^-9 rad/V with a 2% error.
Through the two-step calibration, the imprecision noise at the SPD was found to be
S_θ^imp=1.06×10^-22 rad^2/Hz at 10 mW optical power, corresponding to a detection efficiency of 0.244.
The lower sensitivity is largely due to the reduced beam size (<ref>).
The ideal imprecision derived in (<ref>), which assumes an intact spatial mode profile upon reflection, does not hold for the torsion pendulum measurement: that derivation assumes that tilt is induced by an infinite
plane, whereas the finite extent of the torsion pendulum clips the wings of the Gaussian beam, leading to
distortion of the reflected beam beyond the simple model.
Nonetheless, the imprecision noise shown in Fig. 3(a) of the main text is quantum-noise-limited, as confirmed by its good agreement with the expected shot noise scaling (<ref>).
§ MEASUREMENT-BASED FEEDBACK COOLING
§.§ Theory
The physical angular displacement of the torsion pendulum reacts to external torques, consisting of thermal, back-action, and feedback torques:
χ_0^-1[Ω]θ_phys[Ω]=τ_th[Ω]+τ_ba[Ω]
+τ̂_fb[Ω],
where χ_0[Ω] is the susceptibility of the torsion pendulum.
In measurement-based feedback control <cit.>, the observed motion [<ref>]
θ_obs[Ω]=θ_phys[Ω] + θ_imp[Ω]
is used to synthesize a feedback force, in this case a torque,
τ_fb[Ω]=-χ_fb^-1[Ω]θ_obs[Ω]
so that the physical motion
θ_phys[Ω] = χ_eff[Ω](τ_tot[Ω]
-χ_fb^-1[Ω]θ_imp[Ω])
is modified via the effective susceptibility
χ_eff=(χ_0^-1+χ_fb^-1)^-1;
here τ_tot=τ_th+τ_ba+τ_fb.
In the presence of feedback, the observed motion is also modified:
θ_obs[Ω] =
χ_eff[Ω](τ_tot[Ω]+χ_0^-1[Ω]
θ_imp[Ω])
Cooling by this kind of feedback can be effected by the choice χ_fb^-1 = i I ΩΓ_fb around the
mechanical resonance Ω_0.
In this case, the spectrum of the observed motion assumes the form
S_θ^obs[Ω] =
S_τ^tot[Ω]/(I^2[(Ω_0^2-Ω^2)^2+(ΩΓ_eff)^2]) + [(Ω_0^2-Ω^2)^2+(ΩΓ_0[Ω])^2]/[(Ω_0^2-Ω^2)^2+(ΩΓ_eff)^2] S_θ^imp[Ω];
this model is used to fit the data in Fig. 3 of the main text.
The spectrum of the physical motion
S_θ^phys[Ω] = S_τ^tot[Ω]/(I^2[(Ω_0^2-Ω^2)^2+(ΩΓ_eff)^2]) + (ΩΓ_fb)^2/[(Ω_0^2-Ω^2)^2+(ΩΓ_eff)^2] S_θ^imp[Ω],
allows inference of physical properties such as the average phonon occupation achieved by feedback cooling.
If we take the torsional motion to be a harmonic oscillator with creation/annihilation operators
b̂,b̂^†, then the angular motion is θ̂ = θ_zp(b̂+b̂^†),
where θ_zp = √(ħ/(2I Ω_0)). Then the variance of the motion is
⟨θ̂^2⟩ = 2θ_zp^2 (⟨b̂^†b̂⟩+1/2); thus the phonon occupation
n_eff = ⟨b̂^†b̂⟩
can be inferred from the variance of the (physical) angular motion via
n_eff + 1/2 = ⟨θ̂^2⟩/(2θ_zp^2)
= ∫ S_θ^phys[Ω]/(2θ_zp^2) dΩ/2π.
Using the model in <ref> gives
n_eff + 1/2 = ( n_th+n_ba + 1/2)Γ_0/Γ_eff
+ n_impΓ_eff/Γ_0,
where n_ba = S_τ^ba[Ω_0]/(4ħχ_0^-1[Ω_0]) is the effective back-action phonon occupation,
and n_imp = S_θ^imp/(2S_θ^zp[Ω_0]) is the phonon-equivalent measurement imprecision.
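For concreteness, the following sketch evaluates these expressions with representative numbers from the text; it is illustrative only, neglects back-action and extraneous torques, and the 10 Hz of feedback damping is an arbitrary example gain, not a fitted value.

```python
# Sketch: observed vs physical angle spectra under derivative feedback.
import numpy as np

hbar, kB = 1.0546e-34, 1.381e-23
Omega0, Q, I, T = 2*np.pi*35.95e3, 1.4e7, 5.54e-17, 290.0
Gamma0 = Omega0 / Q
S_imp = 1.06e-22                                 # imprecision [rad^2/Hz] (from the SI)
S_tau = 4 * kB * T * I * Gamma0                  # thermal torque PSD near resonance

Omega = 2*np.pi*np.linspace(35.90e3, 36.00e3, 4001)
Gamma_fb = 2*np.pi*10.0                          # example: 10 Hz of feedback damping
Gamma_eff = Gamma0 + Gamma_fb
den = (Omega0**2 - Omega**2)**2 + (Omega*Gamma_eff)**2

S_phys = S_tau/(I**2*den) + (Omega*Gamma_fb)**2/den * S_imp
S_obs  = S_tau/(I**2*den) + ((Omega0**2 - Omega**2)**2 + (Omega*Gamma0)**2)/den * S_imp

theta_zp2 = hbar / (2*I*Omega0)
df = (Omega[1] - Omega[0]) / (2*np.pi)
n_eff = np.sum(S_phys / (2*theta_zp2)) * df      # finite window slightly truncates the tails
print(f"n_eff ~ {n_eff:.2e}")                    # ~4e4 at this modest gain
```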
§.§ Implementation of feedback
The motion of the pendulum is actuated by radiation torque from a secondary laser beam (“actuation beam”)
focused on the edge of the pendulum’s backside.
A fiber-coupled electro-optic Mach-Zehnder amplitude modulator (LNX1020A, Thorlabs) is employed to
modulate the radiation torque around the mechanical frequency.
The modulator’s response to the external voltage is sinusoidal due to the nature of the Mach-Zehnder
interferometer in it; we bias the modulator around its linear operating point, which corresponds to the
bias voltage at which half the maximum optical power is transmitted.
At this operating point, the actuation beam exhibits a sensitivity of 15.7 mW/V in response to the external voltage
applied on the modulator.
<Ref>(a) is the measured frequency response of the feedback filter χ_fb^-1, optimized for feedback gain to achieve a phonon number of approximately 6000. This configuration is implemented using an FPGA-based controller (Moku:Pro, Liquid Instruments). The feedback filter comprises a bandpass filter (34–40 kHz) centered around the resonance frequency of the fundamental mode, and a phase delay that aligns the total loop delay to be π/2. As depicted in <ref>(a), the delay at the resonance frequency Ω_0/2π=35.95kHz is precisely set to π/2.
§.§ Extraneous noise in feedback
Extraneous noise in the feedback loop can limit the performance of feedback cooling if they are larger than the
imprecision noise in the measurement that drives the actuator.
We investigate this possibility by measuring and budgeting the noise in the actuator beam.
<Ref>(b) shows this budget referred to angular displacement at the SPD.
Two contributions can be distinguished: (a) voltage noise from the feedback controller,
referred to angle (blue); and, (b) intensity noise in the actuation beam referred to angle (red).
Grey shows the motional signal at the SPD, with black dashed showing the quantum-noise-limited imprecision.
Clearly, the extraneous noise in the feedback loop lies more than 30 dB below the imprecision noise.
Thus, feedback cooling is governed primarily by the observed motion.
Next, we investigate whether the actuation beam induces mechanical torques
in excess of that due to radiation pressure, for example thermoelastic torques <cit.>
arising from concentrated photothermal heating at the edge of the torsion pendulum.
The nature of the thermoelastic effect can be understood through the absorption and diffusion of heat within
the pendulum.
This suggests that the thermoelastic effect can be distinguished from pure radiation pressure
torque <cit.>: the angular displacement response to thermoelastic torque follows a
single-pole low-pass filter characteristic at a specific cutoff frequency, while the radiation pressure
torque remains frequency-independent.
<Ref> is the frequency-response of the torsion pendulum as the intensity in the actuator beam
is modulated.
The response remains frequency-independent from 1 kHz up to the resonance for various beam positions,
both at the edge and near the center of the pendulum.
This suggests that no thermoelastic effect is observed during feedback actuation, and that the actuation is
dominated by radiation torque.
§.§ Estimation of mode temperature
To investigate the role of photothermal heating, we measured the mode temperature as a function of optical
power (from now on, quoted optical powers refer to the power reflected from the torsion pendulum). For this, we fit the torsion pendulum's intrinsic motion to a Lorentzian model:
S_θ^obs[Ω]=S_θ^imp+S_θ^th[Ω_0]/(1+4Q_0^2(Ω-Ω_0)^2/Ω_0^2).
Since the resolution bandwidth of the spectrum (0.25 Hz) is larger than the linewidth of the torsional
mode (Γ_0/2π = 2.6mHz), we excluded the vicinity of the peak from the fitting range.
Using these fittings, we estimated the mode temperatures shown in <ref>.
For optical powers below 5 mW, no signs of photothermal heating were observed.
The fits allow us to infer the effective moment of inertia I = 5.54e-17kg· m^2 with the independently
measured values of Ω_0=2π· 35.95 kHz, Q=1.365·10^7, and T = 290K.
This value is 30% smaller than that obtained by finite-element simulation
(COMSOL Multiphysics; 7.87e-17kg· m^2), and 12% larger than the analytical model
<cit.> I=ρ L h w^3/24 ≈4.91e-17kg· m^2 of an ideal rectangular beam (which
our device is not). This provides an independent consistency check of our angle calibration procedure and
mode temperature.
|
http://arxiv.org/abs/2409.03621v1 | 20240905153324 | Attend First, Consolidate Later: On the Importance of Attention in Different LLM Layers | [
"Amit Ben Artzy",
"Roy Schwartz"
] | cs.CL | [
"cs.CL"
] |
Photoelectron – residual-ion entanglement in streaked shake-up ionization of helium
Hongyu Shi and Uwe Thumm
September 9, 2024
===================================================================================
§ ABSTRACT
In decoder-based LLMs, the representation of a given layer serves two purposes: as input to the next layer during the computation of the current token; and as input to the attention mechanism of future tokens.
In this work, we show that the importance of the latter role might be overestimated.
To show that, we start by manipulating the representations of previous tokens; e.g. by replacing the hidden states at some layer k with random vectors.
Our experiments with four LLMs and four tasks show that this operation often leads to a small to negligible drop in performance. Importantly, this happens if the manipulation occurs in the top part of the model—k is in the final 30–50% of the layers. In contrast, doing the same manipulation in earlier layers might lead to chance-level performance.
We continue by switching the hidden state of certain tokens with hidden states of other tokens from another prompt; e.g., replacing the word “Italy” with “France” in “What is the capital of Italy?”. We find that when applying this switch in the top 1/3 of the model, the model ignores it (answering “Rome”). However, if we apply it earlier, the model conforms to the switch (“Paris”).
Our results hint at a two stage process in transformer-based LLMs: the first part gathers input from previous tokens, while the second mainly processes that information internally.
§ INTRODUCTION
The attention mechanism in transformer-based <cit.> LLMs allows information to flow from the hidden representation of one token to another.
While this process is the same for all model layers,
previous work has shown that different layers capture different types of information <cit.>.
It is therefore not entirely clear that this flow of information is equally important for all layers. Can we find a distinction between layers that aggregate information from previous tokens, and those that process this information internally?
To better understand these dynamics, we apply various manipulations to the hidden states of all tokens barring the current one, and evaluate their impact on the model's performance over various tasks.
We consider several different manipulations, e.g., replacing the hidden state at layer k with random vectors; and replacing the hidden states of all upper layers (ℓ > k) with those of the last unmanipulated layer (k). We note that none of the manipulations in this work involves further training or fine-tuning.
We experiment with four LLMs (Llama2-7B, <cit.>; Mistral-7B, <cit.>; Yi-6B, <cit.>; and Llemma-7B, <cit.>) across four tasks: question answering, text summarization, and two additional tasks we compile to allow further analysis.
Our results show that transformers are surprisingly robust to manipulations of their previous tokens. Freezing up to 50% of the layers results in some cases in no loss in performance across multiple tasks. Moreover, replacing up to 30% of the top layers with random vectors also results in little to no decrease. Importantly, we identify a distinct point where LLMs become robust to these manipulations: applying them at that point or later leads to little to moderate drop in performance, while doing it earlier leads to a drastic drop in performance.
To further study this phenomenon, we consider a third manipulation: switching the hidden states of certain tokens with a hidden state computed based on a separate prompt. E.g., in factual question answering tasks (“What is the capital of Italy?”), we replace the hidden state of the token “Italy” with that of the token “France” from another prompt. Our results are striking: in accordance with our previous results, doing this manipulation at the top 1/3 of the model leads to no change in prediction. However, doing it earlier leads to the generation of the output corresponding to the change (“Paris” instead of “Rome”).
Finally, we consider dropping the attention mechanism altogether, by skipping the attention block in all layers starting a given layer k. As before, we find in some cases a high variance in how important attention mechanism is across layers: doing this at the bottom layers leads to severe performance degradation, while doing it at higher layers results in smaller drops, and even matches baseline performance in some tasks.
Our results shed light on the way information is processed in transformer LLMs. In particular, they suggest a two-phase process: in the first, the model extracts information from previous tokens. In this phase any change to their hidden representation leads to substantial degradation in performance. In the second phase, information is processed internally, and the representation of previous tokens matters less. They also have potential implications for making transformer LLMs more efficient, by allowing both to skip upper attention layers, and accordingly, reducing the memory load of caching these computations. We will publicly release our code.
§ MANIPULATED LLM GENERATION
Decoder-only transformers consist of a series of transformer blocks. Each block contains an attention block and a feed-forward block.[Normalization and residual connections are omitted for brevity.] Formally, to generate token n+1, we process the n'th token by attending to all previous tokens i≤ n. Formally, for layer ℓ, we define the attention scores A^ℓ_n as follows:
A^ℓ_n = softmax(q^ℓ_n · K^ℓ_n/√(d))· V^ℓ_n
where
q^ℓ_n=W_q^ℓ x^ℓ-1_n; K^ℓ_n=W_k^ℓ X^ℓ-1_1,…,n; V^ℓ_n = W_v^ℓ X^ℓ-1_1,…,n
In this setup, W^ℓ_q/k/v are weight matrices, d is the hidden size dimension, x^ℓ-1_n is the representation of the current token in the previous layer, and X^ℓ-1_1,…,n is a matrix of the hidden representation of all tokens from the previous layer.
We highlight two important properties of transformers. First, the K^ℓ_n and V^ℓ_n matrices are the only components in the transformer block that observe the previous tokens in the document. Second, all transformer layers are defined exactly the same, as described above. In this work we argue that perhaps this uniformity across layers is unnecessary.
We aim to ask how sensitive a model is to observing the exact history tokens, or in other words—how much will observing manipulated versions of them impact it. Below we describe the manipulations we employ. We stress that all the manipulations in this work operate on the pre-trained model, and do not include any training or fine-tuning. See <ref> for visualization of the different approaches.
§.§ Introducing noise
First, we ask whether the content of the hidden state even matters. To do so, we replace the hidden states at layer k of all previous tokens (X^k_1,…,n-1) with random vectors. The next layer (k+1) gets these random vectors as input, and the following layers continue as normal.
We use two policies for introducing noise to the history of tokens, both ensuring the noisy hidden states have the same norm as the original hidden states: shuffle, where we take the original hidden state and permute its indices randomly; and random, where we draw a vector with mean 0 and variance 1 and then re-scale it so that its norm matches that of the original hidden state.
If the model is successful in this setup at some layer, we may conclude that it has already gained the majority of the relevant information from the tokens prior to that layer, and in practice does not make excessive use of this information in higher layers.
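To make the setup concrete, the following is a minimal sketch of such a manipulation (an illustration, not the authors' code). It assumes a HuggingFace Llama-style decoder whose blocks live in model.model.layers and receive the running hidden states as their first positional argument, and it applies one shared permutation for the shuffle policy for simplicity; installing it as a forward pre-hook on layer k+1 means layer k+1 sees the corrupted history.

```python
# Sketch: replace the layer-k hidden states of all previous tokens with
# "shuffle" or "random" vectors of the same norm (PyTorch >= 2.0 hooks).
import torch

def history_noising_hook(policy="random"):
    def hook(module, args, kwargs):
        hidden = args[0]                              # (batch, seq_len, d)
        if hidden.shape[1] < 2:                       # single-token decode steps: nothing to corrupt
            return None
        past, current = hidden[:, :-1, :], hidden[:, -1:, :]
        if policy == "shuffle":                       # permute each vector's coordinates
            perm = torch.randperm(past.shape[-1], device=past.device)
            noisy = past[..., perm]
        else:                                         # random vector, rescaled to the original norm
            noise = torch.randn_like(past)
            noisy = noise * past.norm(dim=-1, keepdim=True) / noise.norm(dim=-1, keepdim=True)
        return (torch.cat([noisy, current], dim=1), *args[1:]), kwargs
    return hook

# k = 16
# handle = model.model.layers[k + 1].register_forward_pre_hook(
#     history_noising_hook("shuffle"), with_kwargs=True)
```

With KV caching, the corruption happens once during the prefill pass, and the keys and values cached above layer k are then derived from the noised history, which matches the intended setup.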
§.§ Freezing
We next turn to address the question of whether deep processing of the previous tokens is needed. To do so, we freeze the model at layer k and copy the hidden state at that layer to all subsequent layers. [This is common practice in early-exit setups (), where some tokens require deeper processing than the previous ones that performed an early-exit.] Formally, for ℓ > k, we set X^ℓ-1_1,…,n-1=X^k_1,…,n-1.
If this manipulation would result in large performance degradation, we may conclude that subsequent processing in higher layers is important. Alternatively, a minor to no drop in performance would indicate that perhaps the processing up to layer k is sufficient.
§ EXPERIMENTAL SETUP
§.§ Datasets
We aim to understand the flow of information in various test cases. For this purpose, we consider four benchmarks, which we describe below: two that we curate, which allow us to perform nuanced, meticulous manipulations; and two additional benchmarks for standard tasks—question answering and summarization.
Capitals We devise a simple fact-extraction QA dataset, which consists of 194 country-capital pairs. The dataset is in the format of “What is the capital of X?”. To align the model to return the capital only, rather than a full sentence such as “The capital of X is Y”, we use a 1-shot setting. We report exact match accuracy.
Math We compile a second dataset, consisting of simple math exercises of addition and subtraction of single-digit numbers. We consider two variants: 2-term (i.e., the subtraction/addition of two numbers, e.g., “1 + 2 =”), and 3-term (e.g., “1 + 2 + 3 =”). In both cases, we only consider cases where the answer is also a single (non-negative) digit. In 3-term, we also verify that each mid-step can be represented as a single token. This results in 110 2-term instances, and 1210 3-term instances.
This dataset has a few desired properties. First, it has a clear answer, which facilitates evaluation. Second, perhaps surprisingly, it is not trivial—the math-tailored LLM we experiment with for this task (Llemma; ) only reaches ≈80% on 2-term and ≈50% on 3-term.
Third, it allows us to easily increase the level of difficulty of the problem (by considering 2-term vs. 3-term). We report exact match accuracy.
SQuAD We also consider the Stanford Question Answering
Dataset (SQuAD; ), a dataset
consisting of question-paragraph pairs, where one
of the sentences in the paragraph contains the answer to the corresponding question. The task is to correctly output the segment that contains the answer. We sample 1,000 instances from the SQuAD test set and report exact match.
CNN/Daily Mail This dataset <cit.> contains news articles from CNN and the Daily Mail. The task is to generate a summary of these articles. We sample 100 instances from the CNN/Daily Mail test set and report averaged rouge-1 and rouge-l scores <cit.>.
§.§ Models
For the textual tasks (Capitals, SQuAD, and CNN/Daily Mail), we experiment with three open-source, decoder-only models, each containing 32 layers: Llama2-7B <cit.>, Mistral-7B <cit.>, and Yi-6B <cit.>.
For the math dataset, we observe that these models perform strikingly poorly, so we experiment with Llemma-7B <cit.>, a Code Llama <cit.> based model finetuned for math.
§ RESULTS
Capitals
Figure <ref>
shows the results of our manipulations on the capitals dataset. We first note that all LLMs are surprisingly robust to the different manipulations. When freezing the top ≈50% of the model, all models reach similar performance as the baseline (unmanipulated model).
For the shuffle and random manipulations, we observe the same trend, though at later layers: for both manipulations, the model matches the baseline performance if applied at the top ≈30% of the model.
We also note that, interestingly, in almost all cases we observe a critical layer k', for which the model performs almost at chance level if manipulated in layers i ≤ k', while substantially improving if applied afterwards. While this point varies between models and manipulation types, e.g., from layer 8 (Llama2, freeze) to 25 (Yi, shuffle), the phenomenon in general is quite robust.
Our results hint that LLMs exhibit a two-phase processing: the first part gathers information from previous tokens. At this part the content of previous hidden states is highly important. In contrast, at the second part, the model mostly consolidates this information, and is far less sensitive to such manipulations.
We next consider the math dataset (<ref>).
Again, we observe that freezing the top 1/2 of the model results in a similar performance to the baseline in both setups (2-term and 3-term). However, here the shuffle and random manipulations perform similarly to the baseline only when applied at the top 10% of the model.
Here we also observe that we do not see a clear transition point from chance level performance to baseline-level, but rather a more steady increase.
Interestingly, the trends in the 2-term and 3-term settings are similar; despite the fact that the 2-term problems are substantially easier to the model than 3-term ones.
SQuAD
We now turn to consider common NLP tasks, and start with SQuAD (<ref>).
We first observe that for two of the three LLMs, applying the freeze manipulation leads to the same trend as before: comparable (or even better!) results relative to the baseline. This happens starting at layer 20 (Llama2) or 25 (Yi). In contrast, for Mistral, the manipulated model only matches the performance of the full model after 30 (of 32) layers.
Considering the two manipulations, we observe that for Llama2, both variants also reach the baseline performance after 25–27 layers. However, the other two models never fully reach it. Nonetheless, for these models we clearly observe the transition point seen in the capitals experiments: between 15–23 layers, model performance is at chance level, and afterwards it dramatically improves. These results further support the two-phase setup.
CNN / Daily Mail
Finally, we consider the CNN/ Daily Mail dataset (<ref>).
As in SQuAD, under the freeze manipulation Llama2's performance is similar to, or even slightly better than, the baseline when it is applied in the final ≈30% of the layers. Results for the other two models are close, though clearly inferior to the baseline. In contrast, the two manipulations lead to substantially lower scores.
Discussion
Our results demonstrate a few interesting trends. First, we observe that in almost all cases, models are robust to freezing, in some cases as early as after 50% of the layers.
We also note that in some cases (capitals; SQuAD with Llama2) LLMs are robust to adding noise, but in general this does lead to noticeable performance degradation. Nonetheless, we still observe a rather consistent phenomenon with these manipulations, which shows that applying them too early leads to chance-level performance, and at a certain layer, results suddenly improve dramatically (albeit not reaching the baseline performance).
Our results suggest that a two-step phase in the processing of LLMs: a first step that gathers information from previous tokens, and a second which consolidates it.
We next turn to further explore this hypothesis, by presenting two additional manipulations—replacement and skipping the attention mechanism altogether.
§ INJECTING INFORMATION FROM A DIFFERENT PROMPT
To further test the two-phase hypothesis, we study the impact of “injecting” new information to the model, in the form of replacing the hidden representation of a given token with that of another token from a different prompt.[This process is often called “patching” <cit.>.]
For example, given the question “What is the capital of Italy?”, we replace the hidden states corresponding to the word “Italy” in layer ℓ with the hidden states corresponding to “France” at layer ℓ from another prompt.
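A rough sketch of this patching procedure is shown below (an illustration, not the authors' implementation). It assumes a HuggingFace Llama-style model, that hidden_states[ℓ] returned with output_hidden_states=True is the input to block ℓ, and that the relevant token positions src_pos and tgt_pos are known.

```python
# Sketch: take the layer-l hidden state of a token from a source prompt and
# overwrite a token position of the target prompt at the same layer.
import torch

@torch.no_grad()
def capture_hidden(model, src_ids, layer, src_pos):
    out = model(src_ids, output_hidden_states=True)
    return out.hidden_states[layer][:, src_pos, :]      # input to block `layer`

def patch_and_generate(model, tokenizer, tgt_ids, layer, tgt_pos, donor_vec):
    def hook(module, args, kwargs):
        hidden = args[0]
        if hidden.shape[1] <= tgt_pos:                   # later decode steps: nothing to patch
            return None
        hidden = hidden.clone()
        hidden[:, tgt_pos, :] = donor_vec
        return (hidden, *args[1:]), kwargs
    handle = model.model.layers[layer].register_forward_pre_hook(hook, with_kwargs=True)
    try:
        out = model.generate(tgt_ids, max_new_tokens=5)
    finally:
        handle.remove()
    return tokenizer.decode(out[0, tgt_ids.shape[1]:])
```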
Our results are shown in <ref>.
For each model, we identify a clear transition point: before it, the model answer conforms to the patched value (e.g., “Paris” in the example above), and afterwards the model is unimpacted by the manipulation, returning the original answer (“Rome”).
These results further illustrate the two phases we observed in previous experiments.
§ IS ATTENTION NEEDED AT TOP LAYERS?
Our results so far indicate that the role of previous tokens is far more important in the bottom layers of the model than in top ones. A question that arises now is whether the attention mechanism is even needed in those top layers.
To study this question, we experiment with skipping the attention block in those layers, and only applying the feed-forward sub-layer.[Following preliminary experiments, we also skip the normalization prior to the attention.]
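To make the manipulation concrete, the following toy pre-norm decoder block (not the code of any of the evaluated models) shows what skipping the attention sub-layer, together with its preceding normalisation, means in practice: only the residual stream and the feed-forward path remain.

```python
import torch
import torch.nn as nn

class ToyDecoderBlock(nn.Module):
    """Pre-norm transformer block with an optional 'skip attention' mode."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, x, skip_attention=False):
        if not skip_attention:
            h = self.norm1(x)
            attn_out, _ = self.attn(h, h, h, need_weights=False)
            x = x + attn_out
        # the feed-forward sub-layer is always applied
        return x + self.mlp(self.norm2(x))

x = torch.randn(1, 10, 64)
block = ToyDecoderBlock()
print(block(x, skip_attention=True).shape)  # torch.Size([1, 10, 64])
```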
We find that in three of the four datasets (capitals, <ref>; SQuAD, <ref>; and CNN / Daily Mail, <ref>), the effect of this process is similar to that of the shuffle manipulation: the models are surprisingly robust to it in some cases, though not in all.
In contrast, and perhaps surprisingly, we find that skipping the attention block in the dataset leads to chance-level performance in all cases (<ref>).
This might indicate that the nature of this problem, in which each and every token is critical for the final answer, forces the model to use the attention mechanism all the way through.
§ RELATED WORK
To better understand the inner workings of transformers, previous work has explored the roles of the different transformer layers. <cit.> found that linguistic features can be extracted from hidden representations.
<cit.> demonstrated that lower layers are associated with shallow patterns while higher layers are associated with semantic ones.
<cit.> and <cit.> trained transformers from scratch with different sub-layer orderings and found some variants that outperform the default ordering. Related to our work, they observed that orderings with more attention layers in the bottom half and more feed-forward layers at the top tended to perform better, hinting that attention is more important at the bottom layers.
Early exit methods <cit.>, which speed up LLMs by processing only the bottom part of the model, also provide evidence that the top layers of the model have already gained relevant information from previous tokens.
Some of our setups are similar to previous work. We allow top layers to use hidden representations from lower layers, as used by <cit.>. We also experiment with the patching technique in our analysis. Previously, <cit.> explored hidden representations in few-shot settings and demonstrated that patching an operator (e.g., the “→" token) from in-context tasks to another context preserves the operation. <cit.> showed that patching can be seen as a generalization of various prior interpretability methods and demonstrated how this method can be used in other cases, e.g., feature extraction. Both of these works aimed to study the hidden representations in transformers and the features they encode. In contrast, we propose to patch different vectors into a given context to learn more about the flow of information between the tokens in that context.
§ CONCLUSION
We investigated the role of the attention mechanism across a range of layers.
We applied various manipulations over the hidden states of previous tokens, and showed that their impact is far less pronounced when applied to the top 30–50% of the model. Moreover, we switched the hidden states of specific tokens with hidden states of other tokens from another prompt. We found that there is a distinct point, at the top 1/3 of the model, before which the model conforms to the switch and after which it ignores it, answering the original question. Finally, we experimented with dropping the attention component altogether starting from a given layer. We found, again, that in some cases (though not all) doing this in the top 30% of the model has only a small effect, while doing it earlier has a much larger one.
Our results shed light on the inner workings of transformer LLMs, by hinting at a two-phase setup of their text generation process: first, they aggregate information from previous tokens, and then they decipher its meaning and generate a new token.
Our work could potentially be extended to reduce LLM costs: first by skipping the attention component in upper layers, and second by alleviating the need to cache their outputs for future generation steps.
§ ACKNOWLEDGEMENTS
This work was supported in part by NSF-BSF grant 2020793.
|
http://arxiv.org/abs/2409.02639v1 | 20240904121427 | Do anomalously-dense hot Jupiters orbit stealth binary stars? | [
"Tanvi Goswamy",
"Andrew Collier Cameron",
"Thomas G. Wilson"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.SR"
] |
§ ABSTRACT
The Wide Angle Search for Planets (WASP) survey used transit photometry to discover nearly 200 gas-giant exoplanets and derive their planetary and stellar parameters.
Reliable determination of the planetary density depends on accurate measurement of the planet's radius, obtained from the transit depth and photodynamical determination of the stellar radius. The stellar density, and hence the stellar radius are typically determined in a model-independent way from the star's reflex orbital acceleration and the transit profile. Additional flux coming from the system due to a bright, undetected stellar binary companion can, however, potentially dilute
the transit curve and radial velocity signal, leading to under-estimation of the planet's mass and radius, and to overestimation of the planet’s
density.
In this study, we cross-check the published radii of all the WASP planet host stars, determined from their transit profiles and radial-velocity curves, against radiometric measurements of stellar radii derived from their angular diameters (via the Infrared Flux method) and trigonometric parallaxes.
We identify eight systems showing radiometric stellar radii significantly greater than their published photodynamical values: WASPs 20, 85, 86, 103, 105, 129, 144 and 171. We
investigate these systems in more detail to establish plausible ranges of angular and radial-velocity separations within which such “stealth binaries” could evade detection,
and deduce their likely
orbital periods, mass ratios, and flux ratios.
After accounting for
the dilution of transit depth and radial velocity amplitude, we
find that on average,
the planetary densities for the identified stealth binary systems
should be reduced
by a factor of 1.3.
stars: planetary systems –
stars: binaries: general –
planets and satellites: fundamental parameters planets –
planets and satellites: gaseous planets –
techniques: photometric –
techniques: spectroscopic
§ INTRODUCTION
The Wide-Angle Search for Planets (WASP) project <cit.>
has published the discoveries of over 178 transiting gas-giant exoplanets in
close orbits about their host stars <cit.>.
The validation and characterisation of a WASP planet candidate involves
photodynamical analysis of the transit light curve to establish the planet/star radius
ratio and the stellar density from the transit depth and duration respectively <cit.>.
Since most WASP host stars have masses close to solar, the inverse cube root
of the stellar density (in solar units) provides an approximate estimate of
the stellar radius. Follow-up spectroscopy yields the stellar effective
temperature, surface gravity and metallicity. This spectroscopic
characterisation allows the stellar mass to be estimated. Radial-velocity
observations of the host star's orbit then determine the planetary mass.
Since the release of the Gaia DR2 and
DR3
catalogues
<cit.>,
the availability of precise
parallaxes has made it possible to determine WASP host-star radii independently. The
stellar angular diameter can be estimated from the effective temperature
and an estimate of the bolometric flux received at Earth, via the Infrared Flux method ("IRFM")
of <cit.>. The angular diameter and Gaia parallax
together yield a direct geometrical estimate of the stellar radius.
The radii determined via this method can be compared directly with the radii inferred from
photodynamical fits to their planets' transit profiles.
The stellar radius estimates obtained via these two methods should agree
unless the light of the host star is diluted significantly by a stellar
binary companion. In such cases, the additional flux dilutes both the
transit depth and the radial velocity amplitude.
This in turn leads to an overestimation of the planet’s density and an
underestimation of its radius and mass. Stellar binaries are usually
detectable if the orbit is small enough that the Doppler-shifted
spectral lines of the two stars are resolvable; or wide enough
for the binary to be resolved through direct imaging. We coin the term
"stealth binaries" to characterise systems whose orbital separations
lie in the range in which they cannot be resolved by either method.
<cit.> have carried out a similar study, discussing the effects of undetected multiplicity on planetary radii for Kepler Objects of Interest (KOIs). For each KOI, they found the best-fit Dartmouth isochrone, and considered all stars along the isochrones that had absolute Kepler magnitudes fainter than the primaries as viable companions. Their derivation of the theoretical correction factor X_R - by which the planetary radius would have been underestimated - is similar to the equations derived in Sections 4.3 and 4.4 of this paper. They derived mean values of X_R for all possible scenarios up to a multiplicity of 3. <cit.> developed this concept further, using the relationship between the mean stellar density and stellar effective temperature to identify which of the stellar components in eight marginally-resolved multi-star systems were the hosts of transiting planets discovered with Kepler/K2.
In this paper we compare the stellar radii obtained from the
spectral-energy distributions and Gaia parallaxes of a large sample of WASP planet-host stars, against those calculated from transit fitting and spectroscopic characterisation. In the majority of cases the host stars are not known a priori to be binaries. However, if previously-undetected binaries are present in the WASP catalogue, the photodynamical host-star radii derived from the stellar density via the transit duration will be smaller than those determined from the stellar angular diameter and parallax. In cases where the discrepancy is significant,
their planets' bulk densities will have been over-estimated.
Our methods are discussed in detail in Section 2. In Section 3, we present the comparison of results from both methods.
In Section 4.1, we examine these systems in more
detail to predict the limits on angular separation and
the limiting difference in radial velocities of the two stars
that would allow the secondary star to remain undetected.
These limits can be used to classify these systems
as “stealth binaries” and estimate the plausible range of
orbital periods of the stellar binaries. Section 4.2 discusses
WASP-85AB – a known stellar binary which we
use to verify some of our methods. In Section 4.3,
we estimate the most probable flux ratio and mass
ratio for each stealth binary system. The factor by
which the radii from both methods differs gives us
information about how much additional flux is
being received from the system – and thus an
estimate on the luminosity ratio of the two stars in
the binary. We then use the evolutionary tracks and isochrone tables of
<cit.> to estimate the binary mass ratios
from the luminosity ratios. Finally, in Section 4.4,
we assess the effect of contamination of
observations on the derived system parameters, and
hence recalculate them after having accounted for
the secondary star. Corrections to parameters for some WASP planets based on similar studies have
previously been made by <cit.>,
<cit.>, and <cit.>;
which will be discussed in Section 4.5. We conclude the study and
suggest follow-up observations in Section 5.
§ METHODS
We start by comparing the published radii of a sample of
178 host stars from the WASP survey, obtained by photodynamical
modelling of their transit profiles, with radii estimated
from their angular diameters and parallaxes.
Photodynamical modelling of planetary transits is carried out routinely
as part of the discovery process leading to the announcement of a new planet.
<cit.> reviewed the
planetary and stellar parameters that can be measured from precise
photometry of exoplanet transits combined with radial-velocity follow-up.
The transit duration (T) gives us an estimate of the stellar
density as per Eq. <ref> for central transits, and the transit depth (δ) gives
us the ratio of the planetary radius (R_p) to the stellar
radius (R_s), as per the relation √(δ) = R_p/R_s.
T/3 h≈ (P/4 days)^1/3*(ρ_s/ρ_⊙)^-1/3
Here, P is the orbital period of the planet, and ρ_s is
the stellar density. Since most FGK main sequence
stars have a mass close to unity, their radius R_s can
be estimated directly from the density. Dilution of
the transit curve due to flux from a potential binary
companion will not cause a change in the transit
duration, which is why the R_s calculated from this method (hereon "R_trans") can be reliably compared with
results from the IRFM. However, dilution will cause a
decrease in δ and thus an underestimation of R_p.
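As a minimal numerical sketch of this photodynamical route, the transit-duration relation above can be inverted for the stellar density and combined with the transit depth; the period, duration and depth below are illustrative values rather than measurements of any real system.

```python
# Photodynamical sketch for a hypothetical central transit (illustrative numbers).
P_days = 3.0          # orbital period
T_hours = 2.5         # transit duration
depth = 0.012         # transit depth (delta)

rho_star = (P_days / 4.0) * (3.0 / T_hours) ** 3   # stellar density, solar units
R_trans = rho_star ** (-1.0 / 3.0)                 # radius in solar units, assuming M_s ~ 1 M_sun
R_planet = depth ** 0.5 * R_trans                  # planet radius in solar radii

print(f"rho_s ~ {rho_star:.2f} rho_sun, R_trans ~ {R_trans:.2f} R_sun, R_p ~ {R_planet:.3f} R_sun")
```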
R_s can also be derived from the spectrum of the star
using the IRFM, which gives us the star’s effective
temperature (T_eff). This method was first proposed by
<cit.>, after which various
improved scales have been proposed. The basic idea
of the IRFM is to compare the ratio between the
bolometric flux (f_bol) and the flux in a given IR bandpass (f_IR) –
both received at the Earth’s atmosphere; to the ratio
between the stellar surface bolometric flux (σ T_eff ^4) and the surface flux in the same IR bandpass (F_IR, model), which is determined theoretically using the stellar T_eff, stellar surface gravity log g, and [Fe/H] values
from the star’s spectrum <cit.>.
This is shown in Eq. <ref>, where T_eff is the only
unknown quantity:
f_ bol/f_ IR=σ T_ eff^4/F_ IR, model
Using T_eff along with the f_bol received from the star
and the parallax from Gaia
DR3
<cit.> extracted using VizieR[https://vizier.cds.unistra.fr/viz-bin/VizieR-3?-source=I/355/gaiadr3] <cit.>, R_s (hereon "R_IRFM") can be calculated using Eq. <ref>:
f_bol=σ T_eff ^4 ( R_s/d )^2
Here, d is the distance to the star given by the inverse of the
parallax, and σ is the Stefan-Boltzmann constant.
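A minimal sketch of this relation, with illustrative input values rather than measurements of any real WASP host, is:

```python
import numpy as np

sigma_sb = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
pc = 3.0856775814913673e16         # metres per parsec
R_sun = 6.957e8                    # solar radius, m

f_bol = 1.0e-12                    # W m^-2, apparent bolometric flux (illustrative)
T_eff = 5800.0                     # K
parallax_mas = 5.0                 # milliarcseconds

d = pc / (parallax_mas / 1000.0)                    # distance in metres
R_s = d * np.sqrt(f_bol / (sigma_sb * T_eff ** 4))  # radiometric stellar radius
print(f"R_s ~ {R_s / R_sun:.2f} R_sun at d ~ {d / pc:.0f} pc")
```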
In this study we estimate the angular diameter
by fitting the apparent magnitudes in eight optical/IR bandpasses:
Gaia BP, G and RP <cit.>;
2MASS J, H and Ks <cit.>;
and WISE W1 and W2 <cit.>, with
synthetic photometry derived from the stellar model atmospheres of <cit.>. While other bandpasses can be used in addition to these, this set has two advantages: they are available for all the WASP host stars, and the angular-diameter values derived from them are independent of degeneracies between stellar effective temperature and interstellar reddening <cit.>.
At the distances of typical WASP host stars, the parallax and the angular diameter
combine to yield the stellar radius to a precision
of order 1 to 2 percent.
This is comparable to the precision achievable with asteroseismology (e.g. ).
Our aim is to compare the values of R_s using both the
methods above for all the WASP systems. The stars
that have a larger R_s from the IRFM than from photodynamical modelling (i.e.,
R_IRFM>R_trans) are our targets of interest.
§ RESULTS
We have used TEPCat[https://www.astro.keele.ac.uk/jkt/tepcat/allplanets-noerr.html] <cit.> to extract all our data on the WASP systems.
Because the discovery papers for the majority of these planets were published
prior to the Gaia DR2 and DR3 data releases, the published R_s values have in most cases been
obtained using the transit duration method. We performed the IRFM calculations using Python
routines[Routines used are in the file “GaiaIRFM_EDR3.py”] developed by the authors, based on the astroquery <cit.>, pysynphot <cit.> and pyphot <cit.> packages, on all the systems to check the consistency with the
published values. The inferred stellar radii are plotted against the values catalogued in TEPCat, in Fig. <ref>. We have also extracted the renormalised unit weight error (RUWE) values for each star from Gaia DR3, which should be ≃ 1 for single sources <cit.>.
The RUWE is similar in character to the reduced χ^2 statistic; <cit.> note that values in excess of 1.4 are indicative of an extended or binary source in Gaia DR2, while <cit.> and <cit.> suggest a lower threshold of order 1.25 for Gaia DR3.
The points on the graph in Fig. <ref> are colour-coded by their RUWE values to highlight outliers with high RUWE.
The radiometric stellar radii were derived from the angular diameter (obtained from the apparent bolometric flux and effective temperature derived from the synthetic photometry via eqs. <ref> and <ref>) and the Gaia parallax. These were then compared with the photodynamical stellar radii R_ trans obtained from TEPCat, which in the source publications had generally been computed under the assumption that no contaminating light was present.
We have identified 8 clear outliers in Fig. <ref> which lie significantly above the
R_ trans=R_ IRFM line, but near or below the R_ IRFM=√(2)R_ trans
line along which binaries with equal-luminosity components should lie. Six of these eight WASP hosts (WASP-85, 103, 105, 129, 144, 171) have
R_trans < R_IRFM≲√(2) R_trans, while WASPs 20 and 86 lie above the line. Half of these also have high RUWE values, as indicated by their colours on the graph, which provides further evidence of binarity <cit.>. We note that WASP-86 <cit.> has recently been identified as the same star as KELT-12 <cit.>. The apparent discrepancy in the stellar radii reported in these two discovery papers has recently been reconciled by <cit.>.
This star has a long and shallow transit, causing the initial discrepancy in measurement of its properties in the two discovery papers. We will hence not include WASP-86 in our analysis. The revised radius can be found in <cit.>. Out of the remaining seven stars, WASP-20 and WASP-85 are known binary systems. We will look at WASP-85 in more detail in a later section as we have sufficient information about its binary companion from the discovery paper itself. WASP-20 will be discussed in Section 4.5.
§ DISCUSSION
Since we have identified the systems where we
suspect the presence of a bright, unresolved secondary star diluting the
transit curve, we can now begin our investigation of
these systems in more detail. The next few sections
discuss the various studies that we carried out on
these 7 systems.
We first investigate the reasons why the companion
star has not been detected yet.
§.§ Radial velocity difference and angular separation
The transit depth can be diluted due to background stars that may not be bound to the primary star.
It is unlikely, however, that such a chance alignment would involve
two physically unrelated stars with indistinguishable radial velocities.
The absence of two resolvable spectra therefore allows us to assume that the
two stars form a wide but bound pair, and that the dilution is
caused by a secondary star that has a RV very close to that of the
primary. For the two stars' spectra to be unresolved with a radial-velocity
spectrometer such as SOPHIE, CORALIE or HARPS, the Doppler shift between the two stars' spectral lines
would have to be close enough for their cross-correlation function to have a single
peak. To estimate the detection threshold for such a binary, we approximated
the CCFs of slowly-rotating solar-type stars as Gaussian profiles with the same width as
the CCF of the Sun observed with the HARPS-N solar telescope feed <cit.>, as shown in Fig. <ref>.
For RV separations less than
ΔRV = 8 km/s, the combined signal appears
like that of a single star, with no sign of the presence
of a companion. Above that, the combined signal broadens
and develops significant kurtosis, before separating into
a recognisably bimodal profile. We therefore set our upper
limit for stealth binaries at Δ RV < 8 km s^-1.
It is also possible for stealth binaries to be mistaken for planet-host systems in the absence of high-resolution spectra, as discussed by <cit.>. They found that the primary false-positive scenario for astrometric exoplanet detections is an unresolved binary system with similar components and a small
photocentric
orbit.
Another factor that can
allow a contaminating secondary star to evade detection
during the RV follow-up is the
angular separation (θ) between the two stars. If θ is
less than the fibre diameter of the instrument used,
then the measured RV of the primary star will be
contaminated. Spectroscopic follow-up for the
WASP systems is usually done using the SOPHIE
<cit.> or CORALIE <cit.>, instruments. These have fibre-aperture diameters of 3" and 2" respectively – which
means any binaries with θ significantly less than the fibre diameter would
not be detectable. However, the aperture size of the
HARPS <cit.> spectrograph is 1" and
this instrument was used for follow-up on WASP-85, which is why we set our lower limit for
detection at θ = 1".
Given the masses of the two stars and an estimate of the orbital period,
the physical separation a between the two stars follows from Kepler’s 3rd law.
Together with the parallax (π̂) from Gaia, this yields the angular separation.
In solar units,
θ/1 arcsec =
(π̂/1 arcsec)
(P/1 y)^2/3(M_1+M_2/M_⊙)^1/3.
The maximum radial-velocity separation Δ R V, assuming a circular orbit
viewed edge-on, follows from energy conservation and Kepler’s 3rd law:
Δ RV/v_⊕ =
(M_1+M_2/M_⊙)^1/2(a/1 au)^-1/2 =
(M_1+M_2/M_⊙)^1/3(P/1 y)^-1/3,
scaling to the Earth's orbital velocity v_⊕≃ 30 km s^-1.
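A short numerical sketch of these two scaling relations, using an illustrative parallax and total mass, shows how the range of "hidden" periods follows from the limits θ < 1 arcsec and ΔRV < 8 km/s adopted above:

```python
import numpy as np

# Illustrative inputs, not a specific WASP system.
parallax_arcsec = 0.005            # 5 mas, i.e. d ~ 200 pc
M_total = 1.5                      # total binary mass in solar masses
v_earth = 30.0                     # km/s, Earth's orbital velocity

P_years = np.logspace(0, 5, 200)   # 1 to 1e5 yr
theta = parallax_arcsec * P_years ** (2.0 / 3.0) * M_total ** (1.0 / 3.0)   # arcsec
dRV = v_earth * M_total ** (1.0 / 3.0) * P_years ** (-1.0 / 3.0)            # km/s

stealth = (theta < 1.0) & (dRV < 8.0)   # inside the "hiding zone"
if stealth.any():
    print(f"hiding zone: P ~ {P_years[stealth].min():.0f} to {P_years[stealth].max():.0f} yr")
```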
In Fig. <ref> we plot the Δ RV values obtained
using Eq. <ref> as a function of the angular separation θ
computed with Eq. <ref> for
each of the seven WASP host stars with anomalous radii, on
a logarithmic grid of periods spanning the range 1-10^5 years.
The total mass is assumed to be between 1.5 and 2 times that of the planet-hosting star. Planets are preferentially identified around the brighter components of binaries when the observations are signal-to-noise-limited. The mass ratio distribution of binary systems is quite flat, so a reasonable assumption might be that the total mass is on average ∼ 1.5 times that of the planet-host star if its stellar companion is bright enough to affect the overall angular diameter. If we also make allowance for cases where the luminosity ratio is close to 1 and the planet orbits the fainter star, an assumed mass ratio closer to 2 might be preferable. Figure 3 shows that the difference between these two assumptions has little practical impact on the inferred angular and radial-velocity separations.
The blue box indicates the limits on Δ RV and θ
below which we anticipate the binary to remain undetected; i.e., to lie within
the hiding zone. Each point on the curve for any
WASP host corresponds to a particular estimate on
the period of the two stars. Thus, using the
boundaries of our “stealth binaries box”, we
calculate the minimum and maximum period, as
well as the minimum possible angular separation.
These are summarised in Table <ref>. Increasing
the assumed system mass to twice that of the primary decreases
the maximum period by about 15 percent and increases
the minimum period by 30 percent. The minimum angular
separation increases by about 35 percent.
The periods obtained are in the range of a hundred
to a few thousand years. This explains naturally why none of these
systems could have been detected as a spectroscopic binary in
the decade or so since their discovery.
§.§ WASP-85
The binary companion of WASP-85A was detected
during its discovery by <cit.>. From
the discovery paper, we took the values for the mass
of the companion, and the observed angular
separation from Gaia DR3 to get the
corresponding orbital period from Eq. <ref>.
To estimate the true orbital separation from the observed angular separation,
we must take into account the random orbit orientation. The distribution of apparent angular
separation as a fraction of true separation depends on the angle between the
line connecting the stellar centres (assumed randomly oriented in space) and the line of sight. Fig. <ref>
shows this probability distribution, with a mean of
0.79. Thus, on average, we only observe 80% of the
true angular separation.
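This projection factor can be checked with a short Monte Carlo sketch: for isotropically oriented separations the apparent fraction is sin(i), whose mean is π/4 ≈ 0.785.

```python
import numpy as np

rng = np.random.default_rng(1)
cos_i = rng.uniform(-1.0, 1.0, size=1_000_000)   # isotropic orientations
projection = np.sqrt(1.0 - cos_i ** 2)           # fraction of true separation seen in projection
print(f"mean projection factor ~ {projection.mean():.3f}")   # ~0.785
```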
The Gaia DR3 catalogue gives the observed angular
separation of the WASP-85AB binary system as
θ= 1.28” ; this suggests a true angular
separation of about 1.6”. On the plot of Δ RV vs θ in Fig. <ref>,
this angular separation corresponds to a Δ RV of 2.78 km/s and an
orbital period of 2500 years, which is in good
agreement with the estimate of 2000-3000 years in
the discovery paper.
The companion star (WASP-85B) is about 0.9 mag
fainter in the G band than the primary star, with a
flux ratio of about 0.5 in most bands. The discovery paper gives a mass
M_B=0.88 ± 0.07 M_⊙ and a radius R_B=0.77 ± 0.13 R_⊙.
If we take the photodynamical radius of the primary R_A, trans=0.935 R_⊙
and scale it by the square root of the flux ratio f_A+B/f_A≃ 1.5, we
expect to obtain R_ IRFM=1.16 R_⊙ when the light of both components is combined.
Since Gaia was able to resolve the system, we combined the
Gaia magnitudes for the primary and secondary stars to
recalculate the total flux coming from the system
and hence the combined Gaia BP, G and RP magnitudes.
Combining these with the
2MASS and WISE magnitudes (in which the binary is unresolved),
the resulting angular diameter and parallax yielded
R_ IRFM=1.16 R_⊙. This confirms that the angular radius
derived from the combined spectral energy distribution of both
binary components is overestimated by an amount that is
consistent with our knowledge of the two stars. This means that in cases
where R_ trans < R_IRFM≲√(2)R_ trans, there
is a strong possibility that the planet orbits the brighter component of a stealth
binary. There may indeed be cases where the luminosity ratio is close to 1, and
a planet orbiting the fainter component produces detectable transits.
In such cases, it would be possible to obtain
R_IRFM≳√(2)R_ trans. This could explain the location
of the system WASP-20, above the orange line in Fig. <ref>, as discussed in Section <ref>.
§.§ Predicting flux and mass ratios
We now use the information we have about the
companion star to make estimates on the flux and
mass ratios with respect to the primary star for each
of our outliers. We use isochrones from
<cit.>
to relate stellar masses
to magnitudes. Table <ref> shows some of the
parameters of the primary stars that we have used
for our analysis in this section. The stellar radii and
masses have been taken from TEPCat, while the
rough age estimates have been taken from the
discovery papers.
Using these isochrones, we estimated the ratio of fluxes of the primary to the companion star. We evaluated the flux ratio for every mass ratio q = M_2/M_1 from 0.7 to 1.2 in intervals of 0.01. We also computed the most probable flux ratio from the factor r by which R_IRFM is greater than the actual radius; i.e., 1+f_2/f_1 = r^2 where f_2 and f_1 are the fluxes received from the secondary and primary stars respectively. Each value of r corresponds to a different value of q, found by interpolating the isochrone for a star of the same age, assuming that the age of the companion is the same as that of the primary star. These results are summarised in Table <ref>, while Fig. <ref> shows the probability curves for the factor r, given the value that was observed.
All the most-probable mass ratios are within the range 0.89-1, barring WASP-20. For this system, r has a value greater than √(2), implying that f_2 > f_1; i.e., the secondary star is brighter than the primary, and the planet is orbiting the fainter and hence denser star. The identity of the planet-hosting component of the binary is discussed in Section <ref>.
§.§ Recalculating planetary parameters
Using the most probable flux ratios calculated in the previous section, we can obtain the value of the factor ‘r’ and use this to correct the planet’s radius R_p, mass M_P, and density ρ_P. We know that:
√(δ)∝ R_p, and correcting for the dilution multiplies δ by r^2 ⇒ R_p∝ r
where δ is the transit depth. From <cit.>:
ρ_P∝K/(R_p)^3 and K ∝ r^2⇒ρ_P∝ r^-1 and M_P∝ r^2
K is the RV amplitude, whose observed value is the flux-weighted average of all the K values for each star. Using the above relations and the published values of these quantities, we recalculated the planetary parameters, as summarised in Table <ref>.
The surface gravity of the planet varies directly with its mass and inversely with the square of its radius. After accounting for the stealth binarity; the mass goes up by r^2 while the radius goes up by r – which means that the planet's surface gravity remains unchanged. This is an interesting result, as it tells us that the planetary surface gravity is immune to stealth binaries.
§.§ WASP-20 and WASP-103
The binary companion of WASP-20A was
discovered at a separation of 0.26 arcsec by <cit.> using near-IR adaptive-optics imaging with the SPHERE instrument on the VLT. They inferred an
increase in planetary mass M_P and radius R_p by 4σ
and 1σ respectively, after having accounted for the
dilution to the transit curve and radial velocity
amplitude. They concluded that the planet orbits the brighter of the two stars, on the grounds that the inferred stellar density in the planet-orbits-fainter-star scenario yielded an implausibly old stellar age of 16 Gyr. The inferred separation of 61 au and system mass of order 2M_⊙ implies a period of order 340 years, and hence a radial-velocity separation K_A+K_B≃ 5.3 km s^-1,
placing it within the blue box in Fig. <ref>.
<cit.> made further
corrections to these parameters using updated light
curves and spectroscopic data. They were unable to rule out the planet-orbits-fainter-star scenario, finding a density 1.08 times solar and an age of 3.3 Gyr. In both cases, the planet's density is reduced relative to that obtained by assuming that the host star is single. In our own study, the balance of probability in Fig. <ref> suggests that the ratio of the angular and photodynamical radii is greater than 1.4 for an assumed age of 4 Gyr. Our study thus favours the planet-orbits-fainter-star scenario.
The planetary parameters for WASP-103b were also
corrected in the analysis by <cit.> for the presence
of a stellar companion at a separation of 0.23 arcsec. This companion was
discovered in a lucky-imaging survey by <cit.>.
It is 3.1 magnitudes fainter than the brighter star in the i' band
and 2.6 mag fainter in the z' band. We find that a discrepancy remains
between R_ IRFM and the value of R_ trans found by <cit.> even
after this correction, so we infer that the host star may have
another brighter and closer companion.
In their study of the tidal deformation and orbital decay rate of WASP-103b,
<cit.> recently found that
the faint visual companion of the host star was too distant and insufficiently massive to explain the inferred positive RV acceleration a=+0.113 ± 0.058 m s^-1 day^-1, whose sign is contrary to expectations for tidal orbit decay. This suggests that the unresolved stellar companion responsible for the observed excess in the stellar angular diameter could also be the cause of the anomalous observed acceleration. Using our derived value q=0.92 for the stealth companion of WASP-103, we get M_2=1.11 ± 0.10 M_⊙, where M_2 is the mass of the companion. If we assume that the unseen companion is close to superior conjunction, the host star should be accelerating away from the observer. Using the observed acceleration and the inferred secondary mass, assuming a circular orbit and a high inclination we obtain an approximate estimate of the orbital separation:
d=√(GM_2/a)≃ 71 ± 18 au.
where d is the binary separation. At maximum elongation this would give an angular separation θ=0.15±0.04 arcsec and an orbital period P≃ 400 y. The maximum radial-velocity separation of the two stars would then be
K_1+K_2=2π d/P ≃ 1.1 au/yr, or 5.2 km s^-1. These inferred maximum angular and radial-velocity separations lie comfortably within the blue region in Fig. <ref>, explaining why the inferred binary companion has thus far evaded detection.
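A direct numerical evaluation of the separation formula above with the values quoted in the text (companion mass and observed acceleration) reproduces the separation of about 71 au:

```python
import numpy as np

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_sun = 1.989e30                   # kg
au = 1.495978707e11                # m

M2 = 1.11 * M_sun                  # inferred companion mass (q ~ 0.92)
accel = 0.113 / 86400.0            # 0.113 m/s/day converted to m/s^2

d = np.sqrt(G * M2 / accel)        # separation assuming superior conjunction
print(f"d ~ {d / au:.0f} au")      # ~ 71 au, matching the value quoted above
```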
§ CONCLUSION AND SUGGESTED WORK
Various properties of transiting exoplanets can be derived from their transit photometry and RV measurements. If these get contaminated by other stars that are bound to the host star, then the calculated parameters need to be corrected by accounting for the additional flux in the system. We find that the surface gravity of the planet is a quantity that is not affected by the dilution of the transit and RV observations. We have recalculated the masses, radii, and densities for seven WASP planets (Table <ref>) where we suspect that the host star has a hidden stellar companion.
Our estimate of the flux ratio and orbital period for the WASP-85 binary system is consistent with that obtained in the discovery paper of <cit.>. Dilution of observations for WASP-20 and WASP-103 had been accounted for previously in literature.
Our comparisons of the R_IRFM values with estimates of the host-star radius based on the photodynamical stellar density suggest that the planet in the WASP-20 system may orbit the fainter of the two stars, and that the stealth companion of WASP 103's host star may be responsible for the apparent secular increase in the planet's orbital period.
Overall, the densities of the planets have gone down on average by a factor of 1.3. We also made estimates on the orbital periods, mass ratios and flux ratios for all seven stealth binary systems, which have been outlined in Tables <ref> and <ref>. The most probable flux ratios range from about 0.5 to 1, and mass ratios from 0.89 to 1 - barring WASP-20, which is an exception wherein both ratios are above 1 implying that the planetary host star is the denser and fainter of the two stellar binary components. The orbital periods range from a hundred to a few thousand years, which explains why the secondary stars have not been detected in radial-velocity observations yet. Nonetheless, the example of WASP-103 discussed above suggests that long-term RV monitoring could reveal secular accelerations in systems with companions of unequal luminosity.
These binary systems appear to have angular separations below 1^'' but are not close enough to give resolvable Doppler shifts. We also suggest that follow-up observations be made using speckle imaging, lucky imaging, or adaptive optics. Speckle imaging removes effects of turbulence in the atmosphere and provides simultaneous photometric and astrometric data at sub-arcsecond precisions <cit.>. The ‘Differential Speckle Survey Instrument’ <cit.> has successfully detected binary companions to stars in the ‘Kilodegree Extremely Little Telescope’ survey <cit.>, and can thus be used for these WASP systems as well - as was done
for WASP-103 <cit.>.
§ DATA AVAILABILITY
The stellar and planetary data for the WASP systems investigated in this research are available in the TEPCat database curated by Dr John Southworth at the University of Keele.
All other research data underpinning this publication and the PYTHON
code and notebook used to prepare all diagrams in this paper will
be made available through the University of St Andrews Research
Portal.
The research data supporting this publication can be accessed at https://doi.org/10.17630/1ceff0e3-c2aa-40d0-996a-0d3f8d81cf19 <cit.>.
The code supporting this publication can be accessed at https://doi.org/10.17630/72a692c8-dd85-4078-99d3-8a5551727ff9 <cit.>.
§ ACKNOWLEDGEMENTS
Tanvi Goswamy thanks Siddharth Rangnekar for helpful discussions and assistance with preparation of the manuscript.
Andrew Collier Cameron and Thomas Wilson acknowledge support from STFC consolidated grant numbers ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1.
This research has made use of NASA’s Astrophysics Data System.
This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France.
mnras
99
[Anderson et al.2015]And15 Anderson D. R., Collier Cameron A., Hellier C., Lendl M., Lister T. A., Maxted P. F. L., Queloz D., et al., 2015, A&A, 575, A61. doi:10.1051/0004-6361/201423591
[Anderson et al.2017]2017A A...604A.110A Anderson D. R., Collier Cameron A., Delrez L., Doyle A. P., Gillon M., Hellier C., Jehin E., et al., 2017, A&A, 604, A110. doi:10.1051/0004-6361/201730439
[Baranne et al.1996]1996A AS..119..373B Baranne A., Queloz D., Mayor M., Adrianzyk G., Knispel G., Kohler D., Lacroix D., et al., 1996, A&AS, 119, 373
[Barros et al.2022]2022A A...657A..52B Barros S. C. C., Akinsanmi B., Boué G., Smith A. M. S., Laskar J., Ulmer-Moll S., Lillo-Box J., et al., 2022, A&A, 657, A52. doi:10.1051/0004-6361/202142196
[Belokurov et al.2020]2020MNRAS.496.1922B Belokurov V., Penoyre Z., Oh S., Iorio G., Hodgkin S., Evans N. W., Everall A., et al., 2020, MNRAS, 496, 1922. doi:10.1093/mnras/staa1522
[Blackwell & Shallis1977]1977MNRAS.180..177B Blackwell D. E., Shallis M. J., 1977, MNRAS, 180, 177. doi:10.1093/mnras/180.2.177
[Bressan et al.2012]2012MNRAS.427..127B Bressan A., Marigo P., Girardi L., Salasnich B., Dal Cero C., Rubele S., Nanni A., 2012, MNRAS, 427, 127. doi:10.1111/j.1365-2966.2012.21948.x
[Brown et al.2014]2014arXiv1412.7761B Brown D. J. A., Anderson D. R., Doyle A. P., Maxted E. G. F. L., Smalley B., McCormac J., Almenera J. M., et al., 2014, arXiv, arXiv:1412.7761
[Casagrande et al.2010]2010A A...512A..54C Casagrande L., Ramírez I., Meléndez J., Bessell M., Asplund M., 2010, A&A, 512, A54. doi:10.1051/0004-6361/200913204
[Castelli & Kurucz2003]2003IAUS..210P.A20C Castelli F., Kurucz R. L., 2003, IAUS, 210, A20
[Castro-Ginard et al.2024]2024A A...688A...1C Castro-Ginard A., Penoyre Z., Casey A. R., Brown A. G. A., Belokurov V., Cantat-Gaudin T., Drimmel R., et al., 2024, A&A, 688, A1. doi:10.1051/0004-6361/202450172
[Ciardi et al.2015]2015ApJ...805...16C Ciardi D. R., Beichman C. A., Horch E. P., Howell S. B., 2015, ApJ, 805, 16. doi:10.1088/0004-637X/805/1/16
[Coker et al.2018]2018AJ....155...27C Coker C. T., Gaudi B. S., Pogge R. W., Horch E., 2018, AJ, 155, 27. doi:10.3847/1538-3881/aa9f0e
[Collier Cameron2016]2016AndrewCameron Collier Cameron, A. (2016) “Chapter 2: Extrasolar Planetary Transits” in Bozza, V. et al. (ed.) Methods of Detecting Exoplanets: 1st Advanced School on Exoplanetary Science. Springer International Publishing AG, pp. 89-131
[Collier Cameron et al.2019]2019MNRAS.487.1082C Collier Cameron A., Mortier A., Phillips D., Dumusque X., Haywood R. D., Langellier N., Watson C. A., et al., 2019, MNRAS, 487, 1082. doi:10.1093/mnras/stz1215
[Delrez et al.2018]2018MNRAS.474.2334D Delrez L., Madhusudhan N., Lendl M., Gillon M., Anderson D. R., Neveu-VanMalle M., Bouchy F., et al., 2018, MNRAS, 474, 2334. doi:10.1093/mnras/stx2896
[Evans et al.2016]2016ApJ...833L..19E Evans D. F., Southworth J., Smalley B., 2016, ApJL, 833, L19. doi:10.3847/2041-8213/833/2/L19
[Faedi et al.2016]2016arXiv160804225F Faedi F., Gómez Maqueo Chew Y., Pollacco D., Brown D. J. A., Hébrard G., Smalley B., Lam K. W. F., et al., 2016, arXiv, arXiv:1608.04225
[Fouesneau2022]2022zndo...7016774F Fouesneau M., 2022, pyphot, doi://10.5281/zenodo.7016775
[Gaia Collaboration et al.2023]2023A A...674A...1G Gaia Collaboration, Vallenari A., Brown A. G. A., Prusti T., de Bruijne J. H. J., Arenou F., Babusiaux C., et al., 2023, A&A, 674, A1. doi:10.1051/0004-6361/202243940
[Gaia Collaboration et al.2021]2021A A...649A...1G Gaia Collaboration, Brown A. G. A., Vallenari A., Prusti T., de Bruijne J. H. J., Babusiaux C., Biermann M., et al., 2021, A&A, 649, A1. doi:10.1051/0004-6361/202039657
[Gillon et al.2014]2014A A...562L...3G Gillon M., Anderson D. R., Collier-Cameron A., Delrez L., Hellier C., Jehin E., Lendl M., et al., 2014, A&A, 562, L3. doi:10.1051/0004-6361/201323014
[Ginsburg et al.2021]2021zndo...5804082G Ginsburg A., Sipőcz B., Parikh M., Brasseur C. E., Jcsegovia, Groener A., Norman H., et al., 2021, astropy/astroquery, doi://10.5281/zenodo.5804082
[Goswamy et al.2024a]goswamy2024a Goswamy T., Collier Cameron A., Wilson, T. G., 2024, Dataset, University of St Andrews Research Portal, https://doi.org/10.17630/1ceff0e3-c2aa-40d0-996a-0d3f8d81cf19
[Goswamy et al.2024b]goswamy2024b Goswamy T., Collier Cameron A., Wilson, T. G., 2024, Software, University of St Andrews Research Portal, https://doi.org/10.17630/72a692c8-dd85-4078-99d3-8a5551727ff9
[Hatzes2016]Hatzes2016 Hatzes, A. P. (2016) “Chapter 1: The Radial Velocity Method for the Detection of Exoplanets” in Bozza, V. et al. (ed.) Methods of Detecting Exoplanets: 1st Advanced School on Exoplanetary Science. Springer International Publishing AG, pp. 3-86.
[Hellier et al.2019]2019MNRAS.482.1379H Hellier C., Anderson D. R., Bouchy F., Burdanov A., Collier Cameron A., Delrez L., Gillon M., et al., 2019, MNRAS, 482, 1379. doi:10.1093/mnras/sty2741
[Horch et al.2009]2009AJ....137.5057H Horch E. P., Veillette D. R., Baena Gallé R., Shah S. C., O'Rielly G. V., van Altena W. F., 2009, AJ, 137, 5057. doi:10.1088/0004-6256/137/6/5057
[Marcussen & Albrecht2023]2023AJ....165..266M Marcussen M. L., Albrecht S. H., 2023, AJ, 165, 266. doi:10.3847/1538-3881/acd53d
[Matson, Howell, & Ciardi2019]2019AJ....157..211M Matson R. A., Howell S. B., Ciardi D. R., 2019, AJ, 157, 211. doi:10.3847/1538-3881/ab1755
[Maxted et al.2016]2016A A...591A..55M Maxted P. F. L., Anderson D. R., Collier Cameron A., Delrez L., Gillon M., Hellier C., Jehin E., et al., 2016, A&A, 591, A55. doi:10.1051/0004-6361/201628250
[Mayor et al.2003]2003Msngr.114...20M Mayor M., Pepe F., Queloz D., Bouchy F., Rupprecht G., Lo Curto G., Avila G., et al., 2003, Msngr, 114, 20
[Nielsen et al.2019]2019MNRAS.489.2478N Nielsen L. D., Bouchy F., Turner O. D., Anderson D. R., Barkaoui K., Benkhaldoun Z., Burdanov A., et al., 2019, MNRAS, 489, 2478. doi:10.1093/mnras/stz2351
[Ochsenbein, Bauer, & Marcout2000]2000A AS..143...23O Ochsenbein F., Bauer P., Marcout J., 2000, A&AS, 143, 23. doi:10.1051/aas:2000169
[Payne et al.2018]2018AJ....156..209P Payne A. N., Ciardi D. R., Kane S. R., Carter B., 2018, AJ, 156, 209. doi:10.3847/1538-3881/aae310
[Penoyre, Belokurov, & Evans2022]2022MNRAS.513.2437P Penoyre Z., Belokurov V., Evans N. W., 2022, MNRAS, 513, 2437. doi:10.1093/mnras/stac959
[Perruchot et al.2008]2008SPIE.7014E..0JP Perruchot S., Kohler D., Bouchy F., Richaud Y., Richaud P., Moreaux G., Merzougui M., et al., 2008, SPIE, 7014, 70140J. doi:10.1117/12.787379
[Pollacco et al.2006]2006PASP..118.1407P Pollacco D. L., Skillen I., Collier Cameron A., Christian D. J., Hellier C., Irwin J., Lister T. A., et al., 2006, PASP, 118, 1407. doi:10.1086/508556
[Queloz et al.2000]2000A A...354...99Q Queloz D., Mayor M., Weber L., Blécha A., Burnet M., Confino B., Naef D., et al., 2000, A&A, 354, 99
[Schanche et al.2020]2020MNRAS.499..428S Schanche N., Hébrard G., Collier Cameron A., Dalal S., Smalley B., Wilson T. G., Boisse I., et al., 2020, MNRAS, 499, 428. doi:10.1093/mnras/staa2848
[Silva Aguirre et al.2015]2015MNRAS.452.2127S Silva Aguirre V., Davies G. R., Basu S., Christensen-Dalsgaard J., Creevey O., Metcalfe T. S., Bedding T. R., et al., 2015, MNRAS, 452, 2127. doi:10.1093/mnras/stv1388
[Silva Aguirre et al.2017]2017ApJ...835..173S Silva Aguirre V., Lund M. N., Antia H. M., Ball W. H., Basu S., Christensen-Dalsgaard J., Lebreton Y., et al., 2017, ApJ, 835, 173. doi:10.3847/1538-4357/835/2/173
[Skrutskie et al.2006]2006AJ....131.1163S Skrutskie M. F., Cutri R. M., Stiening R., Weinberg M. D., Schneider S., Carpenter J. M., Beichman C., et al., 2006, AJ, 131, 1163. doi:10.1086/498708
[Southworth2010]2010MNRAS.408.1689S Southworth J., 2010, MNRAS, 408, 1689. doi:10.1111/j.1365-2966.2010.17231.x
[Southworth2011]2011MNRAS.417.2166S Southworth J., 2011, MNRAS, 417, 2166. doi:10.1111/j.1365-2966.2011.19399.x
[Southworth et al.2020]2020A A...635A..74S Southworth J., Bohn A. J., Kenworthy M. A., Ginski C., Mancini L., 2020, A&A, 635, A74. doi:10.1051/0004-6361/201937334
[Southworth & Faedi2022]SF21 Southworth J., Faedi F., 2022, Obs, 142, 1
[Stevens et al.2017]2017AJ....153..178S Stevens D. J., Collins K. A., Gaudi B. S., Beatty T. G., Siverd R. J., Bieryla A., Fulton B. J., et al., 2017, AJ, 153, 178. doi:10.3847/1538-3881/aa5ffb
[STScI Development Team2013]2013ascl.soft03023S STScI Development Team, 2013, ascl.soft. ascl:1303.023
[Winn2009]2009IAUS..253...99W Winn J. N., 2009, IAUS, 253, 99. doi:10.1017/S174392130802629X
[Wöllert & Brandner2015]2015A A...579A.129W Wöllert M., Brandner W., 2015, A&A, 579, A129. doi:10.1051/0004-6361/201526525
[Wright et al.2010]2010AJ....140.1868W Wright E. L., Eisenhardt P. R. M., Mainzer A. K., Ressler M. E., Cutri R. M., Jarrett T., Kirkpatrick J. D., et al., 2010, AJ, 140, 1868. doi:10.1088/0004-6256/140/6/1868
[Ziegler et al.2020]ziegler2020 Ziegler C., Tokovinin A., Briceño C., Mang J., Law N., Mann A. W., 2020, AJ, 159, 19. doi:10.3847/1538-3881/ab55e9
§
R_IRFM and R_trans values along with their errors, used in Fig. 1
WASP host star R from IRFM Published R_trans
WASP-001 1.506 ± 0.015 1.470 ± 0.027
WASP-002 0.884 ± 0.013 0.821 ± 0.014
WASP-003 1.339 ± 0.011 1.298 ± 0.049
WASP-004 0.912 ± 0.007 0.910 ± 0.018
WASP-005 1.106 ± 0.008 1.088 ± 0.040
WASP-006 0.817 ± 0.006 0.864 ± 0.025
WASP-007 1.457 ± 0.009 1.478 ± 0.088
WASP-008 0.994 ± 0.012 0.976 ± 0.021
WASP-010 0.736 ± 0.011 0.678 ± 0.030
WASP-011 0.858 ± 0.013 0.772 ± 0.015
WASP-012 1.706 ± 0.025 1.657 ± 0.045
WASP-013 1.544 ± 0.008 1.657 ± 0.079
WASP-014 1.300 ± 0.011 1.318 ± 0.084
WASP-015 1.420 ± 0.011 1.522 ± 0.044
WASP-016 1.084 ± 0.008 1.087 ± 0.042
WASP-017 1.579 ± 0.019 1.583 ± 0.041
WASP-018 1.244 ± 0.007 1.255 ± 0.028
WASP-019 1.003 ± 0.007 1.018 ± 0.015
WASP-020 1.953 ± 0.223 1.242 ± 0.045
WASP-021 1.178 ± 0.009 1.136 ± 0.051
WASP-022 1.212 ± 0.017 1.255 ± 0.030
WASP-023 0.824 ± 0.007 0.819 ± 0.031
WASP-024 1.376 ± 0.013 1.317 ± 0.041
WASP-025 0.916 ± 0.006 0.924 ± 0.018
WASP-026 1.303 ± 0.011 1.284 ± 0.036
WASP-028 1.107 ± 0.008 1.083 ± 0.025
WASP-029 0.787 ± 0.011 0.808 ± 0.044
WASP-030 1.440 ± 0.013 1.389 ± 0.029
WASP-031 1.280 ± 0.012 1.252 ± 0.033
WASP-032 1.155 ± 0.013 1.110 ± 0.050
WASP-033 1.548 ± 0.016 1.509 ± 0.025
WASP-034 1.057 ± 0.005 0.930 ± 0.120
WASP-035 1.119 ± 0.008 1.090 ± 0.030
WASP-036 0.943 ± 0.008 0.985 ± 0.014
WASP-037 1.043 ± 0.012 1.003 ± 0.053
WASP-038 1.495 ± 0.015 1.331 ± 0.028
WASP-039 0.925 ± 0.006 0.939 ± 0.022
WASP-041 0.892 ± 0.005 0.886 ± 0.012
WASP-042 0.862 ± 0.006 0.892 ± 0.021
WASP-043 0.678 ± 0.018 0.667 ± 0.011
WASP-044 0.936 ± 0.012 0.865 ± 0.038
WASP-045 0.908 ± 0.012 0.917 ± 0.024
WASP-046 0.908 ± 0.008 0.858 ± 0.027
WASP-047b 1.151 ± 0.010 1.137 ± 0.013
WASP-047c 1.150 ± 0.010 1.137 ± 0.013
WASP-047d 1.151 ± 0.011 1.137 ± 0.013
WASP-048 1.757 ± 0.014 1.519 ± 0.051
WASP-049 1.024 ± 0.011 1.038 ± 0.037
WASP-050 0.876 ± 0.005 0.855 ± 0.019
WASP-052 0.841 ± 0.008 0.786 ± 0.016
WASP-053 0.837 ± 0.006 0.798 ± 0.023
WASP-054 1.699 ± 0.012 1.828 ± 0.086
WASP-055 1.087 ± 0.009 1.102 ± 0.019
WASP-056 1.199 ± 0.010 1.112 ± 0.024
WASP-057 1.079 ± 0.009 0.927 ± 0.033
WASP-058 1.172 ± 0.011 1.170 ± 0.130
WASP-059 0.710 ± 0.015 0.613 ± 0.044
WASP-060 1.488 ± 0.015 1.401 ± 0.066
WASP-061 1.356 ± 0.010 1.390 ± 0.030
WASP-062 1.238 ± 0.008 1.280 ± 0.050
WASP-063 1.755 ± 0.013 1.880 ± 0.080
WASP-064 1.072 ± 0.011 1.058 ± 0.025
WASP-065 1.101 ± 0.009 1.010 ± 0.050
WASP-066 1.785 ± 0.037 1.750 ± 0.090
WASP-067 0.867 ± 0.006 0.817 ± 0.022
WASP-068 1.684 ± 0.011 1.690 ± 0.085
WASP-069 0.847 ± 0.019 0.813 ± 0.028
WASP-070 1.250 ± 0.019 1.251 ± 0.079
WASP-071 2.144 ± 0.021 2.260 ± 0.170
WASP-072 2.180 ± 0.030 1.980 ± 0.240
WASP-073 2.213 ± 0.017 2.070 ± 0.135
WASP-074 1.531 ± 0.008 1.536 ± 0.026
WASP-075 1.306 ± 0.012 1.270 ± 0.020
WASP-076 1.836 ± 0.037 1.765 ± 0.071
WASP-077 1.071 ± 0.028 0.955 ± 0.015
WASP-078 2.031 ± 0.023 2.350 ± 0.095
WASP-079 1.590 ± 0.010 1.510 ± 0.035
WASP-080 0.650 ± 0.015 0.593 ± 0.012
WASP-081 1.241 ± 0.013 1.283 ± 0.039
WASP-082 2.086 ± 0.015 2.219 ± 0.087
WASP-083 1.057 ± 0.010 1.050 ± 0.050
WASP-084 0.817 ± 0.006 0.768 ± 0.019
WASP-085 1.157 ± 0.043 0.935 ± 0.023
WASP-086 2.136 ± 0.016 1.291 ± 0.014
WASP-087 1.623 ± 0.019 1.627 ± 0.062
WASP-088 2.131 ± 0.029 2.080 ± 0.090
WASP-089 0.904 ± 0.010 0.880 ± 0.030
WASP-090 1.892 ± 0.031 1.980 ± 0.090
WASP-091 0.856 ± 0.007 0.860 ± 0.030
WASP-092 1.286 ± 0.016 1.341 ± 0.058
WASP-093 1.629 ± 0.019 1.524 ± 0.040
WASP-094 1.567 ± 0.012 1.620 ± 0.045
WASP-095 1.213 ± 0.010 1.130 ± 0.060
WASP-096 1.081 ± 0.010 1.050 ± 0.050
WASP-097 1.090 ± 0.008 1.060 ± 0.040
WASP-098 0.735 ± 0.006 0.741 ± 0.021
WASP-099 1.688 ± 0.013 1.760 ± 0.085
WASP-100 1.723 ± 0.014 1.670 ± 0.145
WASP-101 1.311 ± 0.009 1.290 ± 0.040
WASP-102 1.389 ± 0.016 1.331 ± 0.013
WASP-103 1.757 ± 0.100 1.413 ± 0.045
WASP-104 0.935 ± 0.006 0.935 ± 0.010
WASP-105 1.093 ± 0.028 0.900 ± 0.030
WASP-106 1.481 ± 0.013 1.393 ± 0.038
WASP-107 0.661 ± 0.014 0.670 ± 0.020
WASP-108 1.247 ± 0.016 1.215 ± 0.040
WASP-109 1.412 ± 0.019 1.346 ± 0.044
WASP-110 0.876 ± 0.007 0.881 ± 0.035
WASP-111 1.887 ± 0.015 1.850 ± 0.100
WASP-112 1.082 ± 0.012 1.002 ± 0.037
WASP-113 1.741 ± 0.015 1.608 ± 0.105
WASP-114 1.414 ± 0.017 1.430 ± 0.060
WASP-117 1.213 ± 0.007 1.170 ± 0.063
WASP-118 1.858 ± 0.021 1.754 ± 0.016
WASP-119 1.107 ± 0.009 1.200 ± 0.100
WASP-120 1.707 ± 0.015 1.870 ± 0.110
WASP-121 1.473 ± 0.008 1.440 ± 0.030
WASP-122 1.454 ± 0.010 1.520 ± 0.030
WASP-123 1.221 ± 0.017 1.285 ± 0.051
WASP-124 1.147 ± 0.010 1.020 ± 0.020
WASP-126 1.192 ± 0.013 1.270 ± 0.075
WASP-127 1.354 ± 0.012 1.333 ± 0.027
WASP-128 1.203 ± 0.010 1.152 ± 0.019
WASP-129 1.185 ± 0.012 0.900 ± 0.020
WASP-130 1.035 ± 0.008 0.960 ± 0.030
WASP-131 1.673 ± 0.021 1.526 ± 0.065
WASP-132 0.740 ± 0.010 0.740 ± 0.020
WASP-133 1.507 ± 0.017 1.440 ± 0.050
WASP-134 1.165 ± 0.010 1.175 ± 0.048
WASP-135 0.889 ± 0.008 0.960 ± 0.050
WASP-136 2.071 ± 0.019 2.210 ± 0.220
WASP-137 1.634 ± 0.014 1.520 ± 0.110
WASP-138 1.448 ± 0.013 1.360 ± 0.050
WASP-139 0.818 ± 0.007 0.800 ± 0.040
WASP-140 0.817 ± 0.011 0.870 ± 0.040
WASP-141 1.316 ± 0.013 1.370 ± 0.070
WASP-142 1.642 ± 0.022 1.640 ± 0.080
WASP-143 0.992 ± 0.010 1.013 ± 0.032
WASP-144 1.108 ± 0.032 0.810 ± 0.040
WASP-145 0.674 ± 0.010 0.680 ± 0.070
WASP-146 1.314 ± 0.020 1.232 ± 0.072
WASP-147 1.441 ± 0.016 1.370 ± 0.080
WASP-148 0.926 ± 0.008 1.030 ± 0.200
WASP-150 1.710 ± 0.024 1.651 ± 0.027
WASP-151 1.220 ± 0.011 1.181 ± 0.020
WASP-153 1.654 ± 0.020 1.730 ± 0.095
WASP-156 0.824 ± 0.005 0.760 ± 0.030
WASP-157 1.091 ± 0.014 1.134 ± 0.051
WASP-158 1.571 ± 0.025 1.390 ± 0.180
WASP-159 2.065 ± 0.022 2.110 ± 0.100
WASP-160 0.847 ± 0.008 0.872 ± 0.030
WASP-161 1.612 ± 0.278 1.712 ± 0.078
WASP-162 1.168 ± 0.010 1.110 ± 0.050
WASP-163 1.194 ± 0.014 1.015 ± 0.039
WASP-164 0.957 ± 0.012 0.932 ± 0.029
WASP-165 1.699 ± 0.028 1.650 ± 0.090
WASP-166 1.240 ± 0.010 1.220 ± 0.060
WASP-167 1.916 ± 0.028 1.790 ± 0.050
WASP-168 1.087 ± 0.007 1.120 ± 0.060
WASP-169 2.247 ± 0.028 2.011 ± 0.139
WASP-170 1.016 ± 0.017 0.938 ± 0.059
WASP-171 1.989 ± 0.022 1.637 ± 0.069
WASP-172 2.101 ± 0.031 1.910 ± 0.100
WASP-173 1.069 ± 0.010 1.110 ± 0.050
WASP-174 1.381 ± 0.014 1.347 ± 0.018
WASP-175 1.244 ± 0.012 1.204 ± 0.064
WASP-176 2.005 ± 0.022 1.925 ± 0.046
WASP-177 0.802 ± 0.008 0.885 ± 0.046
WASP-178 1.700 ± 0.018 1.670 ± 0.070
WASP-180 1.177 ± 0.012 1.190 ± 0.060
WASP-181 1.017 ± 0.010 0.965 ± 0.043
WASP-182 1.241 ± 0.011 1.340 ± 0.030
WASP-183 0.873 ± 0.008 0.871 ± 0.038
WASP-184 1.761 ± 0.020 1.650 ± 0.090
WASP-185 1.586 ± 0.015 1.500 ± 0.080
WASP-189 2.343 ± 0.028 2.360 ± 0.030
WASP-190 1.811 ± 0.028 1.600 ± 0.100
WASP-192 1.317 ± 0.015 1.320 ± 0.070
|
http://arxiv.org/abs/2409.03186v1 | 20240905022415 | A graphical exploration of the relationship between parasite aggregation indices | [
"R. McVinish",
"R. J. G. Lester"
] | q-bio.PE | [
"q-bio.PE",
"60E15 (Primary) 92D40 (Secondary)"
] |
A graphical exploration of the relationship between parasite aggregation indices
R. McVinish
School of Mathematics and Physics, University of Queensland
[email protected]
R.J.G. Lester
School of Biological Sciences, University of Queensland
§ ABSTRACT
The level of aggregation in parasite populations is frequently incorporated into ecological studies. It is measured in various ways including variance-to-mean ratio, mean crowding, the k parameter of the negative binomial distribution and indices based on the Lorenz curve such as the Gini index (Poulin's D) and the Hoover index. Assuming the frequency distributions follow a negative binomial, we use contour plots to clarify the relationships between aggregation indices, mean abundance and prevalence. The contour plots highlight the nonlinear nature of the relationships between these measures and suggest that correlations are not a suitable summary of these relationships.
§ INTRODUCTION
Investigations into parasite population dynamics frequently require an indicator of the level of aggregation in the parasite population <cit.>. As the concept of aggregation in parasites is poorly defined <cit.>, aggregation has been measured in various ways. Commonly used indices include prevalence, the Variance-to-Mean Ratio (VMR), and the k parameter of the negative binomial distribution. Closely related to VMR and k are mean crowding and patchiness <cit.> which can be seen as more direct measures of the competitive experience of parasites within a host <cit.>. Two other indices are derived from the Lorenz curve <cit.>, the most widely accepted quantification of inequality. <cit.> proposed using the Gini index <cit.>, which has since become widely used in parasitology <cit.>. The Hoover index (aka Pietra index) has more recently been proposed to measure parasite aggregation <cit.>.
This paper clarifies and extends our previous work on aggregation. It was stimulated by a recent paper by <cit.> which correlated aggregation indices with mean abundance and prevalence using simulated data. We present a more accurate representation using ‘contour plots’, calculated directly from the parameters of the negative binomial distributions. The plots provide a simple and more insightful way to comprehend the relationships.
§ CONTOUR PLOTS
The contours show combinations of two indices, specified on the vertical and horizontal axes, that give rise to similar values of the third index. Contour plots, developed in the 16th century <cit.>, are widely used in other disciplines but rarely in parasitology (e.g. <cit.>).
Our analysis assumed that parasite burden is adequately modelled by a negative binomial distribution <cit.>. Following the typical practice in parasitology, we parameterised the negative binomial distribution in terms of mean abundance, m, and the parameter k which controls the shape of the distribution. We did not make any assumption on the distribution of m and k. We used the range of values for m and k suggested by the extensive data of <cit.>. Their values for m, k and prevalence are superimposed on several of the contour plots as dot points.
To construct a contour plot of an aggregation index against m and k, we expressed the aggregation index as a function of m and k. The population values of several indices can be expressed simply in terms of m and k:
prevalence = 1 - (k/(k+m))^k,
VMR = 1 + m/k , mean crowding = m +m/k, and patchiness = 1 + 1/k. The Gini and Hoover indices lack simple expressions in terms of m and k however, they can still be evaluated numerically. The Hoover index can be expressed in terms of m and k by applying <cit.>,
H=F(m;k,m)-F(m-1;k+1,m+m/k),
where F(x; k, m) is the cumulative distribution function of the negative binomial distribution with parameter k and mean m, evaluated at x. Further details are given in the Appendix. The cumulative distribution function of the negative binomial distribution, F, is available in statistical packages such as R <cit.>.
G= (1+m/k) _2F_1 (k+1,1/2,2;-4 m/k(1+m/k) ),
where _2F_1 is the Gaussian hypergeometric function <cit.>. This can be evaluated in R using the hypergeo package <cit.>. Calculating indices directly from the parameters of the negative binomial distribution rather than using simulated data obviates the need to consider the uncertainty of estimates and the effects of different sample sizes.
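Although our calculations were carried out in R, the same quantities can be evaluated in Python with scipy. In the sketch below the Hoover index transcribes the formula above, while the Gini index is computed numerically from the identity G = (1/m) Σ_x F(x)(1−F(x)) for non-negative integer-valued distributions rather than through the hypergeometric closed form; the example values of k and m are illustrative.

```python
import numpy as np
from scipy.stats import nbinom

def nb(k, m):
    # negative binomial with shape k and mean m (scipy's p = k / (k + m))
    return nbinom(k, k / (k + m))

def hoover_index(k, m):
    # H = F(m; k, m) - F(m-1; k+1, m + m/k); scipy's cdf floors non-integer arguments
    return nb(k, m).cdf(m) - nb(k + 1, m + m / k).cdf(m - 1)

def gini_index(k, m, tail=1e-12):
    # numerical Gini via G = (1/m) * sum_x F(x) * (1 - F(x)), truncated in the far tail
    dist = nb(k, m)
    x = np.arange(0, dist.ppf(1 - tail) + 1)
    F = dist.cdf(x)
    return float(np.sum(F * (1.0 - F)) / m)

print(hoover_index(k=0.5, m=10.0), gini_index(k=0.5, m=10.0))
```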
We also employed contour plots to examine the relationship between aggregation indices, k, and prevalence. This required first solving the equation
prevalence = 1 - (k/(k+m))^k
in terms of k for each pair of m and prevalence in the contour plot. This equation has a unique solution if m + ln(1 - prevalence) > 0. On the other hand, if m + ln(1 - prevalence) < 0, there is no solution to the equation. The solution was found numerically using the uniroot function in R. The expressions for the aggregation indices in terms of m and k are then used to construct the contour plot. Regions of m and prevalence that are inconsistent with a negative binomial distribution are represented as white in the contour plot.
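A minimal sketch of this inversion in Python, with scipy.optimize.brentq standing in for R's uniroot, is given below; the bracketing interval is an illustrative assumption and may need widening for prevalences very close to the Poisson limit 1 - e^{-m}.

import numpy as np
from scipy.optimize import brentq

def k_from_prevalence(m, prev, k_lo=1e-6, k_hi=1e6):
    """Solve 1 - (k/(k+m))**k = prev for k; returns nan where no k is admissible."""
    if m + np.log(1.0 - prev) <= 0:
        return np.nan   # region inconsistent with a negative binomial distribution
    f = lambda k: 1.0 - (k / (k + m)) ** k - prev
    return brentq(f, k_lo, k_hi)

print(k_from_prevalence(m=5.0, prev=0.8))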
All contour plots were produced in R using the ggplot2 package <cit.>. The values for m and k reported in <cit.> were heavily skewed and spanned several orders of magnitude with m ranging between 0.1 and 5200 and k ranging between 0.001 and 16.5. To make the plots clearer, log scaling has been applied to these variables.
§ RELATIONSHIP BETWEEN MEAN ABUNDANCE, K, AND PREVALENCE
The relationship between m, k, and prevalence in wild parasite populations has been examined by several authors with conflicting results <cit.>. While the expression of prevalence in terms of m and k is sufficiently simple to analyse, it is still instructive to construct the contour plot (Fig. <ref> left). In it, each colour represents a region of values of m and k that give rise to similar values of prevalence.
We see that prevalence is increasing in both m and k, leading to contours that are roughly L-shaped over the range of m and k plotted: prevalence is small when either m or k is small, and large when both m and k are large. The contours also show that there is a non-linear relationship between m and k when prevalence is held fixed. The contours become almost parallel to the horizontal axis as k increases, a consequence of lim_{k→∞} prevalence = 1 - e^{-m}. On the other hand, the contours continue to move left as m increases, a consequence of lim_{m→∞} prevalence = 1. The contour plot shows that the rate at which prevalence approaches one as m increases is slow when k is small.
If we restrict our attention to a single-coloured band, i.e. those values of m and k giving rise to similar values of prevalence, we see that, after controlling for prevalence, there is a negative relationship between m and k. This relationship is forced by the negative binomial distribution, so it will hold true in natural systems to the extent that those systems are well modelled by the negative binomial distribution. The different widths of the contour lines show the non-linearity of the relationship between m, k and prevalence.
The dot points represent estimates of m and k from the 269 parasite-host systems reported in <cit.>. Although several parasite-host systems lie in a region of very high prevalence (both m and k large) or very small prevalence (either m or k small), many others occupy a region of the parameter space where a moderate change in the parameter values would result in a significant change in prevalence assuming a negative binomial distribution.
As Shaw & Dobson reported prevalences in their review, it is possible to compare these with the prevalence values implied by the negative binomial distribution (Fig. <ref> right). In general, there is good agreement; most points within a given contour have the same colour. This demonstrates the usefulness of the contour plots for interpreting relationships in real-life situations. The few points where the observed prevalences do not agree with those implied by the negative binomial may correspond to parasite distributions that do not conform to a negative binomial.
§ RELATIONSHIP OF HOOVER & GINI INDICES WITH MEAN ABUNDANCE, K, AND PREVALENCE
Contour plots of the Hoover index and Gini index as functions of m and k are shown in Fig. <ref> left and right. The contour plots are qualitatively very similar and share some similarities with the contour plot of prevalence (Fig. <ref>). Both the Hoover and Gini indices decrease in both m and k, taking values close to one when either m or k is small, and values close to zero when both m and k are large. The contours are L-shaped, becoming almost parallel to the horizontal axis as k increases and almost parallel to the vertical axis as m increases.
The plots show both indices display some stability over a wide range of m and k. Restricting our attention to the Hoover index (Figure <ref> left), we see that for m>5 the value of the index is largely determined by the size of k. For m< 5 the value is less affected by k but more affected by m, as indicated by the number of contours crossed as m decreases. For example, starting from k=1 and m=6, as m decreases the value of the index increases quickly crossing several contours from 0.4 to 1. On the other hand, when m increases from the same point (1,6) the index stays in the same colour band and there is little change in the Hoover value (0.4 to 0.5). For many of the parasite-host systems reported in <cit.>, shown on the figure as dot points, an increase in m, that is moving the points vertically on the contour plot, does not appear to impact the Hoover index since the point would remain in the same-coloured region. On the other hand, in many of the samples, a moderate change in k, that is moving the point horizontally, has a large impact on the Hoover index. Similar behaviour is observed in the contour plot of the Gini index (Fig. <ref> right), with the Gini index appearing to be even less affected by changes in m.
There are two noticeable differences between the contour plots for the Hoover and Gini indices (Fig. <ref>). First, the Gini index is always larger than the Hoover index <cit.> <cit.>. This causes the Gini index to have a smaller range over the region of values for m and k observed in wild populations. Specifically, for the values of m and k reported in <cit.>, the Gini index exceeds 0.9 in 42% (113/269) of cases, compared to 20% (54/269) of cases in which the Hoover index exceeds 0.9. Second, the contours of the Hoover index are not smooth, unlike those of the Gini index. The bumps that occur on the contours of the Hoover index occur at integer values of the mean, the most prominent occurring when the mean is 1. These bumps quickly become much less noticeable as the mean increases.
The contour plots of the Gini and Hoover indices exhibit greater differences when considered as functions of m and prevalence (Fig. <ref>). First, unlike the Gini index, the contour lines of the Hoover index are parallel to the vertical axis when m is less than one. As noted by <cit.>, when all infected hosts harbour infrapopulations larger than or equal to the overall mean, the Hoover index is equal to one minus prevalence. For the negative binomial distribution, this implies the Hoover index is equal to one minus prevalence when the mean is less than or equal to one. Second, there is less variability in the widths of the contours for the Hoover index compared to the Gini index. This suggests the dependence of the Hoover index on prevalence is more regular. At a given m, a change of 0.1 in the prevalence will have roughly the same effect on the value of the Hoover index, regardless of the initial value of prevalence. In contrast, much of the contour plot of the Gini index is coloured yellow, corresponding to values greater than 0.9. Values of the Gini index less than 0.6 are restricted to a small region of the plot, indicating that small changes in prevalence in that region will result in a large change in the Gini index.
The parasite data from Shaw and Dobson were taken from five taxonomic groups. The data, divided into taxa, were superimposed on the plots of m vs k with contour lines of prevalence and Hoover index. They did not show any obvious grouping.
§ LORENZ ORDER AND THE NEGATIVE BINOMIAL DISTRIBUTION
Both the Hoover and Gini indices are seen in Figure <ref> to be decreasing functions of m and k, as is 1 - prevalence (Figure <ref>). This behaviour is due to how these indices relate to the Lorenz curve and how the parameters m and k affect the Lorenz curve of the negative binomial distribution.
The Lorenz curve of a distribution with cumulative distribution function F is given by
L(u) = (1/m) ∫_0^u F^{-1}(y) dy, u ∈ [0,1],
where m is the mean of the distribution and F^{-1}(y) = sup{x : F(x) ≤ y} for y ∈ (0,1) (Gastwirth, 1971). In our context, L(u) describes the proportion of the parasite population harboured by the least-infected proportion u of the host population. When all hosts have the same parasite burden, the Lorenz curve is given by L(u) = u for all u in [0,1]. This is called the egalitarian line. Several indices can be defined in terms of the Lorenz curve. Specifically, the Gini index is twice the area between the Lorenz curve and the egalitarian line, and the Hoover index is the greatest vertical distance between the Lorenz curve and the egalitarian line. Even 1 - prevalence can be viewed as the largest value of u such that L(u) = 0.
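These Lorenz-curve definitions can also be checked numerically; the sketch below builds the (linearly interpolated) Lorenz curve of a negative binomial distribution over a truncated support and reads the Hoover and Gini indices off it, which should agree with the closed-form expressions of Section 2 up to truncation error. The truncation quantile is an illustrative choice, and scipy's nbinom parameterisation is assumed as before.

import numpy as np
from scipy.stats import nbinom

def lorenz_points(k, m, tail=1e-10):
    """Vertices (u_j, L_j) of the Lorenz curve of NB(k, m) on a truncated support."""
    dist = nbinom(k, k / (k + m))
    x = np.arange(0, int(dist.ppf(1.0 - tail)) + 1)
    f = dist.pmf(x)
    u = np.concatenate(([0.0], np.cumsum(f)))            # proportion of hosts
    L = np.concatenate(([0.0], np.cumsum(x * f) / m))     # proportion of parasites
    return u, L

def hoover_from_lorenz(u, L):
    return float(np.max(u - L))          # greatest vertical gap to the egalitarian line

def gini_from_lorenz(u, L):
    area = np.sum(np.diff(u) * (L[1:] + L[:-1]) / 2.0)    # area under the Lorenz curve
    return float(1.0 - 2.0 * area)       # twice the area between curve and diagonal

u, L = lorenz_points(k=0.5, m=10.0)
print(hoover_from_lorenz(u, L), gini_from_lorenz(u, L))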
The Lorenz curve induces a partial ordering of distributions. Assume F_A and F_B are two distribution functions with finite means. If the Lorenz curve of F_A is greater than the Lorenz curve of F_B for all u, then we say that F_A is smaller than F_B in the Lorenz order and write F_A ≤_L F_B. This ordering corresponds to the notion of aggregation put forward by <cit.>. From their connections with the Lorenz curve, we see that if F_A ≤_L F_B, then the Gini and Hoover indices as well as 1 - prevalence will be smaller for F_A than for F_B.
The following result shows that the negative binomial distribution decreases in the Lorenz order as m increases and as k increases.
Let 𝖭𝖡(k,m) denote the negative binomial distribution with parameters k and m.
(a) If m_1 < m_2, then
𝖭𝖡(k,m_2) ≤_L 𝖭𝖡(k, m_1).
(b) If k_1 < k_2, then
𝖭𝖡(k_2,m) ≤_L 𝖭𝖡(k_1,m).
The proof is provided in the Appendix.
The above result explains why the Gini and Hoover indices and 1 - prevalence are all decreasing functions of m and k. Figure <ref> also shows that the contours of both the Gini and Hoover indices become parallel with the axes. This is due to the limiting behaviour of the negative binomial distribution. Depending on how the parameters are allowed to vary, it is known that the negative binomial distribution will converge to either a Poisson distribution or a gamma distribution <cit.>. Fixing m and letting k increase, the negative binomial distribution converges to a Poisson distribution with mean m. This causes the contour lines to become parallel with the horizontal axis as k increases. Similarly, fixing k and letting m increase, an appropriately scaled negative binomial distribution converges to a Gamma distribution with shape and rate parameters both equal to k. Since the Gini and Hoover indices are scale invariant <cit.>, these indices approach their respective values for a Gamma(k, k) distribution as m increases. This causes the contour lines to become parallel with the vertical axis as m increases.
§ DISCUSSION
In choosing an index to measure aggregation, those based on the Lorenz curve, such as the Hoover and Gini indices, seem to be favoured. The Gini index returns similar values over a wider range of means, k and prevalence than the Hoover index, making differences less discernible. The Hoover index has a biological interpretation and may be easier to calculate. When mean abundances are below one, the Hoover index takes a restricted set of values whereas the Gini index has no such restriction, suggesting that the Gini index may be preferred in that situation. Nevertheless, both indices provide a figure that seems to measure the same phenomenon, a phenomenon that is still undefined.
The contour graphs provide an easily interpreted demonstration of the effects of the various parameters on the Hoover and Gini indices. These could be deduced by an analysis of the formulae used to calculate the indices but this is not straightforward; indices do not correlate with a particular parameter. When applying an index to compare aggregation between samples or species, it is useful to know which parameter is having the greatest effect on the index. The contour graphs provide the answer.
In producing the graphs we calculated indices directly from the parameters of the negative binomial distribution rather than using simulated data as done by <cit.>. This obviated the need to consider the uncertainty of estimates and the effects of different sample sizes. Our results demonstrated the deterministic functional relationships between the aggregation indices, and the parameters, mean abundance and prevalence. The relationships were not linear indicating that correlation and principal components analysis may not be the best methods to analyse the relationships <cit.>.
Listing the advantages and disadvantages of Hoover and Gini indices, <cit.> describe them as having the disadvantages of being “strongly negatively correlated with prevalence” and “weakly negatively correlated with mean abundance.” In contrast, the k parameter of the negative binomial distribution and patchiness are described as having the advantages of being “not necessarily correlated with mean abundance” and “only weakly correlated with prevalence.” These comments ignore the fact that the negative binomial distribution, and hence any index computed on that distribution, is completely specified by the mean and prevalence. In other words, the dependence of any index on mean and prevalence is perfectly deterministic. In fact, the dependence on any pair of quantities that can be used to parameterise the negative binomial distribution, like m and k is perfectly deterministic.
<cit.> argue that the Gini index is to be preferred over the Hoover index on the basis that the Hoover index equals one minus prevalence when the mean is less than or equal to one, whereas the Gini index has no such restriction. To decide between the Hoover and Gini indices, if one must choose, then the relationship between these indices and m, k and prevalence needs to be considered more closely. Our contour plots (Fig. <ref> & <ref>) have shown other differences in the behaviour of the Gini and Hoover indices. Compared to the Gini index, the Hoover index has a greater range over the region of values for m and k (or prevalence) observed in wild populations and has a more regular dependence on prevalence. Given these properties and the Hoover index's clear biological interpretation, we argue that the Hoover index should be preferred over the Gini index, at least when m is greater than one.
Our analysis has used contour plots to examine how the Gini and Hoover indices are affected by changes in m, k, and prevalence. This approach could, in principle, be applied to construct contour plots from any three indices, provided two of these can be used to parameterise the negative binomial distribution. For example, one could construct a contour plot of the Gini index as a function of VMR and mean crowding as both m and k can be expressed in terms of VMR and mean crowding:
m = mean crowding - VMR + 1
and
k = mean crowding/(VMR - 1) - 1.
The contour plot could then be constructed using the expression for the Gini index in terms of m and k given in Section 2. Further application of contour plots may unravel other complex relationships in ecological parasitology.
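The conversion above is straightforward to implement; a minimal sketch (with a round-trip check against the definitions VMR = 1 + m/k and mean crowding = m + m/k) is:

def mk_from_crowding_vmr(crowding, vmr):
    """Recover (m, k) from mean crowding and the variance-to-mean ratio."""
    m = crowding - vmr + 1.0
    k = crowding / (vmr - 1.0) - 1.0
    return m, k

m, k = 4.0, 2.0
crowding, vmr = m + m / k, 1.0 + m / k        # 6.0 and 3.0
assert mk_from_crowding_vmr(crowding, vmr) == (4.0, 2.0)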
§ HOOVER INDEX OF THE NEGATIVE BINOMIAL DISTRIBUTION
In parasitology the negative binomial distribution is usually parameterised in terms of the the mean m and k. The probability mass function is then
f(x;k,m) = \binom{k+x-1}{k-1}\left(\frac{k}{k+m}\right)^k\left(\frac{m}{k+m}\right)^x, x ∈ ℕ_0,
and we write 𝖭𝖡(k,m). Let F(·;k,m) denote the cumulative distribution function of the 𝖭𝖡(k,m) distribution. The first moment distribution of the 𝖭𝖡(k,m) distribution, F^(1)(·;k,m), is
F^(1)(x;k,m) = ∑_y≤ x y f(y;k,m)/m.
For any non-negative integer x
x f(x)/m = \frac{x}{m}\binom{k+x-1}{k-1}\left(\frac{k}{k+m}\right)^k\left(\frac{m}{k+m}\right)^x
= \frac{x}{m}\,\frac{(k+x-1)!}{(k-1)!\,x!}\left(\frac{k}{k+m}\right)^k\left(\frac{m}{k+m}\right)^x
= \frac{(k+x-1)!}{k!\,(x-1)!}\left(\frac{k}{k+m}\right)^{k+1}\left(\frac{m}{k+m}\right)^{x-1}
= \frac{(k+x-1)!}{k!\,(x-1)!}\left(\frac{k(1+1/k)}{(k+m)(1+1/k)}\right)^{k+1}\left(\frac{m(1+1/k)}{(k+m)(1+1/k)}\right)^{x-1}
= \binom{(k+1)+(x-1)-1}{(k+1)-1}\left(\frac{k+1}{(k+1)+(m+m/k)}\right)^{k+1}\left(\frac{m+m/k}{(k+1)+(m+m/k)}\right)^{x-1},
which is the probability mass function of the 𝖭𝖡(k+1,m + m/k) distribution evaluated at x-1. Hence,
F^(1)(x;k,m) = F(x-1;k+1,m+m/k).
<cit.> states that the Hoover index can be expressed as
H = F(m;k,m) - F^(1)(m;k,m).
Hence,
H= F(m;k,m) - F(m-1;k+1, m+m/k).
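The identity used above is easy to verify numerically; the following sketch (again assuming scipy's nbinom(n, p) parameterisation with n = k and p = k/(k+m)) compares the first moment distribution with the shifted cumulative distribution function at a few arbitrary points.

import numpy as np
from scipy.stats import nbinom

def F(x, k, m):
    return nbinom(k, k / (k + m)).cdf(x)

def first_moment_cdf(x, k, m):
    y = np.arange(0, int(np.floor(x)) + 1)
    return float(np.sum(y * nbinom(k, k / (k + m)).pmf(y)) / m)

k, m = 0.7, 3.2
for x in [0, 1, 2, 5, 10]:
    assert np.isclose(first_moment_cdf(x, k, m), F(x - 1, k + 1, m + m / k))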
§ PROOF OF THEOREM <REF>
We first recall the definition of convex order, which is closely related to the Lorenz order <cit.>.
Definition: Let X and Y be random variables such that 𝔼 ϕ(X) ≤𝔼 ϕ(Y) for all convex functions ϕ : ℝ→ℝ for which the expectations exist. Then X is said to be smaller than Y in the convex order, denoted X ≤_cx Y.
The convex order relates to the Lorenz order in the sense that
X/𝔼 X≤_cxY/𝔼 Y
if and only if X ≤_L Y, provided the expectations exist <cit.> or <cit.>.
For part (a), let X_2∼𝖭𝖡(k,m_2). Conditional on X_2, let X_1∼𝖡𝗂𝗇𝗈𝗆𝗂𝖺𝗅(X_2,m_1/m_2). Then X_1∼𝖭𝖡(k, m_1). As 𝔼 (X_1|X_2) = (m_1/m_2) X_2 and 𝔼X_1 = (m_1/m_2) 𝔼X_2, <cit.> implies (m_1/m_2) X_2≤_L X_1. Since the Lorenz order is invariant under a change of scale, 𝖭𝖡(k,m_2) ≤_L𝖭𝖡(k,m_1).
For part (b), standard conditioning arguments show that if (N_t, t≥ 0) is a standard Poisson process and Λ∼𝖦𝖺𝗆𝗆𝖺(α,β) (a Gamma distribution with shape parameter α and rate parameter β), then N_Λ∼𝖭𝖡(α, α/β). Let T_i∼𝖦𝖺𝗆𝗆𝖺(k_i, k_i/m).
It is known that for every convex function ϕ, 𝔼ϕ(N_t) is convex in t <cit.>. If we can show that T_2≤_cx T_1, then the result will follow from <cit.>.
By construction 𝔼 T_1 = 𝔼 T_2. Let g_i be the probability density function of the 𝖦𝖺𝗆𝗆𝖺(k_i, k_i/m) distribution. Then T_2≤_cx T_1 if g_1 - g_2 exhibits exactly two sign changes, with sign sequence +, -, + <cit.>. As the log function is increasing, log g_2 - log g_1 has the same sign changes as g_2 - g_1. Then
log g_2(x) - log g_1(x)
= (k_2-1)log x - (k_2/m) x - ((k_1 -1) log x - (k_1/m) x ) + C
= (k_2 - k_1) log x - ((k_2 - k_1)/m) x + C,
where C depends on k_1, k_2 and m but not on x. Since g_1 and g_2 both integrate to one and are not identical, g_2 - g_1 must change sign at least once. For k_2 > k_1 the function above is concave and tends to -∞ as x → 0 and as x →∞, so it is negative for small and large x and can be positive only on an interval in between. Hence g_2 - g_1 has the sign sequence -, +, -, that is, g_1 - g_2 has the sign sequence +, -, +, and therefore T_2≤_cx T_1. This completes the proof.
§ REFERENCES
[Adell and De La Cal(1994)]AdlC:94
J.A. Adell and J. De La Cal.
Approximating gamma distributions by normalized negative binomial distributions.
Journal of Applied Probability, 31: 391–400, 1994.
[Arnold and Sarabia(2018)]Arnold:87
B.C. Arnold and J.M. Sarabia.
Majorization and the Lorenz order with applications in applied mathematics and economics.
Springer, New York, 2018.
[Bezerra and Bocchiglieri(2023)]BB:2023
R.H.S. Bezerra and A. Bocchiglieri.
Ectoparasitic flies of bats (Mammalia: Chiroptera) in urban green areas of northeastern Brazil.
Parasitol. Res., 122: 117–126, 2023.
[Crofton(1971)]Crofton:71
H.D. Crofton.
A quantitative approach to parasitism.
Parasitology, 62: 179–193, 1971.
[Gastwirth(1971)]Gastwirth:1971
J.L. Gastwirth.
A general definition of the Lorenz curve.
Econometrica, 39: 1037–1039, 1971.
[Gini(1914)]Gini:1914
C. Gini.
Sulla misura della concentrazione e della variabilità dei caratteri.
Atti del R. Istituto Veneto di Scienze, Lettere ed Arti, 73: 1203–1248, 1914.
[Hankin(2015)]Hankin:2015
R.K.S. Hankin.
Numerical evaluation of the Gauss hypergeometric function with the hypergeo package.
The R Journal, 7: 81–88, 2015.
[Kura et al(2022)]KTCPGA:2022
K. Kura, J.E. Truscott, B.S. Collyer, A. Phillips, A. Garba and R.M. Anderson.
The observed relationship between the degree of parasite aggregation and the prevalence of infection within human host populations for soil-transmitted helminth and schistosome infections.
Trans. R. Soc. Trop. Med. Hyg., 116: 1226–1229, 2022.
[Lester and Blomberg(2021)]LB:2021
R.J.G. Lester and S.P. Blomberg.
Three methods to measure parasite aggregation using examples from Australian fish parasites.
Methods Ecol. Evol., 12: 1999–2007, 2021.
[Lloyd(1967)]Lloyd:67
M. Lloyd.
Mean crowding.
J. Anim. Ecol., 36: 1–30, 1967.
[Lorenz(1905)]Lorenz:1905
M.O. Lorenz.
Methods of measuring the concentration of wealth.
Publications of the American Statistical Association, 9: 209–219, 1905.
[Matos et al(2023)]Matos:2023
I. Matos, D. Silva, J. Oliveira, C. Gonçalves, R. Alves, N. Pereira, F. Catarino, O.M.C.C. Ameixa, J.A. Sousa, L.F. Rangel, M.J. Santos and C. Ayra-Pardo.
Body size-dependent effects on the distribution patterns of phoretic mite species assemblages on Rhynchophorus ferrugineus (Olivier, 1790).
Ecology and Evolution, 13: e10338, 2023.
[McVinish and Lester(2020)]ML:2020
R. McVinish and R.J.G. Lester.
Measuring aggregation in parasite populations.
J. R. Soc. Interface, 17: 20190886, 2020.
[Morato-Moreno(2017)]MM:2017
M. Morato-Moreno.
Origins of the two-dimensional relief representation on some Spanish American maps in the sixteenth century.
Boletín de la Asociación de Geógrafos Españoles, 73: 493–499, 2017.
[Morrill et al(2023)]MPF:2023
A. Morrill, R. Poulin and M.R. Forbes.
Interrelationships and properties of parasite aggregation measures: a user's guide.
Int. J. Parasitol., 53: 763–776, 2023.
[Pennycuick(1971)]Pennycuick:71
P. Pennycuick.
Frequency distributions of parasites in a population of three-spined sticklebacks, Gasterosteus aculeatus L., with particular reference to the negative binomial distribution.
Parasitology, 63: 389–406, 1971.
[Pielou(1977)]Pielou:77
E.C. Pielou.
The measurement of aggregation. In: Mathematical Ecology.
Wiley Interscience, New York, 1977.
[Poulin(1993)]Poulin:93
R. Poulin.
The disparity between observed and uniform distributions: A new look at parasite aggregation.
Int. J. Parasitol., 23: 937–944, 1993.
[Poulin(2011)]Poulin:2011
R. Poulin.
Evolutionary Ecology of Parasites (Second Edition).
Princeton University Press, 2011.
[R Core Team(2023)]RCT:2023
R Core Team.
R: A language and environment for statistical computing.
R Foundation for Statistical Computing, Vienna, Austria, 2023.
[Ramasubban(1958)]Ramasubban:58
T.A. Ramasubban.
The mean difference and the mean deviation of some discontinuous distributions.
Biometrika, 45: 549–556, 1958.
[Rodríguez-Hernández et al(2021)]Rod:2021
K. Rodríguez-Hernández, P. Álvarez-Mendizábal, P. Chapa-Vargas, F. Escobar, F. González-García and D. Santiago-Alarcon.
Haemosporidian prevalence, parasitaemia and aggregation in relation to avian assemblage life history traits at different elevations.
Int. J. Parasitol., 51: 365–378, 2021.
[Schweder(1982)]Schweder:1982
T. Schweder.
On the dispersion of mixtures.
Scandinavian Journal of Statistics, 9: 165–169, 1982.
[Scott(1987)]Scott:87
M.E. Scott.
Temporal changes in aggregation: a laboratory study.
Parasitology, 94: 583–595, 1987.
[Shaked and Shanthikumar(2007)]SS:07
M. Shaked and J.G. Shanthikumar.
Stochastic orders.
Springer, New York, 2007.
[Shaw and Dobson(1995)]SD:95
D.J. Shaw and A.P. Dobson.
Patterns of macroparasite abundance and aggregation in wildlife populations: A quantitative review.
Parasitology, 111: S111–S133, 1995.
[Shaw et al(1998)]SGD:98
D.J. Shaw, B.T. Grenfell and A.P. Dobson.
Patterns of macroparasite aggregation in wildlife host populations.
Parasitology, 117: 597–610, 1998.
[Taguchi(1968)]Taguchi:68
T. Taguchi.
Concentration-curve methods and structures of skew populations.
Annals of the Institute of Statistical Mathematics, 20: 107–141, 1968.
[Tinsley et al(2020)]Tinsley:2020
R.C. Tinsley, H.R. Vineer, R. Grainger-Wood and E.R. Morgan.
Heterogeneity in helminth infections: factors influencing aggregation in a simple host-parasite system.
Parasitology, 147: 65–77, 2020.
[Wade et al(2018)]WFL:2018
M.J. Wade, C.L. Fitzpatrick and C.M. Lively.
50-year anniversary of Lloyd's "mean crowding": Ideas on patchy distributions.
J. Anim. Ecol., 87: 1221–1226, 2018.
[Wickham(2016)]Wickham:2016
H. Wickham.
ggplot2: Elegant Graphics for Data Analysis.
Springer-Verlag, New York, 2016.
Multimodal Laryngoscopic Video Analysis for Assisted Diagnosis of Vocal Cord Paralysis
Yucong Zhang, Xin Zou, Jinshan Yang, Wenjun Chen, Faya Liang^∗ and Ming Li^∗
(The first two authors contributed equally to this work.)(Corresponding authors: Faya Liang, Ming Li.)
Ming li and Yucong Zhang are with the department of computer science, Wuhan University, Wuhan 430072, China and the Suzhou Municipal Key Laboratory of Multimodal Intelligent Systems, Duke Kunshan University, Suzhou 215316, China (e-mail: [email protected], [email protected])
Faya Liang, Xin Zou, Jinshan Yang and Wenjun Chen are with the Sun Yat-sen Memorial Hospital of Sun Yat-sen University, Guangzhou 510000, China (e-mail: [email protected], [email protected], [email protected], [email protected])
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
This paper presents the Multimodal Analyzing System for Laryngoscope (MASL), a system that combines audio and video data to automatically extract key segments and metrics from laryngeal videostroboscopic videos for clinical assessment. MASL integrates glottis detection with keyword spotting to analyze patient vocalizations and refine video highlights for better inspection of vocal cord movements. The system includes a strobing video extraction module that identifies frames by analyzing hue, saturation, and value fluctuations. MASL also provides effective metrics for vocal cord paralysis detection, employing a two-stage glottis segmentation process using U-Net followed by diffusion-based refinement to reduce false positives. Instead of glottal area waveforms, MASL estimates anterior glottic angle waveforms (AGAW) from glottis masks, evaluating both left and right vocal cords to detect unilateral vocal cord paralysis (UVP). By comparing AGAW variances, MASL distinguishes between left and right paralysis. Ablation studies and experiments on public and real-world datasets validate MASL's segmentation module and demonstrate its ability to provide reliable metrics for UVP diagnosis.
Glottal Midline Estimation, Glottis Segmentation, Keyword Spotting, Laryngoscope, Vocal Cord Paralysis
§ INTRODUCTION
Vocal cord paralysis (VP) is a condition in which one of the vocal cords fails to move properly, leading to voice changes, difficulty swallowing, and potential breathing problems <cit.>, <cit.>. VP can result from nerve damage due to surgery, injury, infection, or tumors, significantly impacting a patient's quality of life <cit.>. Accurate diagnosis of VP is crucial, as it informs the appropriate medical or surgical intervention, which can restore vocal function, improve airway protection, and enhance overall patient outcomes. Clinicians often use laryngeal videostroboscopy to examine vocal cord vibration in detail. Laryngeal videostroboscopy is a specialized diagnostic tool used to evaluate the function of the vocal cords and the larynx. The stroboscopic component is key because it allows for the visualization of vocal cord vibrations, which are often too rapid to be observed directly <cit.>. Stroboscopy uses a light source that flashes at a slightly different frequency than the vocal cord vibration. This creates a slow-motion effect, enabling the clinician to see the individual phases of vocal cord vibration in great detail.
With the advent of artificial intelligence, deep learning methods have been developed to extract useful parameters <cit.> to assist clinicians, track the motion of the vocal cords <cit.>, and even make predictions <cit.> or classifications <cit.> that help clinicians reach a diagnosis. However, diagnosing VP requires inspecting complete phonation cycles of the patient. Previous works default to using video frames or images that contain the vocal cords, but much of the time the raw recordings from an endoscopic examination carry content that is irrelevant to the patient's phonation cycles. For instance, there are no phonation cycles at the beginning of the examination, when the laryngoscope is not yet in position and is still searching for the vocal cords.
Although experienced experts can provide valuable insights by analyzing the video captured by the endoscope, this methodology relies heavily on subjective judgement and lacks objectivity, which reduces patient confidence and increases the risk of misdiagnosis. Thus, analysis tools <cit.> have been developed to assist experts in making better diagnoses from the endoscopic video. <cit.> relies on physical features such as color and shape to segment the glottal area. <cit.> uses a supervised method to track the movement of the vocal cords and extract the anterior glottic angle (AGA). However, those three tools <cit.> focus solely on a single modality and cannot be used to detect left or right vocal cord paralysis (LVP or RVP). Researchers in <cit.> extract more than one feature from the laryngoscope, but those features all come from the AGA between the two vocal cords, which cannot be used to distinguish LVP from RVP. Moreover, current tools are restricted to pre-processed laryngoscopic videos or short video segments without stroboscopic examinations, and cannot be applied directly to raw, long laryngoscopic videos. As a result, manual work is still needed beforehand, which makes these tools inconvenient for experts to use.
In addition to video processing, audio-based methods have also shown great potential for detecting voice pathology <cit.>. Such methods typically involve converting audio clips to spectrograms using the short-time Fourier transform (STFT), which are then fed into various models for downstream tasks. For instance, deep neural networks (DNNs) are utilized to predict vocal cord pathology <cit.>. Low et al. <cit.> build a machine learning model to detect anomalous patterns in the spectrogram. Compton et al. <cit.> use an artificial neural network (ANN) to predict vocal cord pathology, claiming to outperform expert assessments. However, these approaches focus solely on the patient's audio data, neglecting the crucial visual information, which contains more explainable and comprehensive insights into vocal cord conditions.
In this article, we present the Multimodal Analyzing System for Laryngoscope (MASL), a novel system that leverages both the audio and video modalities to automatically process laryngeal endoscopic videos. The main contributions of our work are listed as follows.
* Our proposed system employs a multimodal approach, integrating glottis detection with a keyword spotting (KWS) technique, to automatically identify and extract multiple video segments that each include at least one complete phonation cycle of the patient. In addition, we design a strobing video extraction module that analyzes the hue, saturation and value (HSV) of the video frames.
* We propose a robust two-stage segmentation process: an initial U-Net model followed by a diffusion-based refinement that reduces false alarms and improve the segmentation precision.
* We introduce a quadratic fitting method to obtain better estimates of the glottal midline and of the AGAWs for the left and right vocal cords. To the best of our knowledge, we are the first to extract AGAWs for each vocal cord separately. The experimental results show great potential for detecting both LVP and RVP.
The organization of this paper is as follows: Section <ref> reviews some related works about U-Net segmentation and AGAW analysis. Section <ref> details our proposed method for key frames extraction, including KWS audio processing, glottis detection and HSV analysis. Section <ref> and Section <ref> focus on the analysis module. Section <ref> presents the two-stage glottis segmentation technique, including the initial U-Net segmentation and the diffusion refinement processes. Section <ref> describes our method to analyze VP, including AGA and glottal midline estimation by quadratic fitting and multimodal VP prediction using audio and AGAWs. Finally, section <ref> presents experimental results, and Section <ref> concludes the paper with a discussion of future work and potential applications.
§ RELATED WORK
§.§ U-Net Based Glottis Segmentation
U-Net's encoder-decoder structure with skip connections has proved to be a robust framework for biomedical image segmentation <cit.>. Modifications to this architecture, such as the introduction of separable convolutions in S3AR U-Net <cit.> and the incorporation of attention mechanisms in Attention U-Net <cit.>, have demonstrated improved performance by capturing more salient features and reducing computational demands. The pursuit of efficiency and clinical applicability has led to the development of models like Efficient U-Net <cit.>, which offers a practical inference time while maintaining high segmentation quality. Yet, the trade-off between segmentation accuracy and computational cost is a recurring theme, with models like VGG19 U-Net <cit.> showing longer inference times. Despite the computational efficiency of U-Net variants, the need for substantial training data and the potential for overfitting remain concerns <cit.>. To counter this, weakly supervised learning approaches, like the one proposed by <cit.>, have gained traction, requiring only point annotations and demonstrating a remarkable balance between segmentation accuracy and convergence speed. However, these methods may still struggle with complex anatomical structures and the need for precise boundary localization.
While the field has made significant strides, challenges persist in achieving a harmonious balance between segmentation accuracy, and generalizability across varied clinical datasets.
The limited size of the glottis segmentation mask often leads to a high incidence of false positives when using U-Net, resulting in segmentation outputs even in the absence of glottal regions. This can significantly impair the accuracy of subsequent analyses. To address this issue, we incorporate a diffusion-based refinement stage, which enhances the precision of the initial U-Net segmentation.
§.§ AGAW Extraction and Glottal Midline Estimation
In the early years, researchers found that by analyzing the maximum separation between the vocal cords, one can effectively detect UVP <cit.>. Later on, researchers began to use the AGA to represent the separation between the vocal cords, and AGAW analysis began to play a critical role in VP detection. However, existing methods typically assess AGAWs without differentiating between the left and right vocal cords <cit.>. To tackle this, our proposed approach introduces a technique that extracts AGAWs for each vocal cord separately. By contrasting the AGAWs of the left vocal cord with those of the right, we show that LVP and RVP are distinguishable.
However, determining the left or right AGAW requires identifying the glottal midline, since the AGAW for each vocal cord reflects its distance to the glottal midline. Previous methods rely on extensive labeled data <cit.> for supervised learning, which is labor-intensive. Our method can automatically estimate the midline of the glottic area without using any annotated data.
§ MULTIMODAL KEY FRAMES EXTRACTION
§.§ System Design
Our system aims to facilitate efficient clinical examinations by extracting key segments from laryngoscopic videos and providing objective indicators for specific laryngeal diseases. As Fig. <ref> shows, the system comprises two main modules, a voice module and a video module, which together ensure accurate observation of vocalization cycles and clear visualization of the glottal area.
The voice module initially processes the audio extracted from the video using short-time Fourier transform to obtain spectrograms. Through keyword spotting (KWS) technique, each frame is analyzed to detect patient vocalizations. This enables the preliminary segmentation of vocalization segments within the video.
Subsequently, the video module refines these vocalization segments to obtain key frames to form laryngoscopic highlights, ensuring the visibility of the vocal cords. Specifically, by utilizing the glottis detection model, MASL can identify regions containing the vocal cords and glottis in each frame. Moreover, given the importance of the stroboscopic portions in laryngoscopic videos for subjective analysis by physicians, we also include a stroboscopic video extraction method into our system.
§.§ Audio Processing Module
To better examine the status of the vocal cords, clinicians often ask patients to pronounce "ee". This sound is easy to produce and strongly excites vocal cord vibration. Hence, by capturing the segments in which the patient pronounces "ee", complete phonation cycles can be captured. To accomplish this, we developed a KWS model; KWS models are normally used to detect keywords in a sentence, such as "Hey Siri" or "Ok Google", and in our case the keyword is the sound "ee".
The overall pipeline is illustrated in Fig. <ref>. Initially, the input audio is transformed into a spectrogram using the Short-Time Fourier Transform (STFT), converting the time-domain signal into a time-frequency representation for more effective analysis. The spectrogram is then segmented into chunks along the time axis, with each chunk containing a fixed number of frames. During the training phase, these chunks are randomly selected from each audio clip to ensure a diverse set of training samples. This method enhances the model's generalization capabilities by incorporating different spectrogram chunks from various audio clips into a robust batch of training data.
The spectrogram chunks are subsequently fed into the KWS model, the architecture of which is detailed in TABLE <ref>. The model comprises multiple convolutional blocks and residual blocks <cit.>, designed to efficiently extract relevant features from the spectrogram chunks. Within the model, we use max pooling to progressively reduce the spatial dimensions. In this way, the features are compressed and become more meaningful. In the end, we use an adaptive average pooling layer to aggregate the features before they pass through two fully connected layers, with the final layer producing the classification score output.
During inference, the input spectrogram is sliced into chunks using a sliding window. Each chunk is processed by the trained KWS model to generate decision results, as depicted in Fig. <ref>. This carefully constructed pipeline ensures that the KWS model reliably detects the "ee" vocalization, providing critical prior knowledge for subsequent analysis.
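The sliding-window inference can be sketched as follows. The exact layer configuration of the KWS model follows the table referenced above and is not reproduced here; KWSNet below is a stand-in classifier with the same overall structure (convolutional blocks, pooling and a fully connected output), and the chunk and hop sizes are illustrative assumptions rather than the values used in our experiments.

import torch
import torch.nn as nn

class KWSNet(nn.Module):
    """Stand-in KWS classifier over log-Mel spectrogram chunks."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                       # x: (B, 1, n_mels, chunk)
        return self.classifier(self.features(x).flatten(1))

@torch.no_grad()
def sliding_window_scores(model, spec, chunk=64, hop=16):
    """spec: (n_mels, T) spectrogram -> list of (start_frame, 'ee' posterior)."""
    model.eval()
    scores = []
    for start in range(0, spec.shape[1] - chunk + 1, hop):
        x = spec[:, start:start + chunk].unsqueeze(0).unsqueeze(0)
        p = torch.softmax(model(x), dim=-1)[0, 1]
        scores.append((start, p.item()))
    return scores

spec = torch.randn(80, 400)                     # fake spectrogram for a smoke test
print(sliding_window_scores(KWSNet(), spec)[:3])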
§.§ Video Processing Module
Although the KWS model is employed to detect the video frames in which "ee" is vocalized, these detected frames may not always capture the vocal cords and glottis. To address this issue, we train a vocal cord detection model based on the well-known YOLO-v5 object detection architecture <cit.> to further refine the time masks.
Given the limitations of open-source data for training a detection model, we utilize the public glottis segmentation dataset, BAGLS <cit.>, to construct our training dataset. To generate labels for vocal cord detection, we designed an automatic bounding box generator. As illustrated in Fig. <ref>, the process begins by obtaining the coordinates of the top, bottom, left, and right vertices from the glottis mask. These coordinates are denoted as U(x_1, y_1), D(x_2, y_2), L(x_3, y_3), and R(x_4, y_4), respectively. Next, these vertices are expanded outward by a fixed number of pixels to ensure adequate coverage around the vocal cords. Using the expanded points, a bounding box is computed, as shown on the right side of Fig. <ref>.
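A minimal sketch of this bounding-box generator is given below; the padding margin and the conversion to YOLO-style normalized labels are illustrative assumptions.

import numpy as np

def mask_to_bbox(mask, margin=20):
    """mask: (H, W) binary glottis mask -> (x_min, y_min, x_max, y_max), or None."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                              # no glottis visible in this frame
    h, w = mask.shape
    return (max(int(xs.min()) - margin, 0), max(int(ys.min()) - margin, 0),
            min(int(xs.max()) + margin, w - 1), min(int(ys.max()) + margin, h - 1))

def bbox_to_yolo(bbox, img_w, img_h, cls=0):
    """Convert to (class, x_center, y_center, width, height), all normalized."""
    x_min, y_min, x_max, y_max = bbox
    return (cls,
            (x_min + x_max) / 2.0 / img_w, (y_min + y_max) / 2.0 / img_h,
            (x_max - x_min) / img_w, (y_max - y_min) / img_h)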
As depicted in Fig. <ref>, our pipeline integrates a strobing video extraction module alongside the detection modules. This module is utilized to isolate the strobing segments of laryngeal videos by analyzing hue, saturation, and value (HSV) of video frames. The HSV parameters represent type, intensity, and brightness of colors, respectively. Transitions marked by empty frames typically occur at the beginning and end of strobing segments. We employ a unit step function to identify and mark all empty frames with a zero value in color (illustrated by the yellow line in Fig. <ref>). This technique segments the video into several continuous non-empty frame sequences. Within these non-empty segments, we examine the variations in HSV values. Due to the rapid color changes characteristic of strobing videos (as shown in Fig. <ref>), we are able to identify the strobing segments by calculating the frequency of HSV fluctuations within each non-empty segment, selecting the segment with the highest number of fluctuations.
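A simplified version of this strobing-segment selection is sketched below. It assumes that a per-frame mean HSV triple has already been computed (for example, with OpenCV's BGR-to-HSV conversion followed by a spatial mean), and the emptiness threshold is an illustrative assumption.

import numpy as np

def strobing_segment(hsv_means, empty_thresh=5.0):
    """hsv_means: (T, 3) per-frame mean (H, S, V). Returns the (start, end) frame
    range of the non-empty run with the most HSV fluctuations, or None."""
    hsv_means = np.asarray(hsv_means, dtype=float)
    non_empty = hsv_means[:, 2] > empty_thresh        # unit-step mask on brightness
    best, best_score, t = None, -1, 0
    while t < len(non_empty):
        if not non_empty[t]:
            t += 1
            continue
        start = t
        while t < len(non_empty) and non_empty[t]:
            t += 1                                    # [start, t) is a non-empty run
        diff = np.diff(hsv_means[start:t], axis=0)
        # count sign changes of frame-to-frame HSV differences as "fluctuations"
        score = int(np.sum(np.abs(np.diff(np.sign(diff), axis=0)) > 0))
        if score > best_score:
            best, best_score = (start, t), score
    return best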
§ GLOTTIS SEGMENTATION
In the evaluation of laryngeal function and pathology, accurate segmentation of the glottis is essential. This segmentation enables the extraction of objective metrics that clinicians can utilize for diagnosis and review. In our system, we
implement a two-stage approach comprising a U-Net-based method followed by a diffusion-based refinement. The initial segmentation is performed using a naive U-Net model, which is simple yet effective in medical image segmentation tasks. This model provides a robust initial estimate of the glottis boundaries.
However, the U-Net model tends to produce false positives, especially in cases where the glottal area is not visible (see Table <ref>). These incorrect segmentations can complicate subsequent analysis stages. To address this issue and enhance the precision of the segmentation results, we further refine the U-Net outputs using a diffusion model. This additional step helps to correct any inaccuracies and produce a more accurate and reliable glottis mask. The combination of these two methods ensures high-quality segmentation, which is crucial for subsequent analysis and metric computation.
The following subsections provide detailed descriptions of each component in our glottis segmentation pipeline, including the U-Net-based method and the diffusion-based refinement.
§.§ U-Net-based Method
The U-Net model is a convolutional neural network specifically designed for biomedical image segmentation. Owing to its proven effectiveness and accuracy, we employ a U-Net as our segmentation model. As detailed in Table <ref>, the model comprises a series of convolutional blocks, referred to as ConvBlocks (see Table <ref>), which include convolution operations followed by batch normalization and ReLU activation functions. The U-Net architecture is symmetrical, featuring a contracting path that captures contextual information and an expansive path that facilitates precise localization through upsampling operations. The contracting path consists of ConvBlocks with progressively increasing numbers of channels (64, 128, 256, and 512), interspersed with MaxPool2D layers for downsampling. This is followed by a sequence of upsampling operations and ConvBlocks that reduce the feature map dimensions, effectively reconstructing the image to its original resolution. The final layer utilizes a ConvBlock with a single output channel to generate the segmentation mask.
There are several advantages to using a U-Net model with this structure. First, the architecture captures both low-level and high-level features through its deep structure and skip connections, which concatenate features from the contracting path to the corresponding layers in the expansive path. This facilitates precise segmentation by preserving spatial information. Second, the use of ConvBlocks with batch normalization stabilizes and accelerates the training process, while ReLU activation functions introduce non-linearity, enabling the model to learn complex patterns. Additionally, the upsampling layers allow for finer segmentation outputs by gradually reconstructing the image details.
§.§ Diffusion-based Refinement
The U-Net model excels at extracting image masks. However, the generated masks tend to yield false alarms (see Table <ref>). To further enhance mask quality, we explore the integration of a diffusion-based model. By incorporating the diffusion model, we can inject more diversity into the masks extracted by U-Net and refine them, which helps suppress the false alarms described above.
Diffusion models consist of two stages: forward diffusion and reverse diffusion. During the forward process, Gaussian noise is gradually added to the segmentation label x_0 over a series of steps T. Conversely, in the reverse process, a neural network is trained to recover the original data by reversing the noise addition, as represented by the following equation:
p_θ(x_0: T-1| x_T)=∏_t=1^T p_θ(x_t-1| x_t),
where θ stands for the parameters for the reverse process.
Consistent with the conventional implementation of the Diffusion Probabilistic Model (DPM) <cit.>, a U-Net model is employed for training. Following the idea of MedSegDiff <cit.>, we incorporate the original glottis images as priors for the step estimation function and employ dynamic conditional encoding to fuse the encoding outcomes from both the raw image and the segmentation mask at each step. Hence, for each step, the estimation function ϵ_θ is written as:
ϵ_θ(x_t, I, t)=D((E_t^I+E_t^x, t), t),
where θ stands for the learning parameters, D is the decoder, I is the raw image prior, t is the current diffusion step. E_t^I and E_t^x are the embeddings encoded from the raw image and segmentation mask at step t respectively.
Different from the traditional training procedure, we do not start the diffusion process from a standard Gaussian noise, p_θ(x_T)=𝒩(x_T; 0, I_n× n) for an n× n image. Instead, by integrating the U-Net result introduced in Section <ref>, we start the diffusion process from a customized Gaussian noise, which is:
p_θ(x_T)=𝒩(x_T; μ^', I_n× n),
μ^' = (1 - (α·(1-m^')+(1-α)· m^'))× 10^-3,
where α∼U(0,0.3) is a random parameter sampled from a uniform distribution, m^' is the mask generated by the U-Net model in Section <ref>.
We determine the new mean for the Gaussian noise in the diffusion process by computing the weighted average between the glottis mask initially generated by U-Net and its complement (see Equation <ref>). This essentially guides the diffusion process to pay more attention to areas outside the glottal region, encouraging the model to refine the segmentation boundaries for improved accuracy.
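A minimal sketch of this prior-mean computation (the equations above) is:

import numpy as np

def diffusion_prior_mean(unet_mask, rng=None):
    """unet_mask: (H, W) mask m' from the first-stage U-Net -> mu' for x_T."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = rng.uniform(0.0, 0.3)
    mix = alpha * (1.0 - unet_mask) + (1.0 - alpha) * unet_mask
    return (1.0 - mix) * 1e-3

# x_T is then drawn as mu' plus standard Gaussian noise of the same shape, e.g.
# x_T = diffusion_prior_mean(m_prime) + np.random.standard_normal(m_prime.shape)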
§ MULTIMODAL VOCAL CORD PARALYSIS ANALYSIS
To detect VP, we adopt a two-stage process. First, we build a binary classification model to determine whether a case is VP. Then, using our proposed metrics, LVP and RVP can be identified by comparing the movement of the left and right vocal cords via the AGA.
§.§ Anterior Glottic Angle Extraction with Quadratic Fitting
To diagnose laryngeal paralysis in greater detail, instead of the whole glottal angle we extract the glottal angles for the left and right vocal cords separately. As a result, we can provide a contrastive diagnosis of laryngeal paralysis. The procedure for extracting glottal angles from a segmented glottis involves a systematic series of steps, detailed in Algorithm <ref>, with visual representations of the intermediate results provided in Fig. <ref>.
Initially, the algorithm acquires the coordinates of the top, bottom, left, and right vertices from the glottis mask, computing their center point denoted as U(x_1, y_1), D(x_2, y_2), L(x_3, y_3), R(x_4, y_4), and C(x_c, y_c) respectively (see Fig. <ref>(a)). Subsequently, a line connecting points C and D is established, and the function f(x) passing through C and D is computed.
Equidistant points C_1, C_2, ..., C_N-1 are then positioned along the line segment intercepted by the glottal mask. For each C_k, orthogonal functions f^'_k(x) to f(x) are calculated, with their intersection points L_k and R_k with the glottis mask determined for k∈ [1, N-1] (see Fig. <ref>(b)).
To ensure uniformity, the coordinate system is rotated by an angle γ, aligning all intersection points along the y-axis after rotation, denoted as L_γ,k and R_γ,k. Leveraging these points, a quadratic curve q_γ(x) is approximated in the rotated coordinate system. The lowest point D_q of q_γ(x) is identified and mapped back to the original coordinate system (see Fig. <ref>(c)).
Subsequently, a calibrated middle line f^*(x) connecting points C and D_q is established, intersecting the glottis mask at point D^'. Finally, glottal angles are derived by connecting points {L_k}_k∈ [1, N-1] and {R_k}_k∈ [1, N-1] with D^', yielding {∠ L_kD^' C}_k∈ [1, N-1] and {∠ CD^' R_k}_k∈ [1, N-1] respectively.
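A simplified numpy sketch of this procedure is given below. It samples the left and right mask boundaries row by row, fits a quadratic through the boundary points to locate the calibrated anterior point D', and measures the left and right angles against the midline through C. The coordinate rotation of Algorithm <ref> is omitted here (the mask is assumed to be roughly upright), so this is an illustrative approximation of the full method rather than a faithful implementation.

import numpy as np

def left_right_angles(mask, n_samples=10):
    """mask: (H, W) binary glottis mask -> (left_angles, right_angles) in radians."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    # extreme points U, D, L, R of the mask and their centre C
    extremes = np.array([[xs[ys.argmin()], ys.min()], [xs[ys.argmax()], ys.max()],
                         [xs.min(), ys[xs.argmin()]], [xs.max(), ys[xs.argmax()]]], float)
    C = extremes.mean(axis=0)

    rows = np.linspace(int(C[1]), int(ys.max()), n_samples + 2, dtype=int)[1:-1]
    left_pts, right_pts = [], []
    for r in rows:
        cols = np.nonzero(mask[r])[0]
        if len(cols) > 0:
            left_pts.append((float(cols.min()), float(r)))
            right_pts.append((float(cols.max()), float(r)))
    if len(left_pts) < 3:
        return None
    boundary = np.array(left_pts + right_pts)

    # quadratic fit y = a x^2 + b x + c; its vertex approximates the anterior point D'
    a, b, c = np.polyfit(boundary[:, 0], boundary[:, 1], 2)
    if abs(a) < 1e-9:
        return None
    d_prime = np.array([-b / (2.0 * a), c - b ** 2 / (4.0 * a)])

    def angle_at(vertex, p, q):
        v1, v2 = np.asarray(p) - vertex, np.asarray(q) - vertex
        cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

    left = np.array([angle_at(d_prime, p, C) for p in left_pts])
    right = np.array([angle_at(d_prime, C, p) for p in right_pts])
    return left, right

Per-frame AGAW values for each cord can then be obtained by summarising these angle samples (for example, by their mean) and tracking them over time.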
§.§ Multimodal Vocal Cord Paralysis Detection
We developed a model that integrates both audio and video modalities for enhanced diagnosis. As illustrated in Fig. <ref>, the model utilizes the audio spectrogram and the AGA movements as inputs. The audio spectrogram is encoded using EfficientNet-b0 <cit.>, a compact and efficient model renowned for its performance in image classification tasks. Given that multiple video highlights are extracted from a single laryngoscope video, each with its corresponding AGA movements, multiple AGA movement time series are generated. These time series are treated as multi-channel inputs to a ConvLSTM <cit.> model, comprising a convolutional layer followed by an LSTM layer. The ConvLSTM model is adept at handling multi-channel time series; the convolutional layer captures features across all channels, while the LSTM layer processes temporal information effectively.
To further differentiate between LVP and RVP, we analyze the variance of the AGA movements for the left and right vocal cords. Intuitively, the paralyzed side exhibits less activity during phonation, leading to a smoother AGA movement time sequence and consequently a lower variance. Therefore, by simply comparing the variance of the left and right AGA movements, we distinguish between LVP and RVP.
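A minimal sketch of this decision rule is:

import numpy as np

def classify_uvp_side(left_agaws, right_agaws):
    """left_agaws, right_agaws: lists of AGA time series (one per video highlight).
    The side with the lower average variance is taken as the paralysed cord."""
    var_left = np.mean([np.var(s) for s in left_agaws])
    var_right = np.mean([np.var(s) for s in right_agaws])
    return "LVP" if var_left < var_right else "RVP"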
§ EXPERIMENTS AND RESULTS
§.§ BAGLS Dataset
We use the Benchmark for Automatic Glottis Segmentation (BAGLS) <cit.> as the dataset for glottis segmentation. It consists of 59,250 endoscopic glottis images acquired from hundreds of individuals at seven hospitals, which is split into 55,750 training images and 3,500 test images.
§.§ SYSU Dataset
This dataset was collected in a real-world clinical setting by the Sun Yat-sen Memorial Hospital of Sun Yat-sen University (SYSU). It contains 520 video samples, including 106 normal samples and 414 paralysis samples (257 LVP and 157 RVP). All video clips are recorded using a laryngeal videostroboscope, and each clip includes one or two strobing video segments.
§.§ Keyword Spotting Model
The detailed architecture of the KWS model is presented in Table <ref>. For audio input, we utilize a Mel-spectrogram with 80 Mel filters, with the number of FFT points and hop length set to 1024 and 512, respectively. A sliding window of 400 samples and a hop length of 64 samples are employed to extract frame-level information. The model is trained for 100 epochs, optimized using the Adam optimizer with a learning rate of 0.005.
Since the KWS model outputs a posterior score rather than a direct decision value, a threshold is necessary to determine the final result. Table <ref> lists the thresholds applied in the system. The precision-recall (PR) curves and receiver operating characteristic (ROC) curves in Fig. <ref> illustrate the classification performance across different thresholds. Consequently, the threshold is selected based on the highest F1-score, a balanced metric that considers both precision and recall, providing a comprehensive measure of classification performance.
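One way to implement this selection is to sweep the posterior threshold along the precision-recall curve and keep the value that maximises F1, for example:

import numpy as np
from sklearn.metrics import precision_recall_curve

def best_f1_threshold(y_true, y_score):
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return thresholds[np.argmax(f1[:-1])]   # last PR point has no associated threshold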
Table <ref> displays the results for both doctor and patient. The detection performance for the doctor's voice is inferior to that for the patient's voice due to the doctor's broader range of spoken words compared to the patient's singular utterance of "ee". Our system prioritizes frames where the patient is speaking "ee", resulting in a high F1-score on the test dataset, which underscores the efficacy of our KWS model.
§.§ Glottis Segmentation
We train the U-Net baseline with the same strategy as <cit.>. The diffusion model is built on the model of <cit.>, except that we slightly modify the noise to incorporate the prior knowledge from the U-Net results. The number of diffusion steps is set to 1,000, and the diffusion model is trained for 100,000 iterations. The model is optimized by an AdamW optimizer with a learning rate of 0.0001.
We evaluate our segmentation model on the public glottis dataset BAGLS. We use intersection over union (IoU) as the metric, and the results are shown in Table <ref>. From the table, we can see that the diffusion model achieves slightly better IoU performance than the traditional U-Net. By refining the U-Net results with the diffusion ones, the IoU performance can be further improved.
Besides IoU, we also compute the false alarm rate (FAR) of the different segmentation methods. A false alarm (FA) is defined as the situation in which a glottis mask is generated when no glottis is visible. The FAR is calculated as the number of FAs divided by the total number of images in which no glottis is visible. We care about FAs because we need the segmentation masks for later analysis: FAs may produce wrong measurements and mislead the VP prediction results. From Table <ref>, the traditional U-Net model shows a tendency to produce FA masks, with the highest FAR of 15.8%. By contrast, the diffusion-based segmentation model has a much lower FAR of 4.5%, a reduction of more than a factor of three. This result may indicate that, for glottis segmentation, the diffusion model is more robust when dealing with images that contain no target. Hence, after refining the U-Net result with the diffusion model, we achieve a result with both a lower FAR and a better IoU.
§.§ Vocal Cord Paralysis Detection
We manually divided the SYSU dataset into training and testing subsets. The training set comprises 470 samples (81 normal, 389 paralysis), while the testing set includes 50 samples (25 normal, 25 paralysis). The split was performed randomly, ensuring an equal number of normal and paralysis samples in the test set to facilitate a balanced evaluation of our model's performance. We conduct the same experiment five times with five different random seeds, so that the train-test split differs across runs, resulting in performance variability. This approach allows for a comprehensive assessment of the model's overall performance, independent of the specific data subsets used in each iteration.
Ablation studies are conducted using different system configurations to show the effectiveness of our proposed system, as shown in Table <ref>. The naive system uses the images segmented by the U-Net baseline without quadratic fitting or diffusion refinement. Table <ref> summarizes the performance for vocal cord paralysis classification across various system configurations. Each configuration's precision, recall, and F1-score are reported for both normal and paralysis classes, as well as their weighted averages. In the naive settings, the system achieves a precision of 82.0% and recall of 69.0% for normal cases, and a precision of 73.2% and recall of 84.0% for paralysis, with a weighted average F1-score of 76.5%.
Steady and accurate midline prediction is critical for analyzing left and right vocal cord movement, since each AGA measurement relies on the distance between sample points on the vocal cord and the midline. However, the laryngeal camera is not always held in a fixed position in real-world applications. As a result, the traditional way of estimating the midline may introduce large variations in the midline position, producing less robust results. Calibrating the midline by fitting a quadratic function proves to be an effective way to enhance the midline prediction, resulting in better classification performance. The results in Table <ref> show that the proposed quadratic fitting method improves the performance by a large margin, raising the balanced precision and recall to approximately 81.5% for both classes.
After further incorporating the diffusion refinement into the glottis segmentation process, the classification results improve again, slightly increasing the weighted average F1-score to 82.5%. The diffusion refinement only involves the images in which the glottis cannot be detected by the YOLO-v5 detector (described in Section <ref>), since we want to use this method to reduce FAs (described in Section <ref>).
We also conduct experiments on the effect of multimodality. By removing the AGA movement metrics, the classification performance drops dramatically, with the weighted average F1-score falling to 72.5%. This result demonstrates the critical role of integrating both the audio and AGA movement modalities for robust performance.
We attribute the high standard deviation over all runs to the fact that we randomly split the dataset into training and test sets for each run of the experiments. The SYSMH dataset is a challenging dataset with various recording conditions, such as different types of laryngeal videostroboscopes and recording environments, causing domain shifts.
In summary, the comparison shown in Table <ref> underscores the efficacy of quadratic fitting and diffusion refinement in enhancing classification accuracy and also shows the critical role of integrating both audio and AGA movement modalities for better performance.
§.§ Potential of Unilateral Vocal Cord Paralysis Detection
Fig. <ref> illustrates the glottal area and the AGA movement time series of a patient with RVP. These metrics, extracted by MASL, are derived from a single video clip in the SYSU dataset. As three video highlights are extracted from the clip, three distinct time series for each of the two metrics are presented. Fig. <ref>(a), (c), and (e) depict the variations of glottal area across time, indicating the phonation cycle through the opening and closing movements of the vocal cords. However, while the glottal area data reveals the vocal cord activity, it does not specify the movement of each vocal cord individually. By computing the AGA movement time series for both vocal cords, as shown in Fig. <ref>(b), (d), and (f), MASL provides clinicians with more detailed and actionable metrics for inspecting UVP.
From Table <ref>, by comparing the variance of the left and right AGA metrics, we are able to distinguish between left and right paralysis for a UVP patient. We also conduct ablation studies to show the effectiveness of the different modules in our proposed system. Table <ref> illustrates that incorporating quadratic fitting improves the classification results: for both left and right vocal cord paralysis, as well as for the weighted average, all metrics improve by a large margin. Moreover, adding diffusion refinement further improves all metrics, reaching a weighted average precision, recall and F1-score of 92%.
These improvements indicate that the quadratic fitting technique and the diffusion refinement method enhance the system's ability to model vocal cord movement, providing clinicians with a more robust diagnostic tool.
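As an illustration of the variance comparison described above, a minimal decision rule could look as follows; the assumption that the paralysed cord is the one with the smaller AGA movement variance is ours and is made purely for the sake of the example.

import numpy as np

def predict_uvp_side(aga_left, aga_right):
    # Compare the movement variance of the two cords; the less mobile
    # (lower-variance) cord is assumed here to be the paralysed one.
    return "left" if np.var(aga_left) < np.var(aga_right) else "right"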
§ CONCLUSION
In this paper, we introduced the Multimodal Analyzing System for Laryngoscope (MASL), which integrates audio and video processing techniques to enhance the diagnosis of laryngeal disorders. MASL is able to produce high-quality laryngoscopic video highlights for better diagnosis by clinicians. It incorporates KWS to ensure that the patient's complete phonation cycle is captured, and video processing modules that ensure the presence of the vocal cords in the video. Moreover, MASL is able to extract the stroboscopic slices from the raw laryngoscopic video, saving clinicians the time of finding them manually.
To produce objective metrics for analysis, MASL employs a two-stage segmentation method with a U-Net followed by diffusion-based refinement to accurately delineate the glottal area, significantly reducing the false positives produced by the U-Net. With the segmented masks, MASL adopts quadratic fitting and extracts AGA movements for both the left and right vocal cords, producing a new modality for VP detection. With multi-modality, MASL improves the VP detection performance. Simply by comparing the variance between the AGA movements of the left and right vocal cords, MASL yields an F1-score of 92% on the real-world dataset. The experimental results demonstrate MASL's robustness and effectiveness, offering clinicians a powerful tool to improve diagnostic accuracy and efficiency.
§ REFERENCES
|
http://arxiv.org/abs/2409.02527v1 | 20240904083620 | A background-estimation technique for the detection of extended gamma-ray structures with IACTs | [
"Tina Wach",
"Alison Mitchell",
"Lars Mohrmann"
] | astro-ph.IM | [
"astro-ph.IM",
"astro-ph.HE"
] |
Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, 91058 Erlangen, Germany
[email protected]
Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany
Estimation of the amount of cosmic-ray induced background events is a challenging task for Imaging Atmospheric Cherenkov Telescopes (IACTs).
Most approaches rely on a model of the background signal derived from archival observations, which is then normalised to the region of interest (ROI) and respective observation conditions using emission-free regions in the observation.
This is, however, disadvantageous for the analysis of large, extended γ-ray structures, where no sufficient source free region can be found.
We aim to address this issue by estimating the normalisation of a 3-dimensional background model template from separate, matched observations of emission-free sky regions. As a result, an emission-free region in the field of view of the observation becomes unnecessary.
For this purpose, we implement an algorithm to identify observation pairs with as close as possible observation conditions. The open-source analysis package is utilized for estimating the background rate, facilitating seamless adaptation of the framework to many γ-ray detection facilities. Public data from the High Energy Stereoscopic System (H.E.S.S.) is employed to validate this methodology.
The analysis demonstrates that employing a background rate estimated through this run-matching approach yields results consistent with those obtained using the standard application of the background model template. Furthermore, the compatibility of the source parameters obtained through this approach with previous publications and an analysis employing the background model template approach is confirmed, along with an estimation of the statistical and systematic uncertainties introduced by this method.
A background-estimation technique for the detection of extended gamma-ray structures with IACTs
T. Wach 1 A. Mitchell 1 L. Mohrmann 2
September 9, 2024
===============================================================================================
§ INTRODUCTION
Imaging Atmospheric Cherenkov Telescopes (IACTs) have considerably expanded our knowledge of the very high energy γ-ray sky. With the advent of the upcoming Cherenkov Telescope Array <cit.>, IACTs will remain a crucial tool for γ-ray astronomy for the foreseeable future. Despite this, recent results from Water Cherenkov Detectors (WCDs), such as the High-Altitude Water Cherenkov Array <cit.> and Large High Altitude Air Shower Observatory <cit.>, have highlighted a limitation of IACTs. Whilst their superior angular resolution enables IACTs to distinguish between γ-ray sources situated in close proximity to one another and measure the extension of those sources with high precision, identifying large extended structures of γ-ray emission remains challenging.
This is primarily due to the effect the comparatively small field of view (FoV) of IACTs has on the detection of extended γ-ray sources <cit.>. While WCDs are survey instruments that continuously monitor the overhead sky, IACTs need to be pointed towards a target source and can only observe the sky in a region a few degrees across. This affects the estimation of background rates in the case of studying extended source regions that cover a large part of the FoV. The background consists of non-γ-ray induced extensive air showers, predominantly due to hadrons interacting with the Earth's atmosphere, albeit at lower energies cosmic-ray electrons also play a significant role. By implementing selection criteria on the reconstructed shower parameters <cit.>, this background can be significantly reduced, but not fully removed. The residual background rate is then often estimated from source-free regions in the FoV of the observation <cit.>. [Another approach, initially proposed by <cit.>, is to estimate the background rate from a separate set of events that are more cosmic ray-like than the selected gamma-ray candidates. This approach requires detailed knowledge about the acceptance of the system to these cosmic ray-like events, however, and will not be discussed further in this work.]
However, it is not always possible to find a sufficiently large source-free region, especially in the case of large, extended sources filling a significant fraction of the FoV, which can lead to the subtraction of significant emission in the whole FoV <cit.>. The choice of a region that is not free of gamma-ray emission can then lead to absorption of these structures in the background estimation.
To circumvent this problem, a spectromorphological background model template, constructed from archival data and enabling a three-dimensional (3D) likelihood analysis, can be used (for more information see <cit.>).
Due to the large number of observations acquired under similar observation conditions that are used for the creation of the background model template, this approach is very stable and suffers little from statistical fluctuations in the background estimate. Previous studies have shown that background estimation employing a background model template facilitates the detection of large, extended structures with IACTs <cit.>. However, the background model template approach still requires a re-normalisation for each observation run, to account for differences in atmospheric conditions or slight hardware degradation, again requiring at least a small emission-free region in the FoV.
This problem was highlighted in a recent study of emission around the Geminga pulsar (PSR J0633+1746) with the High Energy Stereoscopic System (H.E.S.S.) <cit.>. The study revealed that while it is possible to detect the extended emission around the pulsar, an absolute measurement of its properties was not possible due to the lack of emission-free regions in the FoV <cit.>. Instead, only a relative measurement was feasible, where the background level was estimated in regions of fainter γ-ray emission.
In regions where no emission-free region can be identified, a so-called “ON/OFF” approach can be used, whereby every observation of the targeted source (ON run) is matched to an observation conducted in a part of the sky that is predominantly source-free (OFF run). Such an analysis is, for example, described in <cit.>.
This approach is a standard technique for background rejection in IACTs <cit.>, which was already employed by the Whipple observatory <cit.>. Though this standard approach can be very powerful for the observation of extended sources, because no assumptions regarding the background acceptance across the FoV need to be made, it also has major disadvantages. In the classical approach, an empty sky region needs to be observed for a comparable duration and under a similar zenith angle (with an allowed zenith angle deviation between ON and OFF run of typically 30’) <cit.>. Additionally, the best description can be achieved if the OFF run is recorded shortly after the ON run, in order to avoid the influence of effects like degradation of the system or changes in the atmosphere. This means that at most half of the observation time can be spent on the actual target of the study. Additionally, a background estimate based on only one observation run is necessarily afflicted with relatively large statistical uncertainties.
In this work, we alleviate the disadvantages of both background estimation techniques by combining the classical ON/OFF method with the 3-dimensional background model template constructed from archival data. For this purpose, the normalisation of the background model template for each run pair is determined from the OFF run and this normalisation is then used for the ON run. Employing this background model template differs from the classical approach, since the background rate is not only estimated from one OFF run, but many. This enables us to decrease the dependence of the background rate on the particular OFF run selected and thus to lower the uncertainty of the background estimate.
The new method is developed using the data structure of the High Energy Stereoscopic System (H.E.S.S.), an array of five IACTs located in the Khomas Highland in Namibia <cit.>. The original telescope array consisted of four telescopes with 12m-diameter mirrors (CT1-4) and was commissioned between 2000 and 2003. In 2012 a fifth telescope (CT5), with a mirror diameter of 28m was added in the centre of the 120m square spanned by CT1-4, lowering the energy threshold of the array considerably <cit.>. The method was then validated using data from the first H.E.S.S. public data release <cit.>. Since this method only requires common python packages, as well as <cit.>, it is in accordance with the goal of the Very-high-energy Open Data Format Initiative (VODF; ), to achieve a common, software-independent data format and develop open source analysis software <cit.>. This also means that the method can easily be adapted for other telescope arrays.
Henceforth, we will refer to the template constructed in <cit.> as `background model template', the standard method employed by the Whipple Observatory as `classical ON/OFF method' <cit.>, and the method developed in this work as `run-matching approach'. With this publication, we will release the scripts necessary to perform the background matching, after a set of OFF runs has been selected, as well as the results for all spectral and spatial fits on the public-dataset release.
For this study, only data acquired by the four smaller telescopes of the H.E.S.S. array is analysed. For all observations used, the data reduction was performed using HAP, the H.E.S.S. analysis package described in , and reconstructed using the Image Pixel-wise fit for Atmospheric Cherenkov Telescope algorithm <cit.>.
§ RUN-MATCHING
For the classical ON/OFF method, an observation of an empty region with similar properties to that of the target region needs to be conducted. This requirement is necessary to achieve a comparable system acceptance (relative rate of events passing selection cuts). While the use of a background model template allows for relaxed run-matching criteria with respect to the classical ON-OFF matching, allocating a comparable OFF run is nevertheless a critical aspect of the background estimation.
Previous works employing the classical ON/OFF method have shown that a variety of parameters influence the background rate and need to be considered in the matching process <cit.>. In this work, an OFF run is only considered if its pointing position is at a galactic latitude of |b| ≥ 10^∘, in order to avoid regions including many known γ-ray sources, as well as diffuse γ-ray emission. Furthermore, we only consider observations taken under good atmospheric conditions, and a good system response, and require that the same telescopes of the array participate in both the ON and OFF run. This quality selection was performed following the recommendations in <cit.>.
§.§ Matching parameters
The parameters found to have the largest influence are the zenith angle (the angle between the pointing direction of the telescopes and zenith), changes in hardware configuration and atmospheric conditions. The level of night sky background (NSB) light is also used for matching, due to its influence on the performance of the telescopes <cit.>. Since small changes of the atmosphere and degradation of the system can be absorbed by the fit of the background model template (see section <ref>), the run pairs do not need to be acquired in the same night, but can be up to a few years apart. It is however important to match runs using the same hardware configuration. Therefore we take optical phases into account. These are periods of stable optical efficiency, between abrupt changes due to, for example, cleaning of the Winston cones (light guides attached to the camera pixels) or mirror re-coating. Typically these phases span at least one year for the H.E.S.S. telescopes, therefore enabling the choice of many possible OFF runs for one ON run. The optical phases used for this work can be found in Table <ref>.
For the construction of the background model template the zenith angles of the OFF runs were grouped into bins with a size dependent on the available statistics, because of the strong influence of the zenith angle on the background rate <cit.>. The zenith angle bins from <cit.> were adopted as a validity range of the zenith angle deviation for this work, to ensure compatibility with the background model template (see Table <ref>). In order to have a comparable background model template between ON and OFF run, the azimuth angle bins from <cit.> where also adopted for this study.
Another important matching parameter is the so-called muon efficiency ε_μ. This quantity specifies how many photo-electrons are detected per incident photon and is, therefore, a measure of the optical performance of the telescopes <cit.>. It can be estimated from muon events, since the geometry of a muon image can be used to calculate the expected intensity, which is then compared to the measured intensity.
To evaluate the differences in atmospheric conditions between two runs, two quantities were investigated. The first parameter is the Cherenkov transparency coefficient <cit.>. It describes the transparency of the atmosphere and can be calculated via:
τ = 1/(N · k_N) ∑_i t_i = 1/(N · k_N) ∑_i R_i^{1/(1.7-Δ)}/(μ_i · g_i)
with the number of participating telescopes N, the average amplification gain of the photosensors g_i, and the trigger rate R_i and muon efficiency μ_i of telescope i. The term Δ allows for higher-order corrections and k_N is a scaling factor <cit.>. The second quantity is the effective sky temperature in the FoV of the individual telescopes, measured with infrared radiometers <cit.>. In the following, this quantity will be referred to as the radiometer temperature.
Additionally, we require the ON and OFF runs to have a comparable dead-time-corrected observation time.
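A minimal sketch of the transparency-coefficient computation defined above is given below; the grouping of the exponent as 1/(1.7-Δ), as well as the default values of k_N and Δ, are assumptions made for illustration only.

import numpy as np

def transparency_coefficient(rates, muon_effs, gains, k_n=1.0, delta=0.0):
    # Per-telescope terms t_i = R_i**(1/(1.7 - delta)) / (mu_i * g_i),
    # summed over the N participating telescopes and normalised by N * k_N.
    rates, muon_effs, gains = map(np.asarray, (rates, muon_effs, gains))
    t_i = rates ** (1.0 / (1.7 - delta)) / (muon_effs * gains)
    return t_i.sum() / (len(t_i) * k_n)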
In this study, the influence of the respective matching parameters on the background estimate has been quantified by calculating the distance correlation <cit.> between the matching parameters and the number of background events estimated in an OFF region using the background model template. In contrast to the Pearson correlation coefficient, the distance correlation measures the dependence between two parameters irrespective of whether their relationship is linear.
Since the influence of the matching parameters can vary following major changes in the hardware configuration of the telescope array, the correlation coefficients have been calculated for each of the three hardware phases of the H.E.S.S. telescopes. HESS Phase 1 represents the data taken from the commissioning of the first four telescopes until the addition of the fifth telescope in 2012 <cit.>. The data taken with the five telescope array is called HESS Phase 2 <cit.>. The last set, called HESS Phase 1U, includes all data taken after the camera upgrade of the four small telescopes in 2017 <cit.>.
For a correlation coefficient of d_corr≥ 0.15, the influence of the matching parameter on the background rate was regarded as significant. A summary of all significant matching parameters, as well as allowed deviations and correlation coefficients for this work can be found in Table <ref>.
These correlation coefficients were estimated using the number of background counts over the whole energy range. We additionally computed correlation coefficients using only low-energy events and find no significant differences to the coefficients presented in Table <ref>.
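Since the distance correlation is central to this parameter ranking, a self-contained (if naive, O(n²)) estimator is sketched below; it follows the standard double-centring definition and should agree with dedicated implementations such as the dcor package for moderate sample sizes.

import numpy as np

def distance_correlation(x, y):
    # Sample distance correlation for two 1D samples of equal length.
    x, y = np.asarray(x, float), np.asarray(y, float)

    def double_centred(a):
        d = np.abs(a[:, None] - a[None, :])
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    A, B = double_centred(x), double_centred(y)
    dcov2 = max((A * B).mean(), 0.0)  # guard against tiny negative round-off
    denom = np.sqrt(np.sqrt((A * A).mean() * (B * B).mean()))
    return 0.0 if denom == 0 else np.sqrt(dcov2) / denom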
§.§ Fractional run difference
The deviations of the matching parameters are then used to quantify the difference between an ON and OFF run, by calculating the fractional run difference f:
f = ∑_j d_corr,j · (x_on^j - x_off^j)/x_on^j
where j runs over the matching parameters and d_corr,j is the distance correlation of the respective matching parameter (see Table <ref>). The observation with the smallest fractional run difference is chosen as the OFF run for the corresponding ON run.
If no OFF run fulfilling all matching criteria can be found, only the parameters with the largest influence (top half of Table <ref>) were used for the matching.
In this case, the fractional run difference computed with all matching parameters can be large, and the sub-optimal matching is then taken into account by increased systematic errors (as will be described in Section <ref>).
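The selection logic can be summarised by the following sketch; the parameter names and weights in D_CORR are placeholders standing in for the distance correlations listed above, and taking the absolute value of the relative deviation is our reading of the definition of f, so that deviations of opposite sign cannot cancel.

# Placeholder weights; in practice these are the distance correlations of the
# significant matching parameters.
D_CORR = {"muon_efficiency": 0.3, "transparency_coefficient": 0.2, "radiometer_temperature": 0.2}

def fractional_run_difference(on_run, off_run, weights=D_CORR):
    # Weighted sum of relative parameter deviations between ON and OFF run.
    return sum(w * abs(on_run[k] - off_run[k]) / abs(on_run[k]) for k, w in weights.items())

def best_off_run(on_run, off_candidates):
    # Pick the OFF run that minimises the fractional run difference f.
    return min(off_candidates, key=lambda off: fractional_run_difference(on_run, off))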
§ BACKGROUND ESTIMATION
§.§ General Method
After matching every ON run with an OFF run, the background model template was normalised to the OFF run. During this process, the background rate R_BG was corrected for minor discrepancies stemming from varying observation conditions, employing R^*_BG = Φ· R_BG· (E/E_0)^-δ, where E_0 = 1TeV is the reference energy and the spectral tilt δ and background normalisation Φ are determined through a 3D likelihood fit of the background model template to an emission-free region in the OFF run <cit.>.
For every ON and OFF run, the energy at which the deviation between true and reconstructed energy, the energy bias, reaches 10%, was estimated. This energy was then set as a safe energy threshold and the data at lower energies discarded. The maximally allowed offset between the reconstructed event direction and the pointing position of the camera was 2.0^∘.
<cit.> was used to create a dataset with a square region geometry of 4^∘× 4^∘ centred around the pointing position of the OFF run. This combines information such as a counts map (the observed number of events passing selection cuts in each bin), expected background map, and exposure. The size of the geometry was chosen such that all of the data passing the thresholds was included in the analysis. For a correct description of the amount of cosmic ray background in the observation, it is important to ensure that no γ-ray source is present in the region to which the parameters of the background model template are fitted.
Therefore, the regions containing previously identified extragalactic γ-ray sources in the OFF run were excluded from the fit of the background model template. Additionally, we excluded a circular region with a radius of 0.5^∘ around the observation target of the respective OFF run, to minimise the amount of γ-ray emission included due to sub-threshold γ-ray sources. The background model template was then fit to the OFF run. Thereafter, only the adjusted background model template was used.
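In Gammapy (assuming the version ≥ 1.0 API), this normalisation step can be sketched as below: the norm and tilt of the field-of-view background model are fitted to the OFF run outside the exclusion regions and then copied, frozen, onto the ON run. The function names and structure are illustrative rather than the exact analysis code.

from gammapy.modeling import Fit
from gammapy.modeling.models import FoVBackgroundModel

def fit_background_to_off_run(off_dataset, exclusion_mask):
    # Fit the background normalisation Phi (norm) and spectral tilt delta (tilt)
    # to the emission-free part of the OFF run.
    bkg = FoVBackgroundModel(dataset_name=off_dataset.name)
    bkg.spectral_model.tilt.frozen = False
    off_dataset.mask_fit = exclusion_mask
    off_dataset.models = [bkg]
    Fit().run(datasets=[off_dataset])
    return bkg.spectral_model.norm.value, bkg.spectral_model.tilt.value

def apply_to_on_run(on_dataset, norm, tilt):
    # Attach a background model with the OFF-run normalisation to the ON run,
    # keeping its parameters frozen so no emission-free region is needed there.
    bkg = FoVBackgroundModel(dataset_name=on_dataset.name)
    bkg.spectral_model.norm.value = norm
    bkg.spectral_model.tilt.value = tilt
    bkg.parameters.freeze_all()
    on_dataset.models = [bkg]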
In addition to minimising the deviations between ON and OFF run by comparing the fractional run difference and identifying the best-matching OFF run, the remaining deviations between the observations can be accounted for by applying correction factors.
One correction applied accounts for the deviation in duration (observation time during which the telescope system can trigger on a signal) between ON and OFF run:
b_i,ON = b_i,OFF· t_ON/t_OFF
with b_i the total number of background events per spatial pixel and t the total duration of the observation.
The second correction applied accounts for the deviation in zenith angle of the observation. Because of the matching within the zenith angle bins used in the computation of the background model template (see Table <ref>), some run pairs can have a deviation of up to 5^∘ in zenith angle Θ_z. The correction factor to account for this deviation is calculated using:
b_ON = b_OFF· p_1 cos(Θ_z)^p_2 .
This correction follows the relation between the cosmic ray rate, and therefore also the trigger rate, which is dominated by hadronic background events, and the zenith angle of the measurement established in <cit.>. The parameters p_1 and p_2 were estimated for every optical phase, by fitting equation (<ref>) to the trigger rate of the H.E.S.S. array for all observations performed in the respective optical phase. An example of such a fit to the trigger rates can be seen in Figure <ref>. The fit parameters for all optical phases, as well as more information about their computation, can be found in Appendix <ref>.
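Both corrections can be applied to the OFF-run background prediction as in the following sketch; interpreting the zenith correction as the ratio of the fitted rate curve evaluated at the two zenith angles is our reading of the relation above (the factor p_1 then cancels).

import numpy as np

def correct_background(bkg_counts, t_on, t_off, zenith_on_deg, zenith_off_deg, p1, p2):
    # Live-time correction: scale the OFF-run prediction to the ON-run duration.
    corrected = bkg_counts * (t_on / t_off)
    # Zenith correction: ratio of the fitted rate model at the two zenith angles.
    cos_on = np.cos(np.radians(zenith_on_deg))
    cos_off = np.cos(np.radians(zenith_off_deg))
    corrected *= (p1 * cos_on**p2) / (p1 * cos_off**p2)
    return corrected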
In the next step, another dataset was created and filled with the reconstructed events observed in the ON run. The background model template was then assigned to this dataset, and the spectral tilt δ and background normalisation Φ derived from the fit of the template to the OFF run were assumed for this dataset. This allows us to adjust the background rate to the different observation conditions without the need to have γ-ray emission-free regions in the ON run.
§.§ Creation of validation datasets
In order to validate the background estimation method described above, the background rate and source parameters obtained with the run-matching approach should be compared to the standard background model template method. To achieve this, we construct validation datasets consisting of observations of gamma-ray sources that are either point-like or exhibit only marginal extension, facilitating the application of both methods. For this purpose, different datasets for all regions of interest (ROI) around the sources were constructed, to estimate the accuracy of the background description using the run-matching approach. For ease of reference, these different cases have been given numbers; an overview can be found in Table <ref>. First, a standard analysis of all ON runs for a ROI is performed as a reference. For this purpose, all observations of the target region are identified, the background model template is fitted to each ON run and the resulting datasets are 'stacked', which means the measured data are summed over all observations, averaged IRFs are created, and only one dataset is returned for every ROI. In the following, this dataset will be referred to as Case 0. Another dataset was constructed in the same way, but with the background rate estimated using the run-matching approach without corrections. This will be referred to as Case 1. One dataset with only the correction for the differences in duration was computed (Case 2) and one where both corrections were applied (Case 3). Two more datasets were constructed to estimate the influence of the systematic errors introduced due to the run-matching approach (see Section <ref>); they are labelled Case 4+ and Case 4-, respectively, for increased and decreased background count rate.
§.§ Derivation of the correlation coefficients and validity intervals
The influence of the respective matching parameters on the background rate is identified by analysing archival H.E.S.S. data taken on the γ-ray source PKS 2155-304 using the direct application of the background model template (Case 0).
PKS 2155-304 was chosen as a test region, as it has been continuously monitored since the commissioning of the first H.E.S.S. telescope and a large amount of data, with varying observation conditions has been acquired. Additionally, the γ-rays in this ROI are contained in a small, well-known region, resulting in a small uncertainty in the background rate.
The background model template was fitted to each observation in this dataset (see Section <ref>), and the number of background events was estimated. The data was then split into three groups depending on the hardware configuration of the telescope array. A total of 791 observations over the three hardware phases was used. A Pearson correlation coefficient between the different matching parameters and the background rate was then computed. The results can be seen in Table <ref>.
To estimate the valid parameter range for every matching parameter, a comparison of the background rate estimated for Case 0 and the run matching approach (Case 3) is carried out for a large number of observations. For this purpose all observation pairs from the sets of observations on PKS 2155-304, indicated in Table <ref>, are used.
For each of these observations, all possible OFF runs are identified. Then, the background rate of all pairs is computed for Case 0 and Case 3, and the deviation of these is calculated as:
Δ R_BG = (R_BG, 0 - R_BG, 3)/R_BG, 3
with R_BG, 0 the background rate of the dataset for Case 0 and R_BG, 3 the background rate for Case 3. Additionally, the deviation Δ x = (x_on - x_off)/x_on of each matching parameter x between ON and OFF run is calculated for each observation pair. We then compute the mean background rate deviation Δ R_BG per Δ x, and define the valid parameter range as the Δ x at which |Δ R_BG| > 10%. This computation is performed individually for all four telescopes, and the smallest value is identified as the upper bound of the validity range. A visualisation of the distribution of Δ R_BG and its mean for the muon efficiency can be seen in Figure <ref>.
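The validity bound can be extracted with a simple binned scan, as sketched below; the number of bins is an arbitrary choice made for illustration.

import numpy as np

def validity_bound(delta_x, delta_rate, n_bins=20, threshold=0.10):
    # Return the |delta_x| bin centre at which the mean background-rate
    # deviation first exceeds the 10% threshold, or None if it never does.
    delta_x = np.abs(np.asarray(delta_x, float))
    delta_rate = np.asarray(delta_rate, float)
    edges = np.linspace(0.0, delta_x.max(), n_bins + 1)
    idx = np.clip(np.digitize(delta_x, edges) - 1, 0, n_bins - 1)
    for i in range(n_bins):
        sel = idx == i
        if sel.any() and abs(delta_rate[sel].mean()) > threshold:
            return 0.5 * (edges[i] + edges[i + 1])
    return None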
§ SYSTEMATIC ERRORS
In order to estimate the systematic uncertainties introduced by employing the run-matching approach, a direct comparison between the background rate for Case 0 and Case 3 is made. For this purpose, a set of observations on a well-known target region is selected. For each of these observations, all possible OFF runs are identified. Then, the background rate of all pairs is computed for Case 0 and Case 3, and the deviation between them is calculated as indicated in Equation <ref>. This deviation, a measure of the systematic shift, is computed for all run pairs and the results are grouped according to the fractional run difference of the respective OFF run. While this comparison is a good estimate of the systematic uncertainty introduced by the run-matching approach, it is limited by the available statistics, and a coarse bin size of Δ f = 0.1 is chosen. Subsequently, the standard deviation of the background deviation was computed in each bin with more than 10 entries, to ensure sufficient statistics for a stable result. The standard deviation was then used as the systematic uncertainty on the background count rate for a run pair with the respective fractional run difference.
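The binned estimate of the systematic uncertainty can be written compactly as follows; the bin width of 0.1 and the minimum of 10 pairs per bin follow the description above, while the function name is ours.

import numpy as np

def systematic_uncertainty_vs_f(f_values, rate_deviation, bin_width=0.1, min_entries=10):
    # Standard deviation of the Case 0 vs. Case 3 background-rate deviation,
    # evaluated in bins of the fractional run difference f.
    f_values = np.asarray(f_values, float)
    rate_deviation = np.asarray(rate_deviation, float)
    edges = np.arange(0.0, f_values.max() + bin_width, bin_width)
    idx = np.digitize(f_values, edges) - 1
    sigma = {}
    for i in range(len(edges) - 1):
        sel = idx == i
        if sel.sum() > min_entries:
            sigma[(edges[i], edges[i + 1])] = rate_deviation[sel].std()
    return sigma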
Because of strong variations in the optical efficiency of the telescopes, the systematic uncertainty can vary for observations obtained at different times. To account for this effect, the systematic uncertainties should be computed for each analysis respectively, by selecting observations recorded in a short time-span around the recording of the ON runs used for the source analysis. In this study, ON runs from three different time periods are used, therefore three different sets of observations for the estimation of the systematic uncertainties are computed. For all three sets observations on the source PKS 2155-304, excluding the observations which are part of the public data release, were used. Further properties of these sets, as well as which source analyses they were used for, can be seen in Table <ref>.
A visualisation of the systematic shift for all three sets can be seen in Figure <ref>. The systematic errors for set 1 and set 2 are comparable, whilst the errors for set 3 are marginally smaller. This is most likely caused by a camera update in 2016 <cit.>. Set 3 also extends to higher fractional run deviations f, since a larger amount of observations on extragalactic sources was acquired in this time period.
The influence of this systematic shift of the background rate on the source parameters is then estimated by computing two additional datasets for each test region. For the computation of these datasets, the fractional run difference of every run pair was used to identify the systematic shift expected for this pair and the background counts per pixel were then increased and decreased using the corresponding systematic factor. These datasets are identified as Case 4+, for the increased background rate and Case 4- for the decreased background rate.
§ VALIDATION
To validate the run-matching approach, data from the H.E.S.S. public dataset release <cit.> was analysed (Case 3) and the results compared to an analysis using the background model template (Case 0), as well as the results reported in <cit.>. Additionally, this work uses observations of sky regions devoid of γ-ray emission (which will be referred to as empty-field observations hereafter) acquired with the H.E.S.S. telescope array. We note that the archival data used for the construction of the background model template, as well as the OFF runs used for these analyses, and the observations of the empty-field observations are proprietary to the H.E.S.S. Collaboration and not publicly available.
The public data release only contains data passing a tight quality selection (for more information see <cit.>). Therefore all observations that are part of the data release can be used for this validation study. The empty-field observations were filtered to only contain runs taken under good atmospheric conditions. The properties of these datasets are listed in Table <ref>.
The spectral and morphological properties of the γ-ray emission from all sources contained in the public data release (see Table <ref>) were acquired by performing a 3-dimensional fit of a spectromorphological model to the dataset. A `stacked' analysis was performed using , i.e. the data is summed over all observations, and the likelihood minimisation of the model fit parameters is carried out over averaged IRFs.
All datasets were prepared using a spatial pixel size of 0.02^∘. The spatial extension of the datasets for each analysis region was chosen such that all events recorded in the observations were included in the analysis, and the energy axis for each dataset was chosen to be logarithmically spaced, with eight bins per decade.
§.§ Validation with empty-field observations
The quality of the background estimation was assessed via the <cit.> significance distribution. For an empty sky region, a Gaussian distribution centred around μ = 0, with a standard deviation of σ = 1, is expected, since the fluctuations of the background counts are Poisson distributed.
To test for the correct description of the background, three datasets comprised of observations centred on the empty-field regions, namely regions around the dwarf spheroidal galaxies Reticulum 2, Tucana 2 and the Sculptor Dwarf Galaxy, were examined. These regions were chosen because no significant γ-ray emission from the sources, as well as within a 4^∘ region around the sources, has been observed with H.E.S.S. <cit.>. Therefore, the regions can be used for an estimation of the background rate without contamination from a mismodelled γ-ray source.
For all regions, the significance of the number of events passing selection cuts in excess of the background prediction was computed. The correlation radius used for the construction of the significance maps is 0.06^∘, corresponding approximately to the point-spread function of H.E.S.S., for all empty-field observations. A Gaussian model is fitted to each significance distribution with the fit results for the different regions given in Table <ref>. An example distribution for the region around the dwarf spheroidal galaxy Tucana 2, and the corresponding Gaussian fit can also be seen in Figure <ref>.
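The Gaussian check of the significance distribution amounts to the following sketch; the exclusion mask and the finite-value filtering are the only assumptions added here.

import numpy as np
from scipy.stats import norm

def fit_significance_distribution(significance_map, exclusion_mask=None):
    # For a well-described background the fitted values should be close to
    # mean 0 and standard deviation 1.
    values = np.asarray(significance_map, float)
    if exclusion_mask is not None:
        values = values[~np.asarray(exclusion_mask, bool)]
    values = values[np.isfinite(values)]
    mu, sigma = norm.fit(values)
    return mu, sigma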
For Case 0, all three regions show a distribution centred around zero and a standard deviation of approximately 1.0, confirming that no γ-ray sources are present. The mean of the distributions for Case 1 indicates an over-prediction of the cosmic-ray background of up to 20%. This over-prediction is slightly decreased if the duration correction is applied (Case 2), and further reduced for all three empty-field regions once the zenith correction is also applied.
The Gaussian fit to the significance histograms of these datasets shows that the nominal value derived for the Case 0 datasets is included in the range covered by the systematic error for all datasets. An example for the shift on the significance distribution caused by the inclusion of the systematic errors can be seen in Figure <ref>.
We also tested the validity of the background estimation method in different energy ranges, using again the three empty-field data sets. For this purpose, the data was divided into 8 logarithmically spaced energy bins, from 0.1TeV to 10TeV. This binning was chosen so that each bin included two energy bins of the initial dataset. The data above 10 TeV were not included in this comparison due to insufficient gamma-ray statistics. We find that for the three data sets, a Gaussian fit to the significance distribution yields comparable results between the Case 0 and Case 3 datasets in all energy bins. The mean and standard deviation for the Gaussian fits in energy bands are quoted in Tables <ref> - <ref> and a visual example for the region around Sculptor can be seen in Figure <ref>.
To check for variation in background counts as a function of energy due to the run matching, the background counts per energy bin for all empty-field-region observations were computed for Case 0 (N_BKG^0) and Case 3 (N_BKG^3). Then, the ratio between these counts, N_BKG^3/N_BKG^0, was computed per energy bin. We find that the number of background counts for the Case 3 datasets is slightly underestimated at lower energies and slightly overestimated at higher energies compared to the number of counts derived for the Case 0 datasets. This deviation is, however, found to be below 6% for all energies and can be seen in Figure <ref>.
§.§ Public data release data
After verifying the background prediction in an empty sky region, the best-fit values for a source analysis should be verified between the different background estimation techniques.
For this purpose, data from the public data release of H.E.S.S. <cit.> was analysed for both background estimation methods, and the results are compared to those derived in <cit.>. As a spectral model for all datasets, a simple power-law was chosen and in all cases, the flux normalisation N_0 at a reference energy E_0 and the spectral index Γ were used as fit parameters. The power-law model is defined as:
dN/dE = N_0 ·(E/E_0)^-Γ .
The reference energy for all datasets was chosen to be equal to the values used in <cit.>, for the sake of comparison, and can be found in Table <ref>.
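In Gammapy, the 3D model fit of each case can be sketched as below (assuming the Gammapy ≥ 1.0 API); the model parameters and the Crab-like source position are placeholders, and the commented lines indicate how the model would be attached to a stacked Case 3 dataset that is assumed to already carry its frozen background model.

from gammapy.modeling import Fit
from gammapy.modeling.models import (
    PointSpatialModel,
    PowerLawSpectralModel,
    SkyModel,
)

spectral = PowerLawSpectralModel(
    index=2.6, amplitude="4e-11 cm-2 s-1 TeV-1", reference="1 TeV"
)
spatial = PointSpatialModel(lon_0="83.63 deg", lat_0="22.01 deg", frame="icrs")
source = SkyModel(spectral_model=spectral, spatial_model=spatial, name="source")

# stacked_dataset is assumed to be the stacked Case 3 MapDataset:
# stacked_dataset.models = [source] + list(stacked_dataset.models or [])
# result = Fit().run(datasets=[stacked_dataset])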
The correlation radius used for the computation of the significance maps and histograms for all datasets is 0.06^∘, except the dataset centred on the Crab Nebula which is 0.1^∘, due to the small size and limited statistics of the dataset.
For the sake of comparing the best-fit results between this analysis and the results obtained in <cit.>, the same spatial models are chosen and should not be interpreted as yielding the most accurate description of the region. The Crab Nebula, as well as PKS 2155-304, are described using a Point Source Model. For a more in-depth discussion of these sources, see <cit.> and <cit.> respectively. The pulsar wind nebula MSH 15-52 (analysed in detail in <cit.>) is described by an elongated disk model.
For the supernova remnant RX J1713.7-3946, no pre-defined spatial model could be used because of the complicated morphology of the source. For this reason an `excess template' was constructed (for more information about the construction of this excess template see <cit.>). More information describing the emission from RX J1713.7-3946 can be found in <cit.>.
An additional challenge for the analysis of these datasets is that, depending on the source location, misclassified cosmic rays are not the only source of background events. Observations centred in the galactic plane will also include events from the galactic diffuse emission <cit.>. For the background estimation employing the background model template, the galactic diffuse emission can partly be absorbed by increasing the normalisation of the background model template. Since, in the case of the run-matching approach, the background model template is normalised on observations outside of the galactic plane, absorption of the diffuse emission into the background is not possible.
This effect has been observed in the analysis of the datasets centred on the regions around the sources MSH 15-52, located at a galactic latitude of b=-1.19^∘, and RX J1713.7-3946, with a galactic latitude of b=-0.47^∘. For both datasets, an excess signal across the whole FoV was detected at low energies. Whilst it is likely that the observed excess emission is galactic diffuse emission, the data used in this validation is not extensive enough to model this signal or derive any of its physical properties. Therefore, we adopted a strict energy threshold for the analysis of the regions around MSH 15-52 and RX J1713.7-3946, effectively excluding the energy range in which a significant influence of diffuse emission can be observed for H.E.S.S.
This energy threshold was evaluated separately for each analysis for Case 3. For this purpose, the number of background and signal events in a source-free region of the stacked dataset was estimated. The first energy bin after which the absolute difference between the number of signal and background events is less than 10% was adopted as the energy threshold for the analysis. For the analysis of both MSH 15-52 and RX J1713.7-3946, this resulted in an energy threshold of 560GeV.
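Our reading of this criterion is sketched below; the counts and background are per-energy-bin totals in the source-free region, and returning the lower edge of the first compliant bin is an assumption.

import numpy as np

def diffuse_energy_threshold(energy_edges, signal_counts, background_counts, tolerance=0.10):
    # First energy bin in which measured counts and predicted background
    # agree to better than the given tolerance.
    signal_counts = np.asarray(signal_counts, float)
    background_counts = np.asarray(background_counts, float)
    rel_diff = np.abs(signal_counts - background_counts) / background_counts
    compliant = np.nonzero(rel_diff < tolerance)[0]
    return energy_edges[compliant[0]] if compliant.size else energy_edges[-1]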
In addition to the increased energy threshold, a second modification of the background estimation needs to be included for the datasets for MSH 15-52 and RX J1713.7-3946. Because they were taken in 2004, shortly after the commissioning of the H.E.S.S. array, the optical phase is very short, due to fast system degradation, and few observations were taken outside of the galactic plane in this time period. For this reason, a sufficient number of OFF runs cannot be found within this optical phase, and the first two optical phases have been combined.
The best-fit parameters for the datasets created using the run-matching approach for all sources can be found in Table <ref>. Additionally, the best-fit values derived in the previous analysis, as well as the results derived from the Case 0 datasets are also indicated in the table for comparison.
A visual comparison of the fit parameters for all sources can be found in Appendix <ref>.
§.§.§ Point-like sources
The comparison of the best-fit parameters for the analysis of the γ-ray emission from the Crab Nebula and PKS 2155-304 can be found in Figure <ref> and Figure <ref> respectively.
The left panel of Figure <ref> shows a significance map for the region around the Crab Nebula. Indicated in the significance map are the 5 σ and 8 σ contours for Case 3 in dashed pink lines, and the contours for Case 0 as solid blue lines. The contours show good agreement.
The right panel of Figure <ref> shows the distribution of significance entries in the respective maps. A region of 0.5^∘ radius around the source has been excluded to avoid contamination by residual γ-ray emission from the source. The significance map and histogram of the region around PKS 2155-304 can be found in Figure <ref>. The distribution of the background counts shows a shift between the datasets for Case 0 and Case 3, indicating a slight over-prediction of the background rate for the Case 3 dataset. This shift is, however, within the expected range derived from a study of the influence of the systematics (see Table <ref>).
The best-fit results of the likelihood minimisation for the Crab Nebula can be seen in Figure <ref>. All parameters except the best-fit position are comparable within statistical errors with the results derived in <cit.>. The agreement of the Right Ascension between Case 0 and Case 3 suggests that a change in the pointing reconstruction might be responsible for this deviation. A similar deviation can be observed for the best-fit parameters of PKS 2155-304 (see Figure <ref>), as well as a slight decrease in flux normalisation compared to the results derived in <cit.>. For both sets of SEDs, depicted in Figure <ref> and Figure <ref>, good agreement is seen. Only in one energy bin, around 3TeV, is the deviation above 1 σ for the SED derived from the Crab Nebula. A reason for this deviation could be a change in the binning of the background model template that has been incorporated since the previous publication of <cit.>.
§.§.§ MSH 15-52
A significance map of the region around MSH 15-52, as well as the distribution of background can be seen in Figure <ref>. In this region a shift of the background distribution towards a smaller number of cosmic-ray signals can be observed. This shift is most likely caused by a lack of OFF runs in this optical phase, making it necessary to choose OFF runs from the next optical phase. It is, therefore, possible that the time-span between ON and OFF run is larger than 4years resulting in a high probability that the optical efficiency of the telescopes between both observations differs substantially. The shift can, however, be accounted for by the systematic uncertainty introduced by the run matching approach.
The best-fit values for all parameters agree within the errors (see Figure <ref>). The strong dependence of the fitted source extension on the correct estimation of the background rate is well illustrated by the large systematic errors introduced by the run-matching. The SED derived for this source shows good agreement, within the errors, with the SED computed in <cit.> (Figure <ref>).
§.§.§ RX J1713.7-3946
Figure <ref> shows the significance map and distribution for the region around RX J1713.7-3946. Good agreement between the background rate estimated from Case 0 and Case 3 is again seen.
The best-fit parameters can be seen in Figure <ref>. The spectral index and flux normalisation for RX J1713.7-3946 agree with the results of the likelihood minimisation within the error. A comparison of the SED from RX J1713.7-3946 can be seen in Figure <ref>. The upper panel of the Figure shows the SED derived from the run-matching approach, compared to the SED derived in <cit.>, while the lower panel shows the deviation between both sets of SED. Again, a good agreement can be observed.
To verify that this background estimation is also stable for a large number of observations, a dataset containing 53.4 hours of observations centred on RX J1713.7-3946 was analysed.
The results of this analysis show a good agreement between the Case 0 and Case 3. Since this dataset is not part of the public dataset release, the SED and best-fit parameters will not be presented in this work, but a more detailed analysis of this region using the same dataset can be found in <cit.>.
§ CONCLUSIONS
In this work, we present a method to combine the classical ON/OFF background estimation technique used by IACT arrays with a 3D background model template. This combination of techniques allows us to remove a major restriction of each method. Since the 3D background model template has been created from a large amount of OFF runs, it is robust and not subject to large statistical uncertainties. However, this method requires source-free regions in the FoV of the observation in order to normalise the background estimation for varying observation conditions. The classical ON/OFF background estimation does not require a source-free region, is however very sensitive to variations in observation conditions.
Even though we are able to achieve good agreement with the previously published results for all sources, the datasets used here already give an indication of the limitations of this technique. To achieve a comparable background rate between ON and OFF run, both runs need to be chosen in a period with comparable optical efficiency. Due to a limited amount of available OFF runs in these periods, large systematic errors can be introduced on the source parameters. This effect could be mitigated by constructing a background model template suitable for long-term hardware conditions, with corrections to the normalisation designed to account for short term variations, and therefore expand the pool of viable OFF runs for these periods.
While this study presents a detailed analysis of the systematic uncertainty on the run-matching approach, an additional source of uncertainties, which has not been examined in detail, is introduced because of the stacking of the observations. This method of combining the observations for the analysis can produce a slight gradient in the number of background counts over the FoV. While this effect is small, it should nevertheless be kept in mind when interpreting the results achieved by this method and becomes particularly relevant for large, extended sources.
An additional problem of this framework is how to treat the galactic diffuse emission. In this work, we have increased the energy threshold to exclude the influence of the diffuse emission, this is however not ideal. A better approach would be to construct an additional spectromorphological template from interstellar gas tracers.
While these disadvantages show that the run-matching approach presented here cannot compete with the 3D background model template method in regions containing few sources with a small extension, the non-dependence on source-free regions in the observation is a major advantage in sky regions with many sources or for the detection of diffuse extended structures, which could not be observed using the background model template alone. The presented background estimation technique is an extension of the background estimation using a 3D background model template, offering an alternative for the analysis of regions filled with significant emission, where otherwise other techniques, which are subject to larger statistical and systematic fluctuations, would need to be used. This application is vital considering that there are currently no Water Cherenkov Detectors observing the southern γ-ray sky, restricting the observable source population to structures that can be detected with the comparatively small FoV of the H.E.S.S. array. It also has the potential to increase the population of sources observable with CTA, opening up the possibility of using its superior angular resolution to expand the knowledge on large extended structures previously uniquely detected by the Water Cherenkov Detectors.
This work made use of data supplied by the H.E.S.S. Collaboration. We would like to thank the H.E.S.S. Collaboration, especially Stefan Wagner, Spokesperson of the Collaboration, as well as Nukri Komin, chair of the Collaboration Board, and Markus Boettcher, Chair of the Publication Board, for allowing us to use the data presented in this publication.
We also gratefully acknowledge the computing resources provided by the Erlangen
Regional Computing Center (RRZE).
This work was supported by the German
Deutsche Forschungsgemeinschaft, DFG project
number 452934793.
§ ZENITH ANGLE CORRECTION
In order to account for differences in mean zenith angle of the observation between ON and OFF run, equation (<ref>) is fitted to the array trigger rates in every optical phase. The resulting fit parameters are given in Table <ref>.
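The per-phase fit can be reproduced with a standard least-squares fit, as in the sketch below; the initial guess is arbitrary and the trigger rates and zenith angles are assumed to be given as plain arrays.

import numpy as np
from scipy.optimize import curve_fit

def rate_model(zenith_deg, p1, p2):
    # Trigger-rate model R(theta_z) = p1 * cos(theta_z)**p2.
    return p1 * np.cos(np.radians(zenith_deg)) ** p2

def fit_zenith_dependence(zenith_deg, trigger_rate):
    (p1, p2), _ = curve_fit(rate_model, zenith_deg, trigger_rate, p0=(1e3, 3.0))
    return p1, p2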
§ ENERGY DEPENDENCE
A comparison of the number of background counts for all observations used for the analysis of the empty-field regions between the Case 0 and Case 3 datasets has been made. The ratio between the background counts can be seen in Figure <ref>. The shaded band depicts the error on the number of background counts estimated from the matched pairs used for the Case 4- and Case 4+ datasets.
The datasets computed for the analysis of the empty-field regions were also divided into bins, such that one bin consists of two energy bins of the original datasets. Then significance histograms were computed for all bins, and a Gaussian fit was performed. The significance histograms as well as the Gaussian fits for all bins for the dataset around Sculptor can be seen in Figure <ref> and Table <ref>. The results for the regions around Reticulum 2 and Tucana 2 have been summarised in Table <ref> and Table <ref> respectively.
§ FIT RESULTS
Table <ref> contains the best-fit values obtained using the background model template and run-matching approaches, for all source regions analysed in this study. Additionally, the best-fit values from <cit.> are listed for comparison. Figure <ref>, Figure <ref> and Figure <ref> show a visual comparison of the best-fit values including the systematic errors for the regions around the Crab Nebula, PKS 2155-304 and RX J1713.7-3946 respectively.
§ ADDITIONAL SIGNIFICANCE MAPS AND SPECTRA
Here we show the significance maps and distributions of the region around PKS 2155-304 (Figure <ref>), MSH 15-52 (Figure <ref>) and RX J1713.7-3946 (Figure <ref>). Although a small shift in the background distributions for the datasets estimated using the run-matching approach can be observed, this shift is within the systematic uncertainty (see Table <ref>).
The SEDs estimated for the Crab Nebula, PKS 2155-304 and MSH 15-52 are shown in Figures <ref>, <ref> and <ref> respectively. The lower panel of these figures shows the deviation between the two sets of fluxpoints and the best-fit spectral model derived in the analysis of the respective Case 3 datasets, defined by (x_{1/2} - x_model)/x_model, with x_1 the differential energy flux in the respective energy bin for the reference fluxpoints derived in <cit.>, x_2 the differential energy flux for the fluxpoints derived from the Case 3 datasets and x_model the differential energy flux estimated from the best-fit spectral model.
|
http://arxiv.org/abs/2409.02558v1 | 20240904092641 | Low-characteristic-impedance superconducting tadpole resonators in the sub-gigahertz regime | [
"Miika Rasola",
"Samuel Klaver",
"Jian Ma",
"Priyank Singh",
"Tuomas Uusnäkki",
"Heikki Suominen",
"Mikko Möttönen"
] | quant-ph | [
"quant-ph",
"cond-mat.supr-con",
"physics.app-ph"
] |
APS/123-QED
Corresponding author: [email protected]
QCD Labs, QTF Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 13500, FI-00076 Aalto, Finland
QCD Labs, QTF Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 13500, FI-00076 Aalto, Finland
QCD Labs, QTF Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 13500, FI-00076 Aalto, Finland
QCD Labs, QTF Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 13500, FI-00076 Aalto, Finland
QCD Labs, QTF Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 13500, FI-00076 Aalto, Finland
QCD Labs, QTF Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 13500, FI-00076 Aalto, Finland
Corresponding author: [email protected]
QCD Labs, QTF Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 13500, FI-00076 Aalto, Finland
QTF Centre of Excellence, VTT Technical Research Centre of Finland Ltd., P.O. Box 1000, 02044 VTT, Finland
§ ABSTRACT
We demonstrate a simple and versatile resonator design based on a short strip of a typical coplanar waveguide shorted at one end to the ground and shunted at the other end with a large parallel-plate capacitor. Due to the shape of the structure, we coin it the tadpole resonator. The design allows tailoring the characteristic impedance of the resonator to especially suit applications requiring low values. We demonstrate characteristic impedances ranging from Z_c=2 to 10 and a frequency range from f_0=290MHz to 1.1GHz while reaching internal quality factors of order Q_int=8.5e3, translating into a loss tangent of tan(δ)=1.2e-4 for the aluminium oxide used as the dielectric in the parallel plate capacitor. We conclude that these tadpole resonators are well suited for applications requiring low frequency and low characteristic impedance while maintaining a small footprint on chip. The low characteristic impedance of the tadpole resonator renders it a promising candidate for achieving strong inductive coupling to other microwave components.
Low-characteristic-impedance superconducting tadpole resonators in the sub-gigahertz regime
Mikko Möttönen
September 9, 2024
===========================================================================================
§ INTRODUCTION
Superconducting quantum circuits have provided unforeseen opportunities to tailor quantum systems for various purposes. This development has led to a new field of quantum microwave engineering, serving a purpose beyond fundamental research with the goal of realizing useful quantum devices <cit.>. This field has already produced numerous groundbreaking results in quantum computation <cit.>, communication <cit.>, simulation <cit.>, and sensing <cit.>.
The coplanar waveguide (CPW) resonator <cit.> is perhaps the most utilized standard building block in quantum engineering. A CPW resonator is simply a strip of transmission line with shorted or open-circuit boundary conditions at each end. The advantages of CPW resonators include ease of modelling, a wide range of available parameters and design options complemented by simple fabrication <cit.>. Superconducting CPW resonators with frequencies in the gigahertz range and internal quality factors of several hundred thousands can be routinely achieved <cit.>.
The physical length of a superconducting CPW resonator is of the same order of magnitude as the wavelength of the fundamental microwave field mode in the resonator. In a typical setting regarding superconducting quantum circuits, the CPW resonators are fabricated on a low-loss substrate deposited with a superconducting metal film. The relative permittivity of the substrate largely determines the wavelength of the microwave mode, and therefore the physical length of the resonator. For a typically used silicon substrate with ϵ_r≈ 11.9, a resonator in the 5 GHz regime has a length of roughly 10 mm. This can be easily fitted on a usual chip size by meandering, but the situation is different at sub-gigahertz frequencies. At 1 GHz, the CPW resonator already has a length of about 60 mm, exceeding the typical chip dimensions sixfold. This size limits the number of devices per chip considerably and at even lower frequencies may prevent using CPW resonators in practice. More intricate physical geometries, such as twisting the resonator into a spiral <cit.>, may offer a solution in some cases, but may also raise other concerns with impedance matching, grounding, and parasitic modes <cit.>, and render coupling to other devices somewhat challenging.
Apart from problems arising from the large physical size of the low-frequency CPW resonators, another possible issue emerges when coupling these low-frequency resonators with high-frequency components. Theoretical models describing the quantum physics of superconducting quantum circuits typically only consider the fundamental modes of the circuit components. This approximation is fine as long as all the circuit components reside within the same relatively narrow frequency range. However, a low-frequency CPW resonator has a dense spectrum of harmonic modes in contrast to a high-frequency component, which may cause problems upon coupling the two. For example, a λ/2 CPW resonator with a 500 MHz fundamental mode frequency has ten modes below 5 GHz. If these components are to be coupled for their fundamental modes, driving the 5 GHz mode may excite multiple unwanted modes in the 500 MHz resonator. For instance, the detrimental effects of spurious higher harmonics on the qubit lifetime and coherence are a known problem <cit.>.
Alternatives to CPW resonators in superconducting circuits have been demonstrated by using interdigital <cit.> and parallel plate <cit.> capacitors. Although the demonstrated devices solve some issues, most of them also display challenges such as difficulties to align with the fabrication process of a superconducting circuit, complex structures, higher harmonics, excessive losses, or large size. Furthermore, most of the earlier research focuses on devices in the few gigahertz frequency range, leaving uncharted territories in the sub-gigahertz regime.
In this article, we demonstrate an extremely simple and versatile lumped-element resonator design based on a strip of a traditional CPW transmission line shunted with a parallel-plate capacitor (PPC), thus forming a structure reminiscent of a tadpole. Combining the best of both worlds, our design is applicable in a wide range of frequencies, especially in the low-frequency end of the spectrum, while maintaining a relatively small on-chip footprint, retaining the ease of fabrication and implementation, and boasting an extremely robust structure. Importantly, the present design allows for the tuning of the inductance to capacitance ratio, i.e., the characteristic impedance of the resonator, in a wide range below the typical value of Z_c ∼ 50 Ω, reaching values of the order of Z_c ∼ 1 Ω. This has the benefit of confining the magnetic field of the resonator mode into a small spatial volume facilitating strong inductive coupling to the resonator. This is potentially beneficial for realizing certain types of superconducting circuits, such as the devices proposed in the references <cit.>.
The physical size of the tadpole resonator is compared to two other resonator designs in Fig. <ref>. Note that the tail of the tadpole may be more heavily meandered or even replaced by a meandering wire similar to that in the compact inductor-capacitor resonators, thus rendering the tadpole resonator the smallest of the considered designs.
§ LOW-CHARACTERISTIC-IMPEDANCE TADPOLE RESONATORS
§.§ Design and analysis
Let us begin by discussing our design on a general level. A detailed schematic of the resonator design can be found in Fig. <ref> for reference. Our resonator design consists of a strip of typical CPW line with one end shorted to the ground and the other shunted to the ground with a large PPC. In the limit of a large PPC and a short CPW strip, the CPW strip essentially provides the inductance and the PPC the capacitance, i.e., C_PPC ≫ C_CPW, where C_PPC and C_CPW are the capacitances of the PPC and the CPW, respectively. Consequently, the physical size of this structure can be much smaller in all dimensions than the wavelength of the resonator mode it houses. This structure can be accurately modelled as a lumped-element resonator where the magnetic field is localized in the short CPW strip and the electric field resides within the PPC. Importantly, this scheme allows for a strong inductive coupling via the CPW strip with a relatively low inductance of the coupler since the total inductance of the resonator is low.
One advantage of the current design is that it is straightforward to estimate the resonance frequency of our resonator, even analytically, given that the effective permittivity of the CPW, ϵ_eff, and the capacitance per unit area, c_0, of the PPC are known. Although analytical expressions for ϵ_eff exist <cit.>, it can be reliably found either by finite element electromagnetic simulations or by measurement. The capacitance per unit area, on the other hand, is typically known for established fabrication recipes with corresponding characterization data, but can also be evaluated analytically based on the relative permittivity of the chosen dielectric. Once these values are known, one can estimate the capacitance and the inductance of an arbitrary-length CPW strip by the well-known results <cit.> obtained by conformal mapping methods <cit.>. Furthermore, as long as the dielectric of the PPC is thick enough, so that there is no inductive shunt to ground through the dielectric, the capacitance of the PPC can be evaluated as C_PPC = c_0 A, where A is the surface area of the PPC. The frequency of the resonator is thus given by
f_0 = [2π√(L(C_PPC + C_CPW))]^-1,
where L is the inductance of the CPW strip, i.e., the total inductance of the tadpole resonator, and C_CPW is the small capacitance contribution of the CPW strip. We ignore other possible sources of stray capacitance as they will be tiny as compared to C_PPC. Also note that we neglect the kinetic inductance contribution to the total inductance here, which is typically found to be a good approximation <cit.>. In addition to the resonator frequency, we define the characteristic impedance of the resonator as Z_c = √(L/(C_PPC + C_CPW)).
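As a rough, hedged illustration of how these lumped-element expressions can be evaluated in practice, the short Python sketch below computes f_0 and Z_c from assumed parameter values; the CPW inductance and capacitance per unit length, the strip length, and the PPC plate area used here are illustrative placeholders rather than the device parameters of this work.

import numpy as np

# Assumed (placeholder) parameters, not the measured device values.
L_per_m = 4.0e-7      # CPW inductance per unit length (H/m), assumed
C_per_m = 1.6e-10     # CPW capacitance per unit length (F/m), assumed
length = 2.0e-3       # CPW strip length (m), assumed
c0 = 1.39e-3          # PPC capacitance per unit area (F/m^2) = 1.39 fF/um^2
A = 0.25e-6           # PPC plate area (m^2), assumed

L = L_per_m * length              # total inductance of the tadpole
C_cpw = C_per_m * length          # small CPW capacitance contribution
C_ppc = c0 * A                    # parallel-plate capacitance

f0 = 1.0 / (2.0 * np.pi * np.sqrt(L * (C_ppc + C_cpw)))
Zc = np.sqrt(L / (C_ppc + C_cpw))
print(f"f0 = {f0 / 1e6:.1f} MHz, Zc = {Zc:.2f} Ohm")

With these placeholder values the sketch yields a resonance around 300 MHz and a characteristic impedance of order 1 Ω, illustrating the parameter regime discussed above.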
We couple our resonators to a feed line by a notch-type coupling at the CPW strip, therefore achieving a mixed coupling with inductive and capacitive nature rather than the typical purely capacitive coupling. We leave a small strip of ground metal in between the resonator CPW strip and the feed line in order to not break the ground plane and hence to reduce the possibility of encountering spurious slotline modes. Although we limit ourselves to a relatively weak coupling here, the design naturally supports strong inductive coupling since the magnetic field is confined to a small volume <cit.>. Such strong inductive coupling may be achieved, for instance, via a SQUID <cit.>.
Let us briefly discuss the internal loss sources of the tadpole resonator. Currently, internal quality factors of several hundred thousand in CPW resonators can be achieved routinely <cit.>. The PPC, on the other hand, may introduce significant additional losses since it adds material prone to quantum two-level systems (TLSs) and losses thereof, such as additional dielectrics, metal–dielectric interfaces and metal–vacuum interfaces <cit.>. Thus it is reasonable to model the tadpole as a lumped-element resonator where the losses are dominated by the dielectric losses of the PPC, and an accurate estimate of the loss tangent of the PPC dielectric is provided by the inverse of the internal quality factor: tan δ = 1/Q_int.
§.§ Fabrication and experimental methods
Our CPW strip has a typical cross-sectional geometry with the width of the center conductor w = 10 μm and the gap between the center conductor and the ground plane s = 6 μm. For the sake of simplicity, we equip all our resonators with identical CPW strips of length 2000 μm, but the length can, of course, be chosen differently to tune the characteristic impedance of the resonator. Furthermore, we fix the thickness of the dielectric layer in the PPC to d = 42 nm, and only vary the surface area of the PPC to tune the frequency of the resonators. Since we use Al2O3 as the dielectric in the PPC, the 42-nm thickness of the layer is more than enough to suppress any leakage through the oxide and its inductive contribution to the PPC <cit.>. We couple six tadpole resonators to one transmission line for multiplexed readout using a notch-type configuration <cit.>. We design two sets of six resonators: one set with designed frequencies ranging from 280 MHz to 450 MHz and another with frequencies ranging from 450 MHz to 1.1 GHz.
We fabricate all of the twelve tadpole resonators on a single chip of 10 mm x 10 mm high-purity, high-resistivity (ρ > 10 kΩ·cm) silicon substrate with a thermally grown 300-nm layer of silicon oxide. The total substrate thickness is 675 μm. A Nb thin film of 200 nm is sputtered on top of the oxide layer. The niobium structures are defined onto AZ5214E photoresist using a Heidelberg Instruments MLA150 maskless aligner followed by a reactive ion etching process in order to remove the niobium from the defined areas.
For the PPC, we first use atomic-layer deposition (ALD) to grow the 42-nm dielectric layer of Al2O3 on the chip in a Beneq TFS-500 system. Next, we protect the dielectric layer at the desired capacitor regions with AZ5214E resist and wet-etch the rest of the aluminium oxide away with a mixture of ammonium fluoride and hydrofluoric acid. Before depositing the top metal of the PPC, we first use argon milling to remove the intrinsic oxide from the niobium contact pad in order to ensure a proper galvanic contact. As the last step before lift-off, dicing, and bonding, we deposit a 30-nm aluminium layer in an electron beam evaporator for the PPC top metal.
For characterization purposes, we cool down the sample in a commercial cryogen-free dilution refrigerator with a base temperature of about 25 mK. We measure the S_21 transmission coefficient of each resonator with a Rohde & Schwarz ZNB40 vector network analyzer. The sample is mounted to a sample holder and connected to the electronics via aluminium bond wires. The room temperature control electronics are connected to the sample via coaxial cables with a total attenuation of 80 dB across the levels of the cryostat.
§ RESULTS
We extract the fundamental resonance frequencies and the quality factors of the tadpole resonators from the measured microwave transmission coefficients through the feed line with the help of a circular fit in the in-phase–quadrature-phase (IQ) plane, thus utilizing the complete data offered by the transmission coefficient as a function of frequency <cit.>. The extracted internal and external quality factors, Q_int and Q_ext, respectively, are presented in Fig. <ref> as a function of the probe power through the feed line. The fundamental resonance frequencies, f_0, and characteristic impedances of the resonators, Z_c, along with other relevant parameters can be found in Table <ref>.
From the values provided in Table <ref>, we observe that we can indeed reach very low characteristic impedances for the tadpole resonator. The tunability of the impedance is not limited to this range, however. By varying the length of the CPW strip one can reach values even lower than presented here, while obtaining higher values is also naturally possible.
Let us next discuss the quality factors. Figure <ref>(a) shows that, on average, we can reach internal quality factors of the order of several thousands at reasonably low probe powers. We find an average internal quality factor of 5.2e3 at the intermediate probe power of -116dBm. The internal quality factor increases linearly with probe power in the intermediate power range and displays saturation at both, low and high power limits. Note that the higher frequency resonators tend to exhibit saturation at higher power at the low-power regime, which is expected as per the higher photon energy. We find that in the low-power limit, the average internal quality factor saturates at Q_i=2.3e3 and in the high power limit at Q_i=8.5e3. The single-photon probe powers, P_in, are given in Table <ref>. As a final remark from Fig. <ref>(a), we note that the resonator D is an outlier, as its internal quality factor is significantly lower, even for a high probe power, as compared with the other resonators.
The external quality factors presented in the lower panel of Fig. <ref> are about an order of magnitude higher than the internal quality factors, confirming the rather weak coupling to the feed line. As expected, the external quality factors exhibit no dependence on power. Here, we note that resonator B seems to be somewhat of an outlier in addition to resonator D since it has a considerably higher external quality factor than the other resonators, especially at high probe power, even though the designed coupling is identical for all resonators. Resonator D was already deemed an outlier above based on the internal quality factor. It may be that these two resonators have a weaker coupling to the transmission line due to some fabrication inconsistency resulting in poor signal to noise ratio, and thus, appear as outliers in the data.
The above-mentioned linear behaviour with saturation at both ends of the power spectrum is expected and supports the TLS loss model of the PPC dielectric <cit.>.
Furthermore, we measured a control sample of CPW resonators without PPCs, fabricated with the same process, and found the quality factor of such resonators to be of order 3e5. This supports the assumption that most of the losses arise from the PPC dielectric.
Let us next compare our lumped-element model for the resonator frequency against the measured data. In Fig. <ref>(a), we fit the model, defined by Eq. (<ref>), to the measured resonator frequency data as a function of the PPC plate area. The only fitting parameter is the capacitance per unit area, as we do not have a recent verification for that for the used fabrication recipe. We find the effective permittivity of the CPW, ϵ_eff, by finite element electromagnetic simulation and estimate the capacitance and inductance per unit length of the CPW analytically <cit.>. From this fit we find the capacitance per unit area to be c_0 = 1.39 fF/μm^2, which matches very well the value c_0 = 1.4 fF/μm^2 reported for the same recipe in Ref. <cit.>. From Table <ref> we see that, on average, the fitted model can estimate the resonance frequency within 1.7 % of the measured value. In Fig. <ref>(b), we show the total capacitance, C_PPC + C_CPW, as a function of the PPC plate area. The total capacitance values are obtained by solving Eq. (<ref>) for the total capacitance and plugging in the measured resonance frequencies and the determined c_0. The data are fitted with a simple straight line to highlight the linear dependence of the total capacitance on the PPC area. Based on these results, the tadpole resonator can be described by the lumped-element model to a remarkable precision.
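To make the fitting step concrete, the hedged Python sketch below extracts c_0 by least-squares fitting of f_0(A) = [2π√(L(c_0 A + C_CPW))]^-1 to frequency-versus-area data; the fixed L and C_CPW values and the synthetic (A, f_0) points are placeholders standing in for the measured data and simulated CPW parameters of this work.

import numpy as np
from scipy.optimize import curve_fit

L, C_cpw = 0.9e-9, 0.3e-12            # assumed fixed CPW inductance and capacitance

def f0_model(A, c0):
    # lumped-element resonance frequency as a function of PPC plate area
    return 1.0 / (2.0 * np.pi * np.sqrt(L * (c0 * A + C_cpw)))

A_data = np.array([0.05, 0.1, 0.2, 0.4]) * 1e-6          # plate areas (m^2), placeholders
rng = np.random.default_rng(1)
f_data = f0_model(A_data, 1.39e-3) * (1 + 0.01 * rng.standard_normal(A_data.size))

popt, _ = curve_fit(f0_model, A_data, f_data, p0=[1e-3])  # fit c0 only
print(f"c0 = {popt[0] * 1e3:.2f} fF/um^2")                # convert F/m^2 to fF/um^2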
In addition to the above characterization of the tadpole resonators at the base temperature of the cryostat, we present the temperature dependence of the internal quality factors and frequencies of resonators A–L in Fig. <ref>. We find that the quality factors remain relatively constant or slightly increase with increasing temperature and that the resonance frequencies increase with temperature. Both observations are expected in a system where the intrinsic dissipation is dominated by TLS losses <cit.>. Furthermore, we fit the resonance frequency data with a model describing the temperature dependence of the resonance frequency of a resonator within the TLS model <cit.> and find a good agreement with the experimental data. These findings strongly support the use of the TLS model of losses for our devices.
As shown above, most of the internal losses of the tadpole resonator can be attributed to the TLS losses in the PPC dielectric. Thus, assuming the lumped-element model for the tadpole resonator, one can extract the loss tangent of the Al2O3 used as the dielectric in the PPC. Based on the extracted internal quality factors, we find the average loss tangent to vary between tan(δ)=1.2e-4 and tan(δ)=4.3e-4 as a function of probe power, as shown in Fig. <ref>(a).
§ CONCLUSIONS
We demonstrated a simple and versatile low-characteristic-impedance lumped-element resonator design based on a strip of a conventional CPW transmission line shunted with a parallel-plate capacitor, resulting in a structure shaped like a tadpole. We fabricated twelve tadpole resonators with frequencies in the range of 290 MHz to 1.1 GHz and characterized them at subkelvin temperatures, reaching internal quality factors of the order of Q_int = 1e4. We further demonstrated that the resonator can, indeed, be considered as a lumped-element resonator where most of the losses arise from the dielectric losses of the PPC dielectric, translating into a loss tangent of order tan(δ) = 1/Q_int = 1.0e-4 for the used Al2O3.
We further showed that the frequency of the tadpole resonator can be modelled by a simple analytical model to a high accuracy. Typically, the values of the physical quantities needed for the model are known for well-established recipes, but the values can also be found by finite-element electromagnetic simulation or even by analytical expressions and tabulated values for a rough approximation of the resonance frequency.
While the tadpole resonator offered only a moderate internal quality factor, it boasts a number of advantageous properties including versatility, relatively simple fabrication process, small footprint on the chip even at extremely low frequencies, tunable characteristic impedance in fabrication, and inherent capability of strong inductive coupling using low coupling inductance. This renders the tadpole resonator a viable candidate for multiple novel low-frequency applications where reaching top-notch internal quality factors is not of great importance. For instance, the SQUID-mediated inductive coupling proposed in Refs. <cit.> would benefit significantly from the concentrated magnetic field of the tadpole resonator resulting in strong coupling.
We acknowledge the support from the members of the QCD and PICO groups at Aalto University. Especially, we thank Jukka Pekola, Bayan Karimi, Christoforus Satrya, Qiming Chen, and Suman Kundu for fruitful scientific discourse and other help.
This work was funded by the Academy of Finland Centre of Excellence program (project Nos. 352925, and 336810) and Academy of Finland grant Nos. 316619 and 349594 (THEPOW). We also acknowledge funding from the European Research Council under Advanced Grant No. 101053801 (ConceptQ), and Business Finland under the Quantum Technologies Industrial project (Grant no. 2118781).
|
http://arxiv.org/abs/2409.02704v1 | 20240904134058 | Calculation of The Abundance of $^{187}$Re-$^{187}$Os Nuclear Clock Nuclides in S-process and Sensitivity Analysis of Maxwellian-Averaged Neutron Capture Cross Sections | [
"Xinyu Dong",
"Yixuan Qiu",
"Kaisu Wu"
] | nucl-th | [
"nucl-th",
"02.30.Hq, 02.60.Cb, 25.40.Lw, 25.40.Hs, 29.85.Fj"
] |
Calculation of The Abundance of ^187Re – ^187Os Nuclear Clock Nuclides in S-process and Sensitivity Analysis of Maxwellian-Averaged Neutron Capture Cross Sections
[1] Xinyu Dong, [email protected]
[2] Yixuan Qiu, yii [email protected]
[1] Kaisu Wu, [email protected]
[1]College of Mathematics and Physics, Beijing University of Chemical Technology, 15 North Third Ring East Road, Beijing, 100029, Beijing, China
[2]SEU-Monash Joint Graduate School, Southeast University, 399 Linquan Road, Suzhou, 215000, Jiangsu, China
In this paper, the network-equation calculation of the abundances of the ^187Re – ^187Os clock-related nuclides in the s-process is studied, and the sensitivities of the Maxwellian-Averaged neutron capture cross sections of each nuclide are analyzed in detail. Firstly, based on nuclear physics parameters, we give the branching s-process reaction network from ^184W to ^190Os and establish the corresponding network equations. Using a single-path s-process approximation, we obtain an analytical expression for the abundance of the seed nuclide ^183W of our branching network. Because of the stiffness of the system of network equations, we use the semi-implicit Runge-Kutta method to give the numerical solution of the network equations, and thus obtain the abundance of each nuclide related to the ^187Re – ^187Os nuclear clock in the s-process. Finally, with the numerical solution, the sensitivity analysis of the Maxwellian-Averaged neutron capture cross sections of the nuclear reactions involved in the ^187Re – ^187Os nuclear clock network equations is carried out. We find that, in the s-process, the neutron capture reaction ^184 W+n →^185W has the greatest influence on the ^187Re – ^187Os nuclear clock reaction network, and the neutron capture reaction ^186 W+n →^187W has the greatest effect on the particular nuclides ^187Re and ^187Os. Therefore, the measurements of these two Maxwellian-Averaged neutron capture cross sections deserve the attention of experimental nuclear physicists.
[MSC Classification]02.30.Hq , 02.60.Cb , 25.40.Lw , 25.40.Hs , 29.85.Fj
September 9, 2024
=====================
§ INTRODUCTION
One of the key topics of cosmological chronology is determining the age of celestial bodies <cit.>, which provides fundamental information on the formation and evolution of celestial bodies, and is one of the most important parameters in astrophysics and cosmology. Cosmological chronology proposes that the long-lived radionuclides can serve as cosmic nuclear clocks <cit.>, such as ^40K <cit.>, ^87Rb <cit.>, ^176Lu <cit.>, ^187Re <cit.>, ^232Th and ^238U <cit.>.
Among them, the abundance ratio of ^187Re to its decay nucleus, suggested by Clayton as a cosmic nuclear clock, has unique significance. Moreover, ^187Re is mainly produced by the r-process, while ^186Os and ^187Os are mainly produced by the s-process, and part of ^187Os is produced by ^187Re through β-decay. The ground state of ^187Re decays to ^187Os <cit.> with a half-life of 4.35 × 10^10 years <cit.>. Based on the above analysis, it can be seen that the reaction process is a branching s-process, and the production rates of ^186Os and ^187Os are independent of the uncertainty of the r-process. The ^187Re – ^187Os nuclear clock is also less affected by late perturbation events than the ^87Rb – ^87Sr nuclear clock. Compared with the ^40K – ^40Ar nuclear clock, the ^187Re – ^187Os nuclear clock suffers less from nuclear reactions induced by cosmic rays <cit.>. Therefore, there is less error in determining the stellar age by studying the ^187Re – ^187Os nuclear clock.
In 1982, J. M. Luck <cit.> pointed out that the half-life of ^187Re was similar to the age of the galaxy in order of magnitude and estimated the age range of the galaxy through the chemical experimental study of meteorite composition. Takahashi <cit.> and others found an uncertainty in the calibration of the nuclear clock. In 2002, Xixiang Bai <cit.> analysed the bound-state β-decay and its astrophysical significance, and proposed a calibration direction for the ^187Re – ^187Os cosmic nuclear clock, but did not carry out the actual data analysis. T. Hayakawa <cit.> considered the effect of isomeric states.
^187Re nuclei are mainly obtained from r-process products by β-decay after the end of the r-process and are located outside the main path of the s-process. The s-process nuclei ^186Os and ^187Os are not directly produced by the r-process because they are shielded by the stable nuclei ^186W and ^187Re. Therefore, the pure s-process nucleus ^186Os can be used to normalize the s-process nuclide abundances in this mass region <cit.>. Notably, the ^187Re – ^187Os nuclear clock has the advantage that it avoids the uncertainty in the initial abundances calculated by the r-process model. Also, since part of ^187Os is produced by the r-process nucleus ^187Re through β-decay, the nuclear clock can be calibrated by subtracting the s-process contributions to these nuclei from the observed ^187Re and ^187Os abundances. So it is important to perform detailed calculations of the nuclide abundances of the ^187Re – ^187Os nuclear clock in the s-process, which is one of the aims of this study.
To calculate the abundances of the ^187Re – ^187Os nuclear clock network nuclides in the s-process in detail, it is necessary to construct the related reaction network, establish the network equations, and solve the equations. The coefficients of the equations are the Maxwellian-Averaged neutron capture cross sections and decay rates of each nuclide. These data are currently obtained by nuclear physicists in experiments <cit.>. However, nuclear physics experiments usually contain experimental errors, which have an impact on subsequent calculations of the nuclide abundances. Therefore, it is necessary to analyze the sensitivity of the Maxwellian-Averaged neutron capture cross sections. In this paper, the calculation of sensitivity is similar to that of Ref. <cit.>. Using the control variable method, the cross section of one neutron capture reaction at a time is varied from -20% to 20% in steps of H = 10%. After each change of a Maxwellian-Averaged neutron capture cross section, the abundances of all nuclides inevitably change. Unlike Ref. <cit.>, we define the sensitivity as the sum of the absolute values of the changes in abundance of each nuclide after changing a Maxwellian-Averaged neutron capture cross section, multiplied by the calculation step (which is actually a 1-norm of the function), so that the effect of this neutron capture reaction on the whole network is obtained. In this way, we identify that the neutron capture reaction with the largest effect on the abundances of the ^187Re – ^187Os nuclear clock-related nuclides in the s-process is ^184 W+n →^185 W.
Subsequently, the sensitivity analysis shows that the neutron capture reaction with the greatest impact on the special nuclides ^187Re and ^187Os is ^186W+n →^187W. We recommend that experimental nuclear physicists pay attention to these two nuclear reactions.
This paper is organized as follows. In the second section, according to the nuclear physics data, the branching s-process network path is given and the network differential equations are obtained and simplified. We first find the analytical solution for the seed nuclide ^183W of the network, then obtain the numerical solution of the differential equations by using the semi-implicit fourth-order Runge-Kutta method for stiff equations, and thus obtain the abundance variation curves for each nuclide. In the third section, we carry out the sensitivity analysis of the Maxwellian-Averaged neutron capture cross sections of the nuclear reactions involved in the ^187Re – ^187Os nuclear clock network equations. The Maxwellian-Averaged neutron capture cross sections are varied from 80% to 120%, and we obtain the Maxwellian-Averaged neutron capture cross sections that have the greatest effect on the total reaction network and on the particular nuclides (^187Re and ^187Os). The summary is presented in Section 4.
§ REACTION NETWORK AND NETWORK EQUATIONS OF THE ^187RE – ^187OS NUCLEAR CLOCK IN THE S-PROCESS
The nuclear reaction network of the ^187Re – ^187Os nuclear clock in the s-process is a branching network. Along the s-process path there are unstable nuclides whose half-lives are comparable to the neutron capture time scale <cit.>. When the s-process passes through these nuclides, neutron capture and β-decay occur simultaneously <cit.>, so these nuclides are the branching points of the reaction process.
The s-process starts with ^56Fe. After a series of neutron captures and decays, ^183W is synthesized. Then, ^183W captures a neutron to synthesize ^184W, and the reaction flow then enters the nuclear reaction network of the ^187Re – ^187Os nuclear clock. We note that the half-life of ^186Re is 3.72 days, yet the typical time scale of neutron capture in the s-process is usually 10 – 100 years. So the β-decay of ^186Re occurs before neutron capture, and the reaction ^186 Re+n →^187 Re is ignored. Based on the above analysis, we give the path diagram of the nuclear reaction network of the ^187Re – ^187Os nuclear clock in the s-process.
According to the nuclear reaction network path shown in Figure <ref>, the corresponding differential equations of the branching network can be obtained as follows:
dN(^184W)/dt = λ_n(^183W)N(^183W) - λ_n(^184W)N(^184W),
dN(^185W)/dt = λ_n(^184W)N(^184W) - [λ_β^-(^185W) + λ_n(^185W)]N(^185W),
dN(^185Re)/dt = λ_β^-(^185W)N(^185W) - λ_n(^185Re)N(^185Re),
dN(^186Re)/dt = λ_n(^185Re)N(^185Re) - [λ_β^-(^186Re) + λ_ec(^186Re)]N(^186Re),
dN(^186W)/dt = λ_n(^185W)N(^185W) + λ_ec(^186Re)N(^186Re) - λ_n(^186W)N(^186W),
dN(^187W)/dt = λ_n(^186W)N(^186W) - λ_β^-(^187W)N(^187W),
dN(^186Os)/dt = λ_β^-(^186Re)N(^186Re) - λ_n(^186Os)N(^186Os),
dN(^187Re)/dt = λ_β^-(^187W)N(^187W) - [λ_β^-(^187Re) + λ_n(^187Re)]N(^187Re),
dN(^187Os)/dt = λ_n(^186Os)N(^186Os) + λ_β^-(^187Re)N(^187Re) - λ_n(^187Os)N(^187Os),
dN(^188Re)/dt = λ_n(^187Re)N(^187Re) - λ_β^-(^188Re)N(^188Re),
dN(^188Os)/dt = λ_n(^187Os)N(^187Os) + λ_β^-(^188Re)N(^188Re) - λ_n(^188Os)N(^188Os),
dN(^189Os)/dt = λ_n(^188Os)N(^188Os) - λ_n(^189Os)N(^189Os).
Where N(A) is the abundance of nuclide A; λ_n = n_n<σ v> is the neutron capture reaction rate; n_n is the number density of neutrons in the reaction process; <σ v> is the Maxwellian-averaged product of the neutron capture cross section and the relative velocity of the incident neutron and the nucleus; λ_β^- is the β^- decay rate; λ_ec is the electron capture rate in ε decay.
Similar to Ref. <cit.>, we use the variable substitution introduced by D. D. Clayton <cit.>.
τ≡∫_0^tn_n(t^')v_Tdt^'.
Meanwhile, the abundance of each nuclide is normalized to Fe <cit.> with the following transformation,
ψ(A)=σ(A)N(A)/N_0(^56Fe).
Here σ(A) is the Maxwellian-Averaged neutron capture cross section of the nuclide A. The abundance of Fe is then set to 1. Nuclides with very short half-lives (T < 1 day) are treated as extremely unstable nuclides whose abundance change rates are approximately zero, i.e., they are eliminated in steady state. The simplified differential equations are arranged as follows.
d/dτΨ=MΨ+σ(^184W)ψ(^183W)e_1.
Ψ≡[ ψ(^184W); ψ(^185Re); ψ(^186W); ψ(^186Os); ψ(^187Re); ψ(^187Os); ψ(^188Os); ψ(^189Os) ],e_1≡[ 1; 0; 0; 0; 0; 0; 0; 0 ].
M≡[ M_1 M_2 ].
The submatrices of the block matrix in Eq. (<ref>) are shown below.
M_1 = [ -σ(^184W), 0, 0;
σ(^185Re)[λ_β^-(^185W)/(n_n v_T)]/[σ(^185W)+λ_β^-(^185W)/(n_n v_T)], -σ(^185Re), 0;
σ(^185W)σ(^186W)/[σ(^185W)+λ_β^-(^185W)/(n_n v_T)], λ_ec(^186Re)σ(^186W)/[λ_ec(^186Re)+λ_β^-(^186Re)], -σ(^186W);
0, λ_β^-(^186Re)σ(^186Os)/[λ_ec(^186Re)+λ_β^-(^186Re)], 0;
0, 0, σ(^187Re);
0, 0, 0;
0, 0, 0;
0, 0, 0 ].
M_2 = [ 0, 0, 0, 0, 0;
0, 0, 0, 0, 0;
0, 0, 0, 0, 0;
-σ(^186Os), 0, 0, 0, 0;
0, -[σ(^187Re)+λ_β^-(^187Re)/(n_n v_T)], 0, 0, 0;
σ(^187Os), [λ_β^-(^187Re)/(n_n v_T)]σ(^187Os)/σ(^187Re), -σ(^187Os), 0, 0;
0, σ(^188Os), σ(^188Os), -σ(^188Os), 0;
0, 0, 0, σ(^189Os), -σ(^189Os) ].
From the expression of Eq. (<ref>), it is easy to see that, to solve the network equations, the abundance of the seed nuclide ^183W of the network must be obtained first. Since the half-life of ^183W is 1.9×10^18 years, we can consider it a stable nuclide in the s-process. We simplify the s-process path from ^56Fe to ^183W to a no-branching s-process. This is because, on the s-process path, the abundance flow of the branching s-process network must converge back to the main s-process path after passing through neutron capture and β decay. Therefore, when calculating the abundance of a certain stable nuclide, we can approximate it by the no-branching s-process (classical s-process), which is the main reason for the classical s-process analytical solution given by D. D. Clayton et al. <cit.>. So, we use Clayton's method and give the expression for the abundance function of ^183W similar to Ref. <cit.>. The expressions for ψ(^183W) are as follows:
m_k = (∑_i=1^k 1/σ_i)^2 / (∑_i=1^k 1/σ_i^2),
λ_k = (∑_i=1^k 1/σ_i) / (∑_i=1^k 1/σ_i^2).
ψ(^183W) = λ(λτ)^(m-1) e^(-λτ) / Γ(m).
Here, m_k and λ_k are determined by all the Maxwellian-Averaged neutron capture cross sections involved in the s-process from ^56Fe to ^183W.
Similar to Ref. <cit.>, Jing Pan et al. <cit.> give the analytic solution of the network system Eq. (<ref>) by the method of variation of constants. Unlike the above references, we will give the numerical solution of Eq. (<ref>) in this paper.
When the equations are solved by direct integration with the variation-of-constants method, constants appear in the denominators after the integration, namely the differences between the Maxwellian-Averaged neutron capture cross sections of the nuclides. This presents an obstacle to the sensitivity analysis.
Table <ref> shows the Maxwellian-Averaged neutron capture cross sections of the nuclides involved in the reaction network <cit.>. We notice that there are nuclides with similar Maxwellian-Averaged neutron capture cross sections, such as ^185Re and ^186Re, ^186Os and ^188Os, ^187Re and ^189Os, and so on. When performing the sensitivity analysis, the cross sections need to be adjusted up and down, which can make some of them extremely similar or even equal; since the differences between these cross sections appear in the denominators of the expression for the analytical solution, this poses difficulties for the sensitivity analysis. For this reason, we solve Eq. (<ref>) with numerical methods.
Usually, network equations are stiff equations <cit.>, so we use the fourth-order semi-implicit Runge-Kutta method <cit.> to calculate the numerical solutions in the following format.
y_{n+1} = y_n + ∑_i=1^4 w_i K_i,
K_1 = h[f(y_n) + b_1 J(y_n) K_1],
K_2 = h[f(y_n + β_21 K_1) + b_2 J(y_n + η_21 K_1) K_2],
K_3 = h[f(y_n + β_31 K_1 + β_32 K_2) + b_3 J(y_n + η_31 K_1 + η_32 K_2) K_3],
K_4 = h[f(y_n + β_41 K_1 + β_42 K_2 + β_43 K_3) + b_4 J(y_n + η_41 K_1 + η_42 K_2 + η_43 K_3) K_4].
Where J=∂ f/∂ y is the Jacobian matrix and h is the step size.
In this way, we obtain the numerical solution of the network Eq. (<ref>). The calculated results are shown in Figure <ref>, where the values of the vertical coordinate abundances are taken as logarithms (Similar to Figures <ref> and <ref>).
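As a hedged illustration of this step, the Python sketch below assembles the matrix M following the structure of M_1 and M_2 above and integrates the network with SciPy's implicit Radau method, used here as an alternative stiff integrator to the semi-implicit Runge-Kutta scheme of this paper; all cross sections, branching ratios, and the Clayton parameters (m, λ) are illustrative placeholders, not the values used in this work.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import gamma

# Placeholder Maxwellian-averaged cross sections (arbitrary units); the order of
# the state vector is 184W, 185Re, 186W, 186Os, 187Re, 187Os, 188Os, 189Os.
s184W, s185Re, s186W, s186Os = 0.22, 1.5, 0.23, 0.41
s187Re, s187Os, s188Os, s189Os = 1.2, 0.85, 0.29, 1.0
s185W = 0.7                 # cross section of the branching nuclide 185W, placeholder
r185 = 0.3                  # lambda_beta(185W)/(n_n v_T), placeholder
r187 = 1e-6                 # lambda_beta(187Re)/(n_n v_T), placeholder (very long lived)
f_beta, f_ec = 0.93, 0.07   # beta-decay / electron-capture branching of 186Re, placeholder

den = s185W + r185
M = np.zeros((8, 8))
M[0, 0] = -s184W
M[1, 0], M[1, 1] = s185Re * r185 / den, -s185Re
M[2, 0], M[2, 1], M[2, 2] = s186W * s185W / den, s186W * f_ec, -s186W
M[3, 1], M[3, 3] = s186Os * f_beta, -s186Os
M[4, 2], M[4, 4] = s187Re, -(s187Re + r187)
M[5, 3], M[5, 4], M[5, 5] = s187Os, r187 * s187Os / s187Re, -s187Os
M[6, 4], M[6, 5], M[6, 6] = s188Os, s188Os, -s188Os
M[7, 6], M[7, 7] = s189Os, -s189Os

m_cl, lam_cl = 4.0, 1.5     # placeholder Clayton parameters for psi(183W)
def psi183(tau):
    # classical s-process seed abundance, psi = lam*(lam*tau)^(m-1)*exp(-lam*tau)/Gamma(m)
    return lam_cl * (lam_cl * tau) ** (m_cl - 1) * np.exp(-lam_cl * tau) / gamma(m_cl)

def rhs(tau, psi):
    # d(Psi)/d(tau) = M Psi + sigma(184W) psi(183W) e1
    src = np.zeros(8)
    src[0] = s184W * psi183(tau)
    return M @ psi + src

sol = solve_ivp(rhs, (0.0, 4.5), np.zeros(8), method="Radau", rtol=1e-8, atol=1e-12)
print(sol.y[:, -1])         # psi(A) of the eight network nuclides at tau = 4.5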
§ SENSITIVITY ANALYSIS OF MAXWELLIAN-AVERAGED NEUTRON CAPTURE CROSS SECTION
Eight Maxwellian-Averaged neutron capture cross sections are involved in the simplified differential equations (Eq. (<ref>)): σ(^184W), σ(^185Re),
σ(^186W), σ(^186Os), σ(^187Os), σ(^187Re), σ(^188Os), σ(^189Os). At present, these data are measured by nuclear physics experimenters, but experiments usually have experimental errors. Therefore, we need to carry out sensitivity analysis on the Maxwellian-Averaged neutron capture cross section of the nucleus. It should be noted that the sensitivity analysis in this paper is indeed different from the sensitivity analysis in the traditional sense. Generally, sensitivity analysis is used to study and analyze the sensitivity of a system (or model) to changes in the state or output of the system parameters or conditions. Specifically, it is to change a parameter in the formula of the system (or model) and analyze the degree of change caused by the output of the system (or model), so as to judge the robustness of the system (or model) <cit.>. However, we calculated the changes in the abundance of all nuclides after the neutron capture cross section parameters were changed, and used this change to identify the importance of nuclear reactions to the ^187Re – ^187Os network. Therefore, in this paper, the sensitivity of a nuclear reaction cross section is defined as the sum of all the changes in nuclide abundance caused by the change of the cross section.
We change the Maxwellian-Averaged neutron capture cross section of each nucleus from 80% to 120% in steps of H = 10%, substitute it into the original Eq. (<ref>), and obtain new results. The solution of the equations necessarily changes with each change in the neutron capture cross section data. We subtract the numerical solution obtained after the change from the original, unchanged numerical solution, and define the sensitivity as follows.
D=∑_i=1^8∑_j=1^4× 10^4|ψ_ij^*-ψ_ij|h .
Where ψ_ij^* is the numerical solution after changing the Maxwellian-Averaged neutron capture cross section and ψ_ij is the original numerical solution. The upper limit of the inner summation represents the number of computational nodes for the numerical solution of the equations. The products of the absolute values of the differences between the function values at all calculation nodes and the step are summed, which is essentially the 1-norm of the difference of the two functions. In this paper, the value of τ ranges from 0.5 to 4.5 in the calculation of the integrals. The reason for starting from 0.5 is that the abundance values of all nuclides below 0.5 are very small and contribute little to the integrals. The integration step is set as h = 0.0001, so the number of nodes is 4 × 10^4. Eq. (<ref>) involves two summations. The inner summation gives the abundance change of a certain nuclide, and the outer summation sums the abundance changes of all eight nuclides in the ^187Re – ^187Os network. Therefore, Eq. (<ref>) represents the total abundance change of the whole ^187Re – ^187Os network caused by changing the cross section of one neutron capture reaction, which gives the total effect on the entire reaction network, as shown in Figure <ref>.
Here, it should be pointed out that the number of nodes is determined by the step of the integral calculation. When the step is less than 0.001, the integral has converged. According to numerical integration theory, the error of the integral is then less than 10^-3. If higher accuracy is desired, the step should be smaller. In this paper, the step is h = 0.0001.
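To make the procedure explicit, the hedged Python sketch below implements the sensitivity measure D: one cross section at a time is scaled to 80%, 90%, 110%, and 120% of its nominal value, the network is re-solved, and the absolute abundance differences are summed over all nodes and over the selected nuclides. The solve_network function is only a dummy stand-in (so that the sketch runs on its own) for the stiff network solution of the previous section, the cross-section values are placeholders, and summing D over the four scalings is merely one possible way of aggregating them.

import numpy as np

TAU = np.linspace(0.5, 4.5, 40001)   # integration nodes, step h = 1e-4
H = TAU[1] - TAU[0]

def solve_network(sigmas):
    # Dummy stand-in: returns psi(A) on TAU for the 8 nuclides, shape (8, len(TAU)).
    # In practice this would be the stiff-ODE solution of the network equations.
    return np.exp(-np.outer(sigmas, TAU - 0.5)) / sigmas[:, None]

def sensitivity_D(sigmas, k, factor, rows):
    # D for scaling cross section k by `factor`, restricted to the nuclide `rows`.
    psi_ref = solve_network(sigmas)
    pert = sigmas.copy()
    pert[k] *= factor
    psi_new = solve_network(pert)
    return np.sum(np.abs(psi_new[rows] - psi_ref[rows])) * H

sigmas = np.array([0.22, 1.5, 0.23, 0.41, 1.2, 0.85, 0.29, 1.0])   # placeholders
labels = ["184W", "185Re", "186W", "186Os", "187Re", "187Os", "188Os", "189Os"]
network_rows = np.arange(8)          # whole network
reos_rows = np.array([4, 5])         # only 187Re and 187Os
for k, lab in enumerate(labels):
    D_net = sum(sensitivity_D(sigmas, k, f, network_rows) for f in (0.8, 0.9, 1.1, 1.2))
    D_reo = sum(sensitivity_D(sigmas, k, f, reos_rows) for f in (0.8, 0.9, 1.1, 1.2))
    print(f"sigma({lab}): D_network = {D_net:.3e}, D_187Re/187Os = {D_reo:.3e}")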
In this way, we can rank the Maxwellian-Averaged neutron capture cross sections by their influence on the total ^187Re – ^187Os nuclear clock reaction network, in decreasing order: σ(^184W), σ(^186W), σ(^188Os), σ(^187Re), σ(^189Os), σ(^186Os), σ(^185Re), σ(^187Os). Furthermore, the neutron capture reaction that has the greatest impact on the total ^187Re – ^187Os nuclear clock reaction network is ^184 W+n →^185 W.
In order to study the ^187Re – ^187Os nuclear clock problem better, we further analyze the sensitivity of the abundances of the special nuclides (^187Re and ^187Os) in the nuclear clock, and calculate Eq. (<ref>) again. However, it should be noted that the upper limit of the outer summation is now 2, indicating that only the two nuclides (^187Re and ^187Os) are summed over. The results are shown in Figure <ref>.
Similarly, we obtain the ranking of the Maxwellian-Averaged neutron capture cross sections affecting the special nuclides (^187Re and ^187Os), in decreasing order of influence: σ(^186W), σ(^184W), σ(^187Re), σ(^187Os), σ(^186Os), σ(^185Re), σ(^188Os), σ(^189Os). Thus, the neutron capture reaction that has the greatest effect on the special nuclides (^187Re and ^187Os) is ^186 W+n →^187W.
§ SUMMARY
The ^187Re – ^187Os nuclear clock is an important cosmic nuclear clock for determining the age of the Galaxy. In this paper, we study the network-equation calculation of the abundances of the nuclides associated with the ^187Re – ^187Os nuclear clock in the s-process and the sensitivity analysis of the Maxwellian-Averaged neutron capture cross section of each nuclide. Due to the stiffness of the network equations, we give numerical solutions of the network equations by the semi-implicit Runge-Kutta method and perform detailed calculations. Our results are useful for the calibration of the nuclear clock.
In particular, using our numerical solutions and reaction flow of nuclides, this paper presents a detailed sensitivity analysis of the Maxwellian-Averaged neutron capture cross section for each nuclide of the ^187Re – ^187Os nuclear clock reaction network in the s-process, and the results of the sensitivity analysis show that in the s-process, the neutron capture reaction with the greatest impact on the total branching reaction network of the ^187Re – ^187Os nuclear clock is ^184 W+n →^185 W, and the neutron capture reaction with the greatest influence on the two important nuclides ^187Re and ^187Os is ^186 W+n →^187W. These results have positive significance for experimental nuclear physics, and we suggest that experimental nuclear physicists pay close attention to the measurements of these two Maxwellian-Averaged neutron capture cross sections.
|
http://arxiv.org/abs/2409.03697v1 | 20240905165220 | Classification and Prediction of Heart Diseases using Machine Learning Algorithms | [
"Akua Sekyiwaa Osei-Nkwantabisa",
"Redeemer Ntumy"
] | cs.LG | [
"cs.LG"
] |
Classification and Prediction of Heart Diseases using Machine Learning Algorithms
Akua Sekyiwaa Osei-Nkwantabisa ^1,2 and Redeemer Ntumy ^3
^1 School of Mathematical and Statistical Sciences,
College of Sciences, University of Texas Rio Grande Valley, USA.
^2Department of Statistics and Actuarial Science,
College of Basic and Applied Sciences, University of Ghana, Ghana.
^3 Department of Computer Science,
College of Basic and Applied Sciences, University of Ghana, Ghana.
==========================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Heart disease is a serious worldwide health issue because it claims the lives of many people who might have been treated if the disease had been identified earlier. The leading cause of death in the world is cardiovascular disease, usually referred to as heart disease. Creating reliable, effective, and precise predictions for these diseases is one of the biggest issues facing the medical world today. Although there are tools for predicting heart diseases, they are either expensive or challenging to apply for determining a patient's risk. Identifying the best classifier for predicting and detecting heart disease was the aim of this research. This experiment examined a range of machine learning approaches, including Logistic Regression, K-Nearest Neighbor, Support Vector Machine, and Artificial Neural Networks, to determine which machine learning algorithm was most effective at predicting heart diseases. The data set for this study was obtained from the UCI heart disease repository, one of the most widely used data sets for this purpose. The K-Nearest Neighbor technique was shown to be the most effective machine learning algorithm for determining whether a patient has heart disease. It will be beneficial to conduct further studies on the application of additional machine learning algorithms for heart disease prediction.
Keywords: Machine Learning, Logistic Regression, Heart Disease, Cardiovascular Disease, K-Nearest Neighbor, Support Vector Machine and Artificial Neural Networks
§ INTRODUCTION
The heart and blood arteries are affected by a group of ailments known as cardiovascular diseases (CVDs) and cardiac disorders. These conditions include deep vein thrombosis, coronary heart disease, peripheral arterial disease, and rheumatic heart disease. One of our body's most important organs is the heart. It is a muscular organ found directly beneath and just to the left of the breastbone. The World Health Organization estimates that CVDs account for 17.9 million deaths annually, making them the leading cause of death worldwide. The majority of these fatalities, 85% of which are caused by heart attacks and strokes, occur in low- and middle-income nations. Furthermore, one-third of these premature deaths occur in persons under the age of 70. Poor diet, inactivity, smoking, and binge drinking are a few risk factors for heart disease and stroke. These behavioural risk factors can manifest physically as obesity, overweight, high blood lipid levels, high blood pressure, and high blood sugar. In order to take preventative actions and avoid the damage that these disorders can cause, early identification of cardiovascular diseases is essential. Underlying blood vessel issues may go untreated for a long time. The condition typically manifests first as heart attacks and strokes. Heart attack symptoms include pain or discomfort in the middle of the chest, in the arms, left shoulder, elbow, jaw, or back. The person may also have back or jaw discomfort, nausea or vomiting, lightheadedness, or fainting. Common stroke symptoms include trouble speaking or understanding speech, disorientation, difficulty seeing with one or both eyes, numbness on one side of the face, arm, or leg, difficulty walking, and a severe headache with no known reason. Due to a lack of basic healthcare facilities for early detection and treatment, cardiovascular diseases often have higher death rates in low- and middle-income nations. People with CVDs have limited access to adequate, equitable healthcare services that can meet their needs in low- and middle-income countries. Because of the delayed identification of the disease, people with CVDs often pass away early. It has been demonstrated that lowering salt intake, increasing fruit and vegetable consumption, engaging in regular exercise, quitting smoking, and abstaining from excessive alcohol consumption all reduce the risk of cardiovascular disease. Additionally, recognizing people who are most at risk for CVDs and ensuring they receive the right care will help avoid early mortality. All primary healthcare facilities must have access to necessary medications and fundamental medical technology to guarantee that individuals in need receive treatment and counseling <cit.>.
Machine learning is becoming more and more popular as its value in different industries increases daily. The industries that apply machine learning include manufacturing, retail, healthcare, life sciences, travel and hospitality, financial services, energy, feedstock, and utilities. The healthcare sector is one of the most significant of these industries <cit.>. The area of machine learning is expanding quickly as a result of the massive volumes of data being collected. Many big data experts predict that the volume of data generated will continue to rise sharply in the future. The International Data Corporation (IDC) projects that the world's datasphere will grow to 175 zettabytes by 2025 in its Data Age 2025 research. To put this into context, if the data generated in 2013 were stored on 128 GB iPads, the stack would have covered two-thirds of the distance between the Earth and the Moon; by 2025, this stack would be 26 times longer. Given this, it is crucial to understand data and obtain insights from it. Compared to human doctors, machine learning can be more accurate and quicker at diagnosis. In the face of uncertainty, machine learning algorithms create models that make predictions based on data. These techniques train a model to produce accurate predictions using known sets of inputs and outputs. The biggest issue facing the medical industry today is making accurate and dependable predictions for disease diagnosis and treatment. Although there are techniques for forecasting cardiovascular diseases, they are usually expensive or ineffective at determining an individual's risk. Early detection of CVDs can considerably reduce the chance of fatalities and other issues associated with these conditions. Using data mining and machine learning approaches, automation can help solve the issue of low prediction accuracy. Data mining searches through massive data sets using a range of computational technologies in order to identify patterns and predict outcomes. In order to address the issue of poor CVD prediction accuracy, the goal of this project is to design and put into practice a system for heart disease detection and prediction utilizing machine learning techniques. Finding the best classifier for predicting and diagnosing cardiovascular diseases is the major goal of this study. Based on factors including gender, age, cholesterol levels, and other medical traits, the goal is to ascertain whether a patient has a cardiovascular disease. The system looks for and extracts helpful insights from prior data sets to aid in the diagnosis and prevention of heart disease.
§ LITERATURE REVIEW
Machine learning is a technique for automatically identifying patterns in data. Supervised, unsupervised, and reinforcement learning are the three main categories of machine learning. A technique called supervised learning, commonly referred to as supervised machine learning, uses labeled training data to learn how to predict results for new data.Support vector machines, decision trees, logistic regression, and naive bayes classifiers are a few supervised machine learning techniques. It is essential to identify and anticipate heart disorders early because doing so can lower mortality rates and overall consequences. As a result, various studies on the topic of heart disease have been carried out using data mining and machine learning approaches. In order to improve the precision of the prediction of cardiovascular disease, <cit.> suggested a unique method that makes an effort to identify crucial variables by utilizing machine learning techniques. Using the UCI heart disease data set, the accuracy of the recommended strategy was compared to various machine learning techniques. A unique technique known as Hybrid Random Forest with Linear Model (HRFLM) was developed for this investigation. In their hybrid HRFLM technique, the authors combined the traits of Random Forest (RF) and Linear Method (LM). HRFLM surpassed Decision Tree (DT), Random Forest (RF), and Linear Method (LM) in terms of the number of attributes and prediction error, demonstrating that it is quite accurate in predicting heart disease. In order to improve the accuracy of heart disease prediction and gain a deeper understanding of the key features, new feature-selection strategies can be developed, claim <cit.>. Several machine learning techniques were employed by <cit.> in their study to identify and forecast cardiovascular disease. The algorithms' performance was then evaluated using a variety of metrics, such as classification accuracy, sensitivity, specificity, and F measure, on the heart disease data set. Nine (9) machine learning classifiers were applied to the final data set both before and after the hyper parameter tuning. These included the AdaBoost Classifier (AB), the Logistic Regression (LR), the Extra Trees Classifier (ET), the Multinomial Naive Bayes (MNB), the Decision Trees (CART), the Support Vector Machine (SVM), the Linear Discriminant Analysis (LDA), the Random Forest (RF), and the XGBoost (XGB). Online repositories such as the Cleveland heart disease data set, Z-Alizadeh Sani data set, Statlog Heart, Hungarian Long Beach VA, and Kaggle Framingham data set were the sources of the data sets used by <cit.>. In conclusion, a variety of classifiers were employed to forecast the development of heart disease, with the Support Vector Machine emerging as the most reliable. In this study article by <cit.>, some disadvantages include the fact that the functioning of the earlier proposed systems is significantly lowered if the size of the data set is raised. Additionally, the classifier prediction accuracy increases as the size of the data set grows, but beyond a certain size, the accuracy of the classifier prediction decreases.<cit.> created a machine learning-based diagnosis method for heart disease prediction using a data set of heart disease. Cross-validation, three feature selection techniques, seven well-known machine learning algorithms, and metrics for classifier performance assessment such as classification accuracy, specificity, sensitivity, Mathews' correlation coefficient, and execution time were all used. 
This study employed the Cleveland heart disease data collection from 2016, which is popular among researchers. A hybrid intelligent machine learning-based prediction system was proposed for the diagnosis of heart disease. The seven well-known classifiers Logistic Regression, K Nearest Neighbor, Artificial Neural Network (ANN), Support Vector Machine (SVM), Naive Bayes, Decision Tree, and Random Forest were used to select the critical features using the three feature selection algorithms Relief, Minimal-Redundancy-Maximum-Relevance Feature Selection Algorithm (MRMR), and Least Absolute Shrinkage and Selection Operator (LASSO). Logistic regression with 10-fold cross-validation demonstrated the best accuracy when it was selected by the FS algorithm Relief. However, in terms of specificity, the MRMR algorithm surpassed SVM (linear) with feature selection. The authors concluded that more research is required <cit.>, to improve the efficiency of these predictive classifiers for the diagnosis of heart disease by using various feature selection algorithms and optimization strategies.In their work <cit.> established a technique for detecting the existence of heart disease using clinical data collected from subjects. The main objective was to create a predictive model for heart disease using a variety of characteristics. Different machine learning classification strategies were also tested and assessed using traditional performance metrics like accuracy in order to compare various machine learning algorithms. For this experiment, data from the UCI Heart Disease Data Collection were used. Machine learning methods such Random Forest, Support Vector Machine, K-Nearest Neighbor, Decision Tree, Artificial Neural Networks, Logistic Regression, and Naive Bayes were used to compare how well they predicted heart diseases.The Random Forest algorithm was shown to be the most reliable strategy for predicting cardiac illness, with an accuracy rate of 90.16 percent, according to <cit.>. In their conclusion, <cit.> recommended using large data sets in trials in the future to increase the reliability of their findings and help doctors better anticipate heart disease.From these related works, it shows that machine learning plays a critical role in the classification and prediction of heart disease.
§ DATA AND METHODS
This suggested model's data came from the UCI Machine Learning Repository. The Heart Disease Data Set was employed, and it has been a popular global resource for students, instructors, and researchers looking for machine learning data sets. David Aha and graduate students at UC Irvine produced the data set in 1987. Several datasets from different institutions make up the heart disease data set, including the Cleveland, Hungarian, Switzerland, Long Beach VA and Statlog (Heart) data sets. The Heart Disease Data Set has 76 attributes. Only 12 of these attributes, including the predicted attribute, are utilized in this investigation. The target field in the data set indicates whether a patient has heart disease or not, with 0 signifying no disease and 1 signifying the presence of a disease. The shape of the data set is (1190, 12).
The machine learning algorithms used were K-Nearest Neighbors, Support Vector Machine, Logistic Regression, and an Artificial Neural Network (TensorFlow). Before analysis, the data set was split into training (80%, n = 952) and testing (20%, n = 238) subsets. The training subset was used to build and train the machine learning models, while the testing subset was used to evaluate their performance. Evaluating the performance of a machine learning model is one of the crucial steps in its development; evaluation measures, sometimes referred to as performance metrics, are employed to assess how well the model processed the supplied data. The five metrics used were the confusion matrix, precision, recall, F1-score, and classification accuracy. Hyperparameter tuning and optimization were then performed on the models to help increase accuracy and determine the best model for predicting heart disease. GridSearchCV, a common method for adjusting hyperparameters, was used in this project: for each combination of the provided hyperparameter values, a model is created and evaluated, and the configuration that yields the best results is chosen <cit.>.
§ RESULTS AND DISCUSSION
The results of data analysis using several machine learning models (K-Nearest Neighbors, Support Vector Machine, Logistic Regression, and an Artificial Neural Network built with TensorFlow) are presented in this section. The performance of these models in forecasting heart disease from the UCI data was assessed, and hyperparameter tuning was carried out to improve their ability to accurately predict heart disease in patients. 80 percent of the data was used for training and 20 percent for testing. The data contained no missing values, and Python was the programming language used throughout the study.
§.§ Graphical Summary of UCI Data set
§.§ Quantitative Variables
From the figure above, age and maximum heart rate are approximately normally distributed and can therefore be used directly. Resting blood pressure and cholesterol are both skewed to the right and may require transformation for improved modeling results. The skewness of oldpeak indicates that it must be handled carefully so as not to introduce bias.
§.§ Qualitative Variables
It is evident that the class imbalances here can affect the model by potentially biasing it towards the majority classes, which may necessitate the use of resampling or weighting methods to ensure fair learning and accurate predictions across all categories.
§.§ Correlation Matrix
The correlations imply that age, resting blood pressure, and maximum heart rate may give the model more predictive power for heart disease than cholesterol. These stronger correlations should guide the modeling process, while other methodologies such as feature engineering and interaction variables may be considered to capture these relationships properly.
§.§ Heart Disease Frequency for Sex
Figure 4 displays the frequency of heart disease by sex. The chart shows that the data are heavily dominated by male subjects; this imbalance is likely to negatively impact the model's performance.
§.§ Performance of Machine Learning Algorithms
Figure 5 shows the output of our baseline models before tuning. The model with the highest accuracy, almost 83%, was Logistic Regression. The K-Nearest Neighbors and Support Vector Machine algorithms came next, with accuracies of 72% and 73%, respectively, while the TensorFlow model achieved 65%.
The performance was significantly enhanced after model tuning. The Logistic Regression, Support Vector Machine, and K-Nearest Neighbors algorithms were tuned using GridSearchCV.
The TensorFlow model, on the other hand, had some strategic adjustments made. These included L2 regularization and LeakyReLU activations to prevent overfitting and enhance learning dynamics. The dropout rate was reduced from 0.5 to 0.2 and the Adam optimizer learning rate was fine-tuned to 0.0005. More patience was added for early stopping (20 epochs) and a ReduceLROnPlateau callback was inserted to adjust the learning rate based on validation loss. The batch size was set to 20, improving gradient estimates as well as convergence speed.
Figure 6 shows that, after hyperparameter adjustment, all algorithms reach an accuracy of over 80% except the Artificial Neural Network. The Support Vector Machine (SVM), despite having the lowest accuracy of the three classical models at 81%, still performs well for predicting cardiac disease in patients. Logistic Regression reaches an accuracy of 86%, the K-Nearest Neighbors algorithm 87%, and the Artificial Neural Network (ANN) 74%.
§.§ Performance Evaluations of Machine Learning Algorithms
The models were evaluated according to four measures: accuracy, precision, recall, and F1-score.
Table 4.2 and Figure 7 both show that the K-Nearest Neighbors algorithm delivered the best results, with an accuracy of 87%, precision of 86%, recall of 90%, and F1-score of 88%.
The weakest model, the TensorFlow ANN, had an accuracy of 74%, precision of 78%, recall of 74%, and F1-score of 76%.
§.§ Confusion Matrix
A useful tool for providing more insight into how well a model predicts individual classes is the confusion matrix. The confusion matrices for the models are shown below.
The comparison shows that KNN performs better in terms of true-positive identification, while Logistic Regression gives a more balanced performance across both classes. The ANN may need further optimization to reach similar performance.
§ CONCLUSION
Several important observations can be made about the predictors of heart disease. Age, resting blood pressure, and maximum heart rate have moderate correlations with the presence of heart disease, which indicates their importance as predictors. Cholesterol, on the other hand, has weak correlations with the other variables, suggesting that it may have little predictive power in the model. It is important to note that high cholesterol remains a significant clinical risk factor for heart disease despite the weak correlations found in this data set: the Mayo Clinic notes that elevated low-density lipoprotein (LDL) cholesterol gradually causes plaques in the arteries, which can lead to chest pain, heart attacks, and strokes <cit.>. It is therefore crucial to take external clinical knowledge into account and to address potential data problems in order to properly understand the actual relationship between cholesterol and heart disease risk.
Performance evaluations of the different models reveal that Logistic Regression and K-Nearest Neighbors (KNN) showed strong overall performance, with KNN especially effective in identifying the positive class. The Support Vector Machine (SVM) also showed balanced performance but produced more false positives and false negatives than Logistic Regression and KNN. The Artificial Neural Network (ANN) was the least effective performer and might need further fine-tuning or additional data to improve its accuracy. Moreover, the data set contains more cases of heart disease than non-cases, making it imbalanced, which negatively affects the models' results.
§ RECOMMENDATIONS
Further studies may wish to explore log or Box-Cox transformations to normalise variables such as resting blood pressure, cholesterol, and oldpeak in order to tackle their skewness. Techniques like SMOTE (Synthetic Minority Over-sampling Technique) can be employed to handle the class imbalances.
To improve model performance, the amount of data used can be increased and a wider range of hyperparameters explored. Ensemble methods such as Random Forests or Gradient Boosting can also be included for better performance.
Feature selection techniques like Recursive Feature Elimination (RFE) will help identify and retain the features with the most impact.
Metrics such as the ROC-AUC score should be employed to assess model performance beyond the metrics used in this study.
§.§ Data Availability
The data used to support the findings of this study are available on Kaggle: <https://www.kaggle.com/datasets/sid321axn/heart-statlog-cleveland-hungary-final> (Heart Disease Dataset on Kaggle).
§.§ Conflict of Interest
The authors declare that there are no conflicts of interest.
§.§ Acknowledgment
The authors thank the Department of Actuarial Science and Statistics and the Department of Computer Science at the University of Ghana, and the School of Mathematical and Statistical Sciences at the University of Texas Rio Grande Valley.
akhila2022prediction
Akhila, M., Mahalakshmi, N., and Niriksha, N., "Prediction of Heart Disease and Diabetes using Machine Learning," International Journal of Innovative Technology and Research, vol. 16, 2022.

haq2018hybrid
Haq, A. U., Li, J. P., Memon, M. H., Nazir, S., and Sun, R., "A hybrid intelligent system framework for the prediction of heart disease using machine learning algorithms," Mobile Information Systems, 2018.

jordan2017hyperparameter
Jordan, J., "Hyperparameter Tuning," Jeremy Jordan's Blog, November 2, 2017. Available at: <https://www.jeremyjordan.me/hyperparameter-tuning/>

khvoynitskaya2020future
Khvoynitskaya, S., "The future of big data: 5 predictions from experts for 2020-2025," Itransition, January 30, 2020. Available at: <https://www.itransition.com/blog/the-future-of-big-data>

mayoclinic2023cholesterol
Mayo Clinic, "High cholesterol - Symptoms and causes," 2023. Available at: <https://www.mayoclinic.org/diseases-conditions/high-blood-cholesterol/symptoms-causes/syc-20350800>

mohan2019effective
Mohan, S., Thirumalai, C., and Srivastava, G., "Effective heart disease prediction using hybrid machine learning techniques," IEEE Access, vol. 7, pp. 81542–81554, 2019.

saboor2022method
Saboor, A., Usman, M., Ali, S., Samad, A., Abrar, M., and Ullah, N., "A Method for Improving Prediction of Human Heart Disease Using Machine Learning Algorithms," Mobile Information Systems, 2022.

vardhan2022heart
Vardhan, G. H., Reddy, N. S., and Umamaheswari, K. M., "Heart disease prediction using machine learning," International Journal of Health Sciences, pp. 7804–7813, 2022.

who2021cardiovascular
World Health Organisation (WHO), "Cardiovascular Diseases," June 11, 2021. Available at: <https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds)>
http://arxiv.org/abs/2409.02294v1 | 20240903211525 | Accelerating Fortran Codes: A Method for Integrating Coarray Fortran with CUDA Fortran and OpenMP | ["James McKevitt", "Eduard I. Vorobyov", "Igor Kulikov"] | astro-ph.IM | ["astro-ph.IM", "astro-ph.SR", "cs.DC", "cs.PL"]
James McKevitt^1,2 ([email protected]), Eduard I. Vorobyov^1, Igor Kulikov^3
^1 University of Vienna, Department of Astrophysics, Türkenschanzstrasse 17, A-1180 Vienna, Austria
^2 University College London, Mullard Space Science Laboratory, Holmbury St Mary, Dorking, RH5 6NT, Surrey, United Kingdom
^3 Institute of Computational Mathematics and Mathematical Geophysics SB RAS, Lavrentieva ave. 6, 630090 Novosibirsk, Russia
§ ABSTRACT
Fortran's prominence in scientific computing requires strategies to ensure both that legacy codes are efficient on high-performance computing systems, and that the language remains attractive for the development of new high-performance codes. Coarray Fortran (CAF), part of the Fortran 2008 standard introduced for parallel programming, facilitates distributed memory parallelism with a syntax familiar to Fortran programmers, simplifying the transition from single-processor to multi-processor coding. This research focuses on innovating and refining a parallel programming methodology that fuses the strengths of Intel Coarray Fortran, Nvidia CUDA Fortran, and OpenMP for distributed memory parallelism, high-speed GPU acceleration and shared memory parallelism respectively. We consider the management of pageable and pinned memory, CPU-GPU affinity in NUMA multiprocessors, and robust compiler interfacing with speed optimisation. We demonstrate our method through its application to a parallelised Poisson solver and compare the methodology, implementation, and scaling performance to that of the Message Passing Interface (MPI), finding CAF offers similar speeds with easier implementation. For new codes, this approach offers a faster route to optimised parallel computing. For legacy codes, it eases the transition to parallel computing, allowing their transformation into scalable, high-performance computing applications without the need for extensive re-design or additional syntax.
Keywords: Coarray Fortran (CAF), CUDA Fortran, OpenMP, MPI
§ INTRODUCTION
Across the many fields which make use of scientific computing, the enduring importance of Fortran-written codes is undeniable. Most notably, Intel's compiler remains a popular choice for these Fortran codes, primarily due to its robust performance and reliable support. However, with the exponential growth in computational demands, there is an imperative need to enhance the speed and efficiency of these codes. Shared memory parallelism techniques, like OpenMP, though useful and easy to implement, often fall short in meeting these demands. Hence, turning to distributed memory parallelism and graphics processing units (GPUs) becomes essential, given their capacity to exploit modern and computationally efficient hardware <cit.>.
To optimise the use of GPUs in general computing tasks (general purpose computing on GPUs; GPGPU; <cit.>), Nvidia's CUDA, a parallel computing platform and programming model, is often employed <cit.>. Fortran users can leverage CUDA Fortran, an adapted language also provided by Nvidia, which offers all the speed advantages of CUDA, but with the familiar Fortran syntax <cit.>. The true potential of CUDA Fortran is unlocked when applied to tasks that involve heavy parallelisation like Fast Fourier Transform (FFT) operations <cit.>, often a common and performance-critical component in astrophysics simulations and image or data processing <cit.>.
For distributed memory parallelism the Message Passing Interface (MPI) is commonly used <cit.>. However, its implementation can be resource-intensive and often requires a substantial rewrite of the original serial code. We turn instead to Coarray Fortran as a simpler yet powerful alternative <cit.>. Coarray Fortran, introduced in the Fortran 2008 standard, is designed specifically for parallel programming, both with shared and distributed memory <cit.>. It not only offers a simple syntax but also ensures efficient performance, especially in the Intel implementation, easing the transition from single-node to multiple-node programming.
The fusion of these two paradigms offered by separate providers, while non-trivial, offers a powerful combination to accelerate Fortran codes on the most modern hardware using intuitive Fortran syntax. This paper provides a comprehensive guide on how to perform such an acceleration of Fortran codes using CUDA Fortran and Coarray Fortran. Regardless of the specific problem at hand, our methodology can be adapted and implemented by scientists in their computational work. We demonstrate the advantages and drawbacks of various available configurations, using a well-established and highly parallelisable potential field solver as a case study <cit.>. We also explore various techniques and strategies, such as the usage of pointers to optimise communication speed and implement direct memory access. Additionally, we delve into the definition of variables required by both the central processing unit (CPU) and GPU memory, highlighting the treatment of variables describing a potential field in our case study as an example.
Our proposed approach has a broad focus, providing a roadmap that can be adapted to any Fortran code. Through this detailed guide, we aim to enable researchers to streamline their computational workflows, augment their codes' speed, and thereby accelerate their scientific work.
§.§ Coarray Fortran
Coarray Fortran (CAF), introduced in the Fortran 2008 standard, has gained recognition for its ability to simplify parallel programming. CAF offers an intuitive method for data partitioning and communication, enabling scientists to focus on problem-solving rather than the intricacies of parallel computing.
Coarray Fortran integrates parallel processing constructs directly into the Fortran language, eliminating the need for external libraries like MPI. It adopts the Partitioned Global Address Space (PGAS) model, which views memory as a global entity but partitions it among different processors <cit.>. This allows straightforward programming of distributed data structures, facilitating simpler, more readable codes.
In CAF, the global computational domain is decomposed into a series of images, which represent self-contained Fortran execution environments. Each image holds its local copy of data and can directly access data on other images via coarray variables. These variables are declared in a similar fashion to traditional Fortran variables but with the addition of codimensions. The codimension has a size equal to the number of images. The variables defined without codimensions are local to each image.
Coarray Fortran introduces synchronisation primitives, such as sync all, sync images, lock/unlock, and critical constructs, to ensure proper sequencing of events across different images. This provides programmers with the tools to manage and prevent race conditions and to coordinate communication between images.
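As a brief, hedged illustration of these constructs (the array name phi, its extents, and the ring-exchange pattern are ours, chosen purely for illustration), a coarray can be declared, written to on a neighbouring image, and synchronised as follows:

program caf_sketch
  implicit none
  real :: phi(64,64,64)[*]                        ! coarray: every image holds its own copy of phi
  integer :: me, right
  me = this_image()
  right = merge(1, me + 1, me == num_images())    ! periodic right-hand neighbour
  phi = real(me)
  phi(:,:,1)[right] = phi(:,:,64)                 ! one-sided put into the neighbouring image's copy
  sync all                                        ! ensure all puts complete before the data are used
end program caf_sketch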
§.§ CUDA Fortran
CUDA Fortran is a programming model that extends Fortran to allow direct programming of Nvidia GPUs. It is essentially an amalgamation of CUDA and Fortran, providing a pathway to leverage the computational power of Nvidia GPUs while staying within the Fortran environment.
The CUDA programming model works on the basis that the host (CPU) and the device (GPU) have separate memory spaces. Consequently, data must be explicitly transferred between these two entities. This introduces new types of routines to manage device memory and execute kernels on the device.
In CUDA Fortran, routines executed on the device are known as kernels. Kernels can be written in a syntax very similar to standard Fortran but with the addition of qualifiers to specify the grid and block dimensions. This is how CUDA Fortran harnesses the power of GPUs - by organising threads into a hierarchy of grids and blocks around which the hardware is constructed.
An essential aspect of CUDA Fortran programming is understanding how to manage memory effectively, which involves strategically allocating and deallocating memory on the device and copying data between the host and the device. Special attention needs to be given to optimising memory usage to ensure that the full computational capability of the GPU is used.
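The following self-contained sketch illustrates these points; the module, kernel, and array names are our own and the problem size is arbitrary. The kernel carries the attributes(global) qualifier, the launch uses the chevron syntax to set the grid and block dimensions, and simple assignments between host and device arrays perform the explicit memory transfers:

module kernels_mod
  implicit none
contains
  attributes(global) subroutine scale_kernel(a, factor)
    real :: a(:)                                       ! device array
    real, value :: factor                              ! scalar passed by value to the device
    integer :: i, n
    n = size(a)
    i = blockDim%x * (blockIdx%x - 1) + threadIdx%x    ! global thread index
    if (i <= n) a(i) = a(i) * factor
  end subroutine scale_kernel
end module kernels_mod

program cuda_sketch
  use cudafor
  use kernels_mod
  implicit none
  integer, parameter :: n = 128**3
  real, allocatable          :: a(:)                   ! host array
  real, device, allocatable  :: a_d(:)                 ! device array
  allocate(a(n), a_d(n))
  a = 1.0
  a_d = a                                              ! explicit host-to-device transfer
  call scale_kernel<<<(n + 255)/256, 256>>>(a_d, 2.0)  ! launch with grid and block sizes
  a = a_d                                              ! device-to-host transfer (synchronises)
  deallocate(a, a_d)
end program cuda_sketch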
CUDA Fortran can be integrated seamlessly with existing Fortran codes, offering a less labour-intensive path to GPU programming than rewriting codes in another language <cit.>.
§.§ Heterogeneous Computing with CPUs and GPUs
When developing a code capable of running on a heterogeneous system — that being a system containing more than one type of processor, in this case a CPU-GPU system — it is important to understand the differences in the architecture of parallel execution between the hardware to best distribute tasks and optimize code performance.
Single Instruction, Multiple Data (SIMD) is a parallel computing model where one instruction is executed simultaneously across multiple data elements within the same processor <cit.>. This process is known as vectorisation, and it allows a single processor to perform the same operation on multiple data points at once.
In GPU architectures, SIMD is combined with multi-threading and implemented in a way where warps (groups of threads) receive the same instruction. This means that while multiple threads perform identical operations in parallel with each other, each thread also performs vectorised (SIMD) operations on its assigned data. This broader GPU parallelism is known as Single Instruction, Multiple Threads (SIMT) <cit.>.
While each GPU thread is capable of performing hundreds or thousands of simultaneous identical calculations through vectorisation, individual CPU cores also support SIMD but on a smaller scale, typically performing tens of operations simultaneously each. However, unlike in GPUs, each core within a CPU multiprocessor can receive its own set of instructions. This means that while individual cores perform SIMD operations, the overall CPU executes multiple instructions across its multiple cores, following the Multiple Instruction, Multiple Data (MIMD) model <cit.>.
This means that besides the difference in scale of simultaneous operations between CPUs and GPUs, there are also architectural differences in how CPU MIMD and GPU SIMT handle parallel tasks. In MIMD architectures like those in CPUs, each core operates independently, allowing more flexibility when encountering conditional branching such as an statement. This independence helps minimise performance losses during thread divergence because each core can process different instructions simultaneously without waiting for others.
Conversely, in GPU architectures using SIMT, all threads in a warp execute the same instruction. This synchronization can lead to performance bottlenecks during thread divergence —– for instance, when an statement causes only some threads within a warp to be active. In such cases, GPUs typically handle divergence by executing all conditional paths and then selecting the relevant outcomes for each thread, a process that can be less efficient than the CPU’s approach. This synchronisation requirement makes GPUs highly effective for large data sets where operations are uniform but potentially less efficient for tasks with varied execution paths. Thus, while GPUs excel in handling massive, uniform operations due to their SIMD capabilities within each thread, CPUs offer advantages in scenarios where operations diverge significantly.
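To make the divergence point concrete, the fragment below (our own illustrative code, not part of the solver) contains a conditional whose two branches are taken by alternating threads; on a GPU the two branches of each warp are executed one after the other with the inactive threads masked off, whereas independent CPU cores running equivalent MIMD code each simply follow their own branch:

module divergence_mod
  implicit none
contains
  attributes(global) subroutine branch_kernel(a)
    real :: a(:)
    integer :: i
    i = blockDim%x * (blockIdx%x - 1) + threadIdx%x
    if (i <= size(a)) then
      if (mod(i, 2) == 0) then
        a(i) = a(i) * 2.0        ! even-indexed threads take this branch
      else
        a(i) = a(i) + 1.0        ! odd-indexed threads take this branch
      end if                     ! within a warp the two branches run serially
    end if
  end subroutine branch_kernel
end module divergence_mod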
It is, therefore, important to understand which problems are best suited to which type of processor, and design the code to distribute different tasks to the appropriate hardware.
The layout of this work is as follows. In Sect. 2 we outline our methodology and consider a number of options which are available to solve this problem, depending on the use case. We also present a detailed guide on how compiler linking can be achieved. In Sect. 3 we present the results of our tests on different hardware. In Sect. 4 we summarise our main conclusions.
§ METHODOLOGY
The methodology we propose hinges on the robust combination of CUDA Fortran and Coarray Fortran, leveraging their unique strengths to develop efficient high-performance computing applications. The primary challenge is the complex interfacing required to integrate the GPU-accelerated capabilities of Nvidia CUDA Fortran with the distributed memory parallelism of Intel Coarray Fortran. CUDA Fortran is chosen as it allows the user high levels of control over GPU operations - some of which are highlighted below - while remaining close to the traditional Fortran syntax. Likewise, Coarray Fortran, particularly when implemented by Intel, allows for high distributed memory performance with no augmentations to the standard Fortran syntax.
Below, we detail the steps involved in our approach.
§.§ Selection of Compilers
CUDA Fortran, being proprietary to Nvidia, requires the use of Nvidia's nvfortran compiler. However, this compiler does not support Coarray Fortran. In contrast, Intel's Fortran compiler supports Coarray Fortran - with performance levels that rival MPI and without the complexity of its syntax - but does not support CUDA Fortran. One alternative route to Coarray Fortran is gfortran together with the OpenCoarrays library, but according to our experience its implementation falls short in terms of speed. However, we note that experiences with OpenCoarrays may vary depending on a variety of factors such as hardware configurations and compiler versions.
When considering most Fortran programmes in general, Intel's compiler is a common choice, offering high performance, robustness and portability. Consequently, here we demonstrate a hybrid solution that uses the Intel compiler for Coarray Fortran and the Nvidia compiler for CUDA Fortran.
§.§ Memory Space Configuration
When creating a single code which requires the use of two compilers, a few key considerations are required. In the following text, Intel code refers to code compiled with the Intel compiler and Nvidia code refers to code compiled with the Nvidia compiler. The various execution streams mentioned below refer to an executed command being made within either the Intel code or the Nvidia code.
§.§.§ Pageable and Pinned Memory
Before continuing, a short background on pageable and pinned memory is required.
Pageable memory is the default memory type in most systems. It is so-called because the operating system can page it out to the disk, freeing up physical memory for other uses. This paging process involves writing the contents of the memory to a slower form of physical memory, which can then be read back into high-speed physical memory when needed.
The main advantage of pageable memory is that it allows for efficient use of limited physical memory. By paging out memory that is not currently needed, the operating system can free up high-speed physical memory for other uses. This can be particularly useful in systems with limited high-speed physical memory.
However, the paging process can be slow, particularly when data is transferred between the host and a device such as a GPU. When data is transferred from pageable host memory to device memory, the CUDA driver must first allocate a temporary pinned memory buffer, copy the host memory to this buffer, and then transfer the data from the buffer to the device. This double buffering incurs overhead and can significantly slow down memory transfers, especially for large datasets.
Pinned memory, also known as page-locked memory, is a type of memory that cannot be paged out to the disk. This means that the data is constantly resident in the high-speed physical memory of the system, which can result in considerably faster memory transfers between the host and device.
The main advantage of pinned memory is its speed. Because it can be accessed directly by the device, it eliminates the need for the double-buffering process required by pageable memory on GPUs. This can result in significantly faster memory transfers, particularly for large datasets.
However, pinned memory has its drawbacks. The allocation of pinned memory is more time-consuming than pageable memory - something of concern if not all memory allocation is done at the beginning - and it consumes physical memory that cannot be used for other purposes. This can be a disadvantage in systems with limited physical memory. Additionally, excessive use of pinned memory can cause the system to run out of physical memory, leading to a significant slowdown as the system starts to swap other data to disk. Pinned memory is not part of the Fortran standard and is, therefore, not supported by the Intel compiler. This leads to additional intricacies during the combination of compilers, as we discuss below.
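In CUDA Fortran, a page-locked host buffer is requested simply by adding the pinned attribute to an allocatable host array, and the optional pinned= specifier on allocate reports whether page-locking actually succeeded. The sketch below is illustrative (the array name and extents are ours) and, because pinned is a CUDA Fortran extension, it compiles only with the Nvidia compiler:

program pinned_sketch
  use cudafor
  implicit none
  real, allocatable, pinned :: buf(:,:,:)      ! page-locked host buffer
  real, device, allocatable :: buf_d(:,:,:)    ! device-side counterpart
  logical :: pinnedFlag
  integer :: istat
  allocate(buf(128,128,128), stat=istat, pinned=pinnedFlag)  ! pinnedFlag reports success of page-locking
  allocate(buf_d(128,128,128))
  buf = 0.0
  buf_d = buf      ! transfer from pinned memory avoids the intermediate staging copy
  deallocate(buf_d, buf)
end program pinned_sketch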
§.§.§ Host memory: Coarray Fortran
Returning to our problem, when using Intel code and Nvidia code together, the division of parameters and variables between the two is one of the key areas to which attention should be paid. The execution stream within the Nvidia code can only access and operate on variables and parameters which have been declared in the Nvidia code. Likewise, the Intel execution stream can only access and operate on variables and parameters which have been declared in the Intel code. For this reason, we consider a virtual partition within the host physical memory, which can be crossed through the use of subroutines and interfaces, which we discuss in more detail below. This clear division of the physical memory does not happen in reality, as Intel and Nvidia declared variables and parameters will be mixed together when placed in the physical memory, but it is a helpful tool to consider the possible configurations we detail below.
Which side of the partition in the host memory a variable is placed on determines where, and at what speed, it can be transferred. Figure <ref> shows a selection of these options. In the leftmost case, a variable has two identical copies in the host memory, one defined by the Intel compiler to allow CAF parallelisation (simply by declaring it with a codimension) and one by the Nvidia compiler to allow it to be pinned. Proceeding to the right, in the next case, the variable is defined by the Intel compiler for CAF communication, but a pointer to this variable is defined by the Nvidia compiler, meaning it cannot be pinned and so suffers from the slower pageable memory transfers to the GPU. In the next cases, we consider the options available with MPI. The variable is defined by the Nvidia compiler, which allows it to be pinned, but a pointer to this variable is defined by the Intel compiler. This means that CAF transfers are not possible, as they are not supported for pointers, so MPI is required for distributed memory parallelism. In the final case, the Intel code is not required at all, as the Nvidia compiler supports these MPI transfers natively. To understand any overhead associated with the pointing procedure which allows the combination of CAF and CUDA, we implemented the two MPI solutions in our potential solver and found no appreciable difference in performance. Any small speedup or slowdown between the two options - one using pointers to combine both compilers and one only using the Nvidia compiler - is likely due to the different MPI versions and implementations used by the compilers, rather than to the pointing procedure.
The partition-crossing use of such a variable is much slower when copying the array than when using a pointer, which has practically no speed overhead at all. In the case that the variable is copied across the partition, the version which is defined in the Nvidia code can be defined with attributes allowed by the Nvidia compiler, one of these being that it is pinned. This would not be possible in the Intel version of the variable because, as previously discussed, pinned memory is not part of the Fortran standard and so is not supported by the Intel compiler. It is important to note that, as mentioned above, the Nvidia compiler natively supports MPI, and can therefore run Fortran codes parallelised across shared memory, distributed memory and GPUs. While Coarray Fortran offers a simpler syntax, it requires a slightly more complex compilation process and setup when using GPU parallelisation. We, therefore, lay out this process as clearly as possible in this article, to make the simplified speed-up of Coarray Fortran as accessible as possible.
The pointer solution, while allowing a faster cross-partition solution, does not allow for this pinned attribute, as the Nvidia code simply points to the array defined by the Intel compiler, which resides in pageable memory. This means a pageable transfer rather than a pinned transfer takes place when moving the values onto the device, which is slower.
It is important to consider which solution is more optimal for the problem being addressed. On our hardware, testing with a 128^3 array, pinned memory transfers took around 1 ms, as opposed to 3 ms for pageable memory transfers. However, during cross-partition operations, using a pointer resulted in essentially no delay (∼0 ms), as opposed to 3 ms when using a copy operation. Given the choice between 1) 1 ms GPU transfers but 3 ms cross-partition transfers, or 2) 3 ms GPU transfers but ∼0 ms cross-partition transfers, it is important to consider the specific use case.
In our case, and therefore also in most cases, it is more optimal to use a pointer in the Nvidia code. This does not mean pinned memory transfers to the device cannot be used at all. They simply cannot be used for any coarrayed variables, which reside in the Intel code partition. During implementation, it is also clearly easier to implement the pointer configuration, which we describe now.
First, we show the implementation of two copies of the same array stored in the physical memory of the host, either side of our partition. We note that this is inefficient and makes poor use of the memory available given that parameters and variables requiring transfer are stored twice, although this is not typically a problem for modern high-performance computing systems. Furthermore, every time a variable is updated by the execution stream of one side of the code, a transfer is required across the host physical memory causing some considerable slowdown — particularly in the case of large arrays which are frequently updated. Technically, a transfer could only be made if it is known that the data will be changed on the other side of the partition before being used again, but this introduces a large and difficult-to-detect source for coding errors, especially in complex codes, and so is not advisable. This transfer could also be performed asynchronously to mask the transfer time, but the inefficient use of available physical memory is intrinsic to this solution. An illustration of how this setup works can be seen in Figure <ref>, the stages of which are as follows:
1. The Nvidia execution stream calls a subroutine, which has been defined and compiled within the Intel code, to get the value of the array (in this case a coarray) which is to be operated on. To do this, an interface is defined within the Nvidia code with the bind(C) attribute, meaning that while the source code is Fortran, the interface follows C linkage conventions, ensuring a robust connection between the two compilers. The destination subroutine is also written with the bind(C) attribute for the same reason. An example subroutine showing how to use C-binding is provided in the appendix, and a minimal sketch is also given after this list.
2–3. The subroutine returns the value of the array and sets the value of the counterpart array in the Nvidia code to the same value.
4. The array is now updated on the Nvidia partition and can be operated on by the Nvidia execution stream.
5. A pinned memory transfer can now take place, moving the array to the device memory.
6. The array is operated on by the device.
7. A pinned memory transfer allows the array to be moved back to the Nvidia partition of the host memory.
8–11. An inverse of the first four steps takes place, allowing the updated array to be used by the Intel execution stream, and accessible for coarray transfer to the rest of the distributed memory programme.
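As a hedged sketch of this copy-based pattern (the coarray name phi, its extents, and the routine names are placeholders rather than our production code), the Intel-compiled side exports bind(C) accessors that copy the coarray to and from a caller-supplied buffer, and the Nvidia-compiled side declares matching bind(C) interfaces together with a pinned counterpart array:

! Intel-compiled side: the coarray and its bind(C) accessors
module field_mod
  use iso_c_binding
  implicit none
  real(c_double) :: phi(128,128,128)[*]              ! coarray, one copy per image
contains
  subroutine get_phi(buf) bind(C, name="get_phi")
    real(c_double), intent(out) :: buf(128,128,128)
    buf = phi                                        ! copy across the compiler partition
  end subroutine get_phi
  subroutine set_phi(buf) bind(C, name="set_phi")
    real(c_double), intent(in) :: buf(128,128,128)
    phi = buf
  end subroutine set_phi
end module field_mod

! Nvidia-compiled side: matching interfaces and a pinned counterpart
module field_copy_mod
  use iso_c_binding
  implicit none
  real(c_double), allocatable, pinned :: phi_nv(:,:,:)
  interface
    subroutine get_phi(buf) bind(C, name="get_phi")
      import :: c_double
      real(c_double), intent(out) :: buf(128,128,128)
    end subroutine get_phi
    subroutine set_phi(buf) bind(C, name="set_phi")
      import :: c_double
      real(c_double), intent(in) :: buf(128,128,128)
    end subroutine set_phi
  end interface
end module field_copy_mod

The Nvidia execution stream would allocate phi_nv once, call get_phi(phi_nv) before the device work and set_phi(phi_nv) afterwards, with the pinned attribute giving the faster host-to-device transfers discussed above.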
What was found to be more efficient in our use case, and in general in terms of execution speed, memory usage and coding simplicity, was to declare a parameter or variable once in either the Intel or the Nvidia code, and then create a pointer to this parameter or variable in the other partition of the code. This means that whenever, for example, the Intel execution stream requires a large array from the Nvidia code, no slowdown is caused during the transfer of the array to the other partition and no unnecessary overhead in the physical memory is present.
This pointing can be accomplished using an intermediate series of subroutines, calls, and pointers, all of which are written in Fortran but declared with the c-binding to ensure a robust and common connection between the two compilers, as mentioned above. In the prior duplication case, these subroutines and calls are required before the use of any parameters or variables which have been changed by the alternate execution stream since their last use, to ensure an up-to-date version is used. However, with this new pointer case, the subroutines and calls are only required once at the beginning of the running to setup and initialise the pointing, making it harder to mistakenly forget to update an array before using it in the source code. This configuration can be seen in Figure <ref>. The steps of this configuration, to set up a coarrayed variable capable of distributed memory transfer and a counterpart pointer which allows transfer to the device for GPU accelerated operations, are as follows:
1. The Nvidia execution stream again calls a subroutine, defined and compiled within the Intel code, to get the address in memory of the array (in this case a coarray) which is to be operated on. This array must be declared with the target attribute in the Intel code, so it can be pointed to. As before, an interface is defined within the Nvidia code with the bind(C) attribute to allow this connection. The destination subroutine is also written with the bind(C) attribute, and the pointer is declared as a C pointer (type(c_ptr)) for the same reason.
2. The subroutine returns the address of the array as a C-pointer.
3. The C-pointer is converted to a Fortran pointer for use by the Nvidia code. The original coarray, as defined by the Intel code and residing in the Intel partition of the host memory, can now be operated on and transferred to the device for GPU-accelerated operations. We note that in this case the data are transferred from the pageable host memory to the device, thus incurring a certain overhead, as described above.
This handling of sensitive cross-compiler operations with C-bindings was found to be essential, as relying on Fortran-Fortran interfacing between the compilers led to non-descript segmentation errors. The use of C-binding ensures a robust solution to this issue, with no overhead in the execution speed and no deviation from the Fortran syntax in the source code.
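The pointer-based alternative described in steps 1-3 can be sketched in a similarly hedged way (names and extents are again placeholders): the Intel-compiled side returns the C address of its coarray via c_loc, and the Nvidia-compiled side converts that address once, at start-up, into a Fortran pointer with c_f_pointer:

! Intel-compiled side: expose the address of the coarray
module field_ptr_mod
  use iso_c_binding
  implicit none
  real(c_double), target :: phi(128,128,128)[*]    ! target attribute allows it to be pointed to
contains
  function get_phi_addr() result(addr) bind(C, name="get_phi_addr")
    type(c_ptr) :: addr
    addr = c_loc(phi)                              ! C address of the local part of the coarray
  end function get_phi_addr
end module field_ptr_mod

! Nvidia-compiled side: one-off association at start-up
module field_view_mod
  use iso_c_binding
  implicit none
  real(c_double), pointer :: phi_view(:,:,:) => null()
  interface
    function get_phi_addr() result(addr) bind(C, name="get_phi_addr")
      import :: c_ptr
      type(c_ptr) :: addr
    end function get_phi_addr
  end interface
contains
  subroutine attach_phi()
    call c_f_pointer(get_phi_addr(), phi_view, [128, 128, 128])  ! Fortran view of the Intel-side array
  end subroutine attach_phi
end module field_view_mod

Here attach_phi is called once during initialisation; thereafter phi_view can be read, written, and copied to the device directly, with every change immediately visible to the Intel-side coarray, at the cost of the device transfers being from pageable rather than pinned memory.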
§.§.§ Host memory: MPI
This technique is almost identically applicable when using MPI. However, it should be noted that in that case, an additional benefit is present in that arrays which are communicated using the MPI protocol do not require a coarray attribute in their definition. This means that, opposite to the case in Figure <ref>, the array can be defined in the Nvidia code, and the pointer in the Intel code. In this case, the full speed-up of data transfer for the pinned memory can be utilized. An illustration of this can be seen in Figure <ref>.
§.§ CPU Affinity with GPUs
In any high-performance computing application using either PGAS, or more widely the single program multiple data (SPMD) technique, the ability to control the allocation of programme processes to specified processing units—commonly referred to as CPU affinity or process pinning—is pivotal for performance optimisation. This is especially relevant in systems with a Non-Uniform Memory Access (NUMA) architecture (common in HPC), and/or in systems with multiple CPUs and GPUs.
In a NUMA architecture, the term NUMA node refers to a subsystem that groups sets of processors within one multiprocessor together and connects them with a memory pool, forming a closely-knit computational unit. Unlike a Uniform Memory Access (UMA) configuration where each processor within the multiprocessor has equal latency for memory access across the system, NUMA nodes have differing latencies depending on whether the accessed memory resides within the same NUMA node or in a different one. Taking the example of one of our systems, described in detail later, this can be seen in Figures <ref> and <ref>. When using a hybrid programming model, such as a PGAS language like Coarray Fortran in combination with OpenMP in our case, it becomes specifically important to remember these.
Therefore, a well-configured system strives to keep data and tasks localised within the same NUMA node whenever possible, mitigating the impact of varying memory access latencies across NUMA nodes. This concept is critical for understanding the intricacies of CPU affinity and process pinning in multi-socket, multi-GPU systems.
A Linux command such as numactl --hardware can be used to determine the relative distances between NUMA nodes, the memory available to each group, and the ID of each processor within each NUMA node. If more exact latency numbers are required between NUMA nodes, tools like the Intel Memory Latency Checker can be used. If hyperthreading is enabled on a system, then additional processor IDs will be shown in addition to those referring to physical processors. In our case, our first NUMA node, NUMA node 0, encompasses the processors 0–15 and 128–143, with the former referring to the physical processors and the latter referring to their hyperthreaded counterparts. We do not concern ourselves with hyperthreading in this paper, as we observed no speed improvements when using it.
A second important aspect of the NUMA architecture when considering GPUs is the connection of the GPUs to the processors, which can be inspected using a command such as nvidia-smi topo -m. In our architecture, a GPU is connected to one NUMA node, with one device connected per socket. As seen in Figure <ref>, each NUMA node within a socket has a slight speed penalty associated with communication with NUMA nodes in the same socket (20% increase) and a high overhead associated with communicating with the other socket (220% increase). When a coarray image on one socket, therefore, attempts to communicate with the GPU of the other, there is a large overhead in the communication. Where our GPUs are connected is shown in the figure.
For practical application within Intel Coarray Fortran, the Intel MPI Library offers environment variables to facilitate CPU affinity settings, in particular I_MPI_PIN and I_MPI_PIN_DOMAIN=[mask1,mask2,...], where each mask is a hexadecimal (hex) code corresponding to the IDs of the compute cores within the multiprocessors which are to be connected to the GPUs. This is described in more detail below.
To effectively set CPU affinity through Intel MPI's environment variables, one must provide the correct hexadecimal (hex) codes corresponding to the CPU or core IDs. These hex codes serve as unique identifiers for the CPUs and are critical for pinning computational tasks accurately.
In the case of our code, each coarray image is designed to require one GPU. It is, therefore, important to ensure that each coarray image is running on the socket containing the NUMA node that is connected to the correct GPU, namely, to the GPU the coarray image is using. Other situations where multiple coarray images use the same GPU are also possible, in which case these images should be pinned within the same socket the GPU is connected to. However, if more images are added to a socket, they each naturally must reside on fewer cores, meaning that any CPU parallelisation is limited. Determining the balance between CPU and GPU parallelisation is something that will vary between use cases.
As aforementioned, in our example, we are interested in CPU affinity because we want to ensure that each coarray image runs on the socket containing the NUMA node that is connected to the GPU it is using.
Once the desired destination CPU IDs have been identified, they can be converted to hexadecimal format in the following way: Given two coarray images are running on one node (in this case, one image on each socket), the CPU code of the first image should be pinned to NUMA node 3 and CPU code of the second image to NUMA node 7, with the corresponding GPU code pinned to GPUs 0 and 1 respectively. The GPU number used by the coarray image can simply be set using the Nvidia CUDA Fortran line
istat = cudaSetDevice(mod(irank-1,gpusPerNode))
where irank is the Coarray Fortran image number, starting from 1, and gpusPerNode is the number of GPUs attached to each computational node, this being 2 in the case shown in Figures <ref> and <ref>. This will effectively alternate between 0 and 1 as the image number (irank) increases from 1 to the last image.
Taking our test case, it is known that NUMA node 3 corresponds to cores 48–63. This first needs to be encoded in binary, considering all 128 cores (IDs 0–127) available to us:
00000000000000000000000000000000 00000000000000001111111111111111 00000000000000000000000000000000 00000000000000000000000000000000
This is then converted to a hex code. The same is done for NUMA node 7 (cores 112–127) and the corresponding hex code is generated. Once obtained, these hex codes can be placed in the I_MPI_PIN_DOMAIN environment variable to set CPU affinity precisely as:
export I_MPI_PIN=on
export I_MPI_PIN_DOMAIN=[FFFF0000000000000000,FFFF]
Here I_MPI_PIN_DOMAIN receives two arguments, one for each image position within the node, given that in our case we want to pin two coarray images to each node. Each coarray image is then allocated, starting with the first image (1) and proceeding in order to the last image, to position 1 on node 1, position 2 on node 1, position 1 on node 2, position 2 on node 2, and so on.
Activating these environment variables and running the Coarray Fortran executable pins the computation to the designated cores. The fidelity of this configuration can be corroborated via Intel MPI's debug output (for example, by setting the I_MPI_DEBUG environment variable), which reports the process pinning at start-up.
The performance difference between pinning coarray images on the correct, and incorrect, sockets for their associated GPUs, can be seen in our scaling tests below.
§.§ Compiling and Linking
Finally, we compile the CUDA Fortran code with the Nvidia compiler into its constituent object files and then into one shared object library file. The Coarray Fortran code is compiled with the Intel compiler into its constituent object files. To link the two together, we use the Intel compiler, ensuring that the linking is performed in an environment where both CUDA Fortran and Coarray Fortran libraries are included in the library path. We outline a simplified procedure here:
* Compile the CUDA Fortran device code into a position-independent machine code object file:
* Create a shared object library file using the object file:
* Compile the Fortran host code into a machine code object file:
* Create a CAF configuration file:
* Link the host machine code object file and the device machine code shared object library file, also using CAF distributed memory flags:
Before execution of the programme, the relative path to the source directory should also be appended to the dynamic shared library search path environment variable LD_LIBRARY_PATH; an illustrative set of commands for these steps is given below.
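As an illustration of this procedure, the commands below show one possible realisation. The file names, library name, number of images, and executable name are placeholders of ours, and the exact flags can differ between compiler versions, so this should be read as a sketch rather than a prescription:

nvfortran -cuda -fPIC -c device_code.cuf -o device_code.o        # 1: CUDA Fortran to position-independent object code
nvfortran -shared -o libdevice.so device_code.o                  # 2: build the shared object library
ifort -coarray=distributed -c host_code.f90 -o host_code.o       # 3: compile the Coarray Fortran host code
echo "-genvall -n 8 ./solver" > caf_config.txt                   # 4: CAF configuration file (8 images, placeholder)
ifort -coarray=distributed -coarray-config-file=caf_config.txt host_code.o -L. -ldevice -o solver   # 5: link host and device code
export LD_LIBRARY_PATH=$PWD:$LD_LIBRARY_PATH                     # make libdevice.so visible at run time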
Our methodology involves a meticulous combination of CUDA Fortran's GPU acceleration and Coarray Fortran's efficient data distribution across multiple compute nodes. It requires careful attention to compiler selection, memory allocation, and interfacing between two different programming paradigms. The result is a seamless integration of these distinct models into a single high-performance computing application with high-performance GPU and distributed memory acceleration.
§.§ Intel Coarray Fortran with AMD processors
Intel's compiler does not officially support running code on AMD processors. Therefore, there are inevitably some complications when attempting to do this. Many high-performance computing centres use AMD processors, including the ones used by us for this study. We faced errors from the remote memory access (RMA) part of Coarray Fortran (CAF) when running our code on AMD processors. Unlike pure MPI, CAF uses a one-sided communications protocol, which means that instead of each image being told to send and receive data, one image is allowed to access the data of another without any action from the target image. CAF still uses MPI commands, and therefore requires an MPI installation, but only uses the one-sided protocols.
We used the most recent version of the Intel compiler available to us and found that it cannot perform such RMA operations when AMD processors are being used, and therefore CAF programmes cannot run. There are three solutions we identified to this problem, the first of which is to change the MPI library used by the Intel compiler to a different release. The second is to change the Open Fabric Interface (OFI) version. The third, and in our experience the fastest, solution is to change the internal default transport-specific implementation. This still allows RMA operations to take place but through a slightly different, AMD-processor-compatible, method.
This can be done by setting the MPICH environment variable that disables RMA in the CH4/OFI layer. This is an environment variable setting for the Message Passing Interface (MPI) implementation, specifically for the MPICH library, which is a widely used open-source MPI implementation. The Intel Fortran compiler supports MPI for parallel programming, and the MPICH library can be used in conjunction with the Intel Fortran compiler for this purpose. The environment variable is related to the CH4 device layer in MPICH, which is responsible for communication between processes. CH4 is a generic communication layer that can work with different network interfaces, and OFI (Open Fabric Interface) is one such interface. The variable controls the usage of RMA operations in the CH4 device layer when using the OFI network interface. RMA operations enable processes to directly access the memory of other processes without involving the target process, thus providing low-latency communication and better performance.
To understand the overhead associated with changing the internal default transport-specific implementation, we ran speed tests using the Intel MPI benchmarking tool on some of the MPI operations which are used by CAF, both with and without OFI RMA disabled. We used two one-sided (RMA) benchmark operations for this comparison, the results of which can be seen in Figure <ref>. As seen in this Figure, there is little effect above a few × 10^6 bytes, saturating to no speed overhead incurred by changing the RMA implementation as described. Therefore, our comparisons of speeds between MPI and our slightly augmented CAF can be seen as representative of a direct comparison between MPI and CAF.
§ PERFORMANCE ANALYSIS AND RESULTS
To test our integration of CAF with CUDA Fortran, and to compare it with a hybrid MPI-CUDA Fortran approach, we employed the convolution method for solving potential fields on nested grids in three dimensions <cit.>, referred to hereafter as CM4NG. This method provides an appropriate benchmark due to its reliance on shared memory, distributed memory, and GPU parallelism. We performed testing on two clusters with different hardware configurations: the Vienna Scientific Cluster 5 (VSC5, <https://vsc.ac.at/>) and Compute Canada's Narval (<https://www.calculquebec.ca/>, <https://alliancecan.ca/>). The specifications of the hardware configurations are shown in Table <ref>.
The domain distribution performed on each of the clusters for the convolution theorem on nested grids can be seen in Figure <ref>.
In our case, each nested mesh was allocated one GPU, with the number of images per socket varying based on the cluster configuration. GPU utilisation was over 50% during the test cases, meaning that a computational deceleration would be observed if hosting multiple nested grids on one GPU. If hardware availability is an issue, this may be outweighed by HPC queue times, and a technique called CUDA Streams could be used to split the GPUs. However, these were not considered in this study.
In the following text, N is used to refer to the length of the three-dimensional array used by each mesh level in the convolution method. Ndepth is used to denote the number of mesh levels, each one corresponding to a Coarray Fortran (and MPI when used for comparison purposes) image. For these tests, RMA operations in the CH4 device layer using the OFI interface were disabled, as explained above, for both the CAF and MPI tests to ensure comparability. In the following text, CAF+ is used to refer to the fully parallelised hybrid Coarray Fortran, CUDA, and OpenMP approach explained above. MPI+ is similarly used to refer to the aforementioned MPI, CUDA, and OpenMP approach.
§.§ GPU and distributed memory acceleration
To assess the performance of our integration, we first compared the execution times and scaling efficiency of our solution when only parallelised with OpenMP for shared memory parallelism, with shared memory parallelism combined with a single GPU acceleration, and then compared this with the fully parallelised and optimised Coarray Fortran-CUDA-OpenMP hybrid approach. The results of these tests and how they scale, for both N=64 and N=128 can be seen in Figure <ref> for different numbers of nested levels. It should be noted that this figure uses VSC5 for the single-node results and Narval for the multi-node results. The performance of the two clusters using distributed memory is shown later.
It can be seen that GPU acceleration is highly desirable, as it enables an acceleration of a factor of approximately 10. Distributed memory parallelism further accelerates our solver by a factor of at least 2.5. In cases such as these, where a potential solver forms part of a larger code which is entirely Coarray Fortran parallelised, the acceleration is further enhanced by requiring fewer transfers to a single node to perform the GPU operations. However, such additional complexities are not considered here.
§.§ CPU-GPU affinity
To present the importance of CPU-GPU affinity, we confined each nested mesh level to a NUMA node and then performed speed testing by running the code with perfect affinity and perfectly inverse affinity. These optimal and pessimal configurations are illustrated in Figure <ref>. Case a) presents the perfect affinity configuration when the coarray image runs its GPU tasks on the device directly connected to its CPU socket, while case b) shows the worst affinity configuration when the coarray image has to communicate through the other socket to perform its GPU tasks.
The results of this testing can be seen in Figure <ref>. The degree to which CPU-GPU affinity impacts the performance of CM4NG is dependent on the size of the computational task, given that the CPU-GPU transfer time takes up a smaller portion of the solving time as the number of computations increases with grid size. In our case, when it has the most impact, the optimal configuration is approximately 40% faster than the pessimal configuration. This is seen when using two nested meshes, each with 64^3 resolution. For the 64^3 resolution cases, the difference between optimal and pessimal configurations saturates, to the point where for 12 nested mesh levels, the advantage is negligible. For the 128^3 case, the difference between optimal and pessimal configurations is negligible for all mesh depths.
§.§ GPUs per node
To test the performance of the code when running on a different number of GPUs per node, we performed scaling tests on VSC5 and Narval. The results of this can be seen in Figure <ref>. Notable here is the speed-up of the code when using 4 GPUs per node on Narval as opposed to 2 GPUs per node on VSC5. Although Narval's CPUs support a slightly higher clock speed and a larger L3 cache, these differences are mostly due to the fact that inter-node communication is more costly than intra-node communication. When 4 GPUs per node are used as opposed to 2, only 3 nodes in total are needed as opposed to 6. This reduces the transfer time introduced by distributed memory parallelism, as the distribution is across a smaller number of nodes.
§.§ Coarray Fortran vs MPI
To test the efficiency of the Coarray Fortran implementation explained above, we performed scaling tests for both the CAF and MPI versions of the solver. The results of these tests can be seen in Figure <ref>. We observe that the performance of the CAF and MPI versions of the code are comparable, with the MPI version performing approximately 10% faster for 12 nested mesh levels for the 64^3 resolution case, and approximately 5% faster for the 128^3 resolution case. This difference becomes more pronounced as the size of the computational domain decreases, with CAF being 30% slower for 64^3 with 2 nested meshes and 50% for 128^3. This is partly due to the ability of the MPI code to use pinned memory for device transfers, as described above. However, the main reason for this difference is the faster implementation of MPI than CAF by Intel in the compiler. In the cases where the number of mesh levels is lower, the computation forms less than half of the total time to complete the solution, with the majority of time being spent on communication. This means the result is extremely sensitive to the speed of the communication. In real application, for the production of useful results, CM4NG is used at 8 nested mesh levels and above. In cases where data transfers form the majority of the time to complete the solution, and actual computation forms the minority, MPI becomes more desirable.
§.§ Execution times
The execution times for the different hardware configurations show that Coarray Fortran performs comparably to MPI, scaling in the same manner with no worsening of performance as the computational domain increases, both in nested mesh size and in the number of nested meshes. All results demonstrate a near-proportional scaling of the code with nested mesh level. In our method, no asynchronous transfers can be performed, so all effects of transfers between the device and the host are reflected in the results. However, as seen above, the code performs better when more GPUs are clustered on a computational node, since fewer inter-node transfers are required, and these are more costly than intra-node transfers.
We found that for a CM4NG computational domain of a size producing adequate accuracy, the CAF-CUDA-OMP integration was 5% slower than the MPI-CUDA Fortran approach. We consider this in the context of the simplicity of implementing Coarray Fortran compared to MPI. In our experience, many legacy Fortran codes were written to run serially. Such codes can be parallelised with modest effort to run on a single node using OpenMP, unless common blocks were utilised, in which case parallelisation is difficult and requires introducing modules. In any case, a distributed memory parallelisation of legacy Fortran codes using MPI requires a significant rewrite of the code which, in terms of the required effort, may be comparable to writing a new code from scratch. With Coarray Fortran, the distributed memory parallelisation of legacy Fortran codes becomes feasible with relatively little effort yet significant speed-up, because CAF requires introducing only a few additional lines and variables while keeping the existing code structure and syntax intact. The fact that Coarray Fortran can be integrated with CUDA Fortran makes the coarray language standard particularly attractive for scientists.
§ CONCLUSIONS
In this study, we have successfully demonstrated a robust methodology for integrating Intel Coarray Fortran with Nvidia CUDA Fortran, supplemented by OpenMP, to optimise Fortran codes for high-speed distributed memory, GPU, and multiprocessor acceleration. Our approach, focusing on the fusion of these distinct yet powerful paradigms, has shown significant promise in enhancing the computational performance of Fortran codes without the need for extensive rewrites or departure from familiar syntax.
* Performance Improvements: Our results indicate that the Coarray Fortran-CUDA Fortran integration leads to substantial performance gains. For our use case, we observed only a 5% increase in execution time compared to a similar MPI-CUDA Fortran approach. This is significant considering the comparative ease of implementing Coarray Fortran over MPI. Our findings underscore the potential of Coarray Fortran, especially for the scientific community that relies heavily on Fortran codes. Its straightforward syntax and implementation make it an accessible and powerful tool for researchers who may not have extensive experience with more complex distributed memory parallelism techniques.
* Scalability: The near-linear scaling of our potential solver code with the increase in nested mesh levels and the number of nested meshes highlights the efficiency of our approach, and this scaling is present in both the MPI and Coarray Fortran implementations of the code. This scalability allows the code to be run at a competitive speed on a range of hardware depending on its performance.
* Hardware Utilisation: Our methodology's ability to leverage multiple GPUs effectively, as evidenced by improved performance on systems with a higher concentration of GPUs per computational node, points towards the importance of hardware-aware code optimisation and the minimisation of distributed memory transfers. The speed of these communications should be as high as possible and not burdened by transfers beyond those to and from the GPU memory. Our methodology avoids this by using pointers, ensuring the most optimal Coarray Fortran and MPI performance.
* CPU-GPU affinity: When using Coarray Fortran and CUDA Fortran together, we observe an increase in performance when CPU-GPU affinity is optimised, and show that this is particularly important when data transfer times make up a significant proportion of the total solution time. We demonstrate how CPU-GPU affinity can be optimised when using Coarray Fortran via an environment variable.
* Portability: As our approach relies in part on the Intel Fortran compiler, we outline effective solutions to enable the running of Coarray Fortran on AMD processors, the fastest of these being to change the remote memory access protocol used by CAF.
In conclusion, our integration of Coarray Fortran with CUDA Fortran and OpenMP offers a significant step forward in modernising Fortran-based scientific computing. Multiple implementations are available, and we have compared and contrasted them according to the use case.
§ ACKNOWLEDGEMENTS
We are thankful to the referee for the comments and suggestions that helped to improve the manuscript. This work was supported by the FWF project I4311-N27 (J.M., E.I.V.) and RFBR project 19-51-14002 (I.K.).
Benchmarks were performed on the Vienna Scientific Cluster (VSC)[https://vsc.ac.at/https://vsc.ac.at/] and on the Narval Cluster provided by Calcul Québec[https://www.calculquebec.ca/https://www.calculquebec.ca/] and the Digital Research Alliance of Canada[https://alliancecan.ca/https://alliancecan.ca/].
§ C-BINDING IN FORTRAN
C-binding in Fortran is a powerful feature that allows interoperability with C, enabling the use of C data types, and the calling of C functions and libraries. This feature is particularly useful in high-performance computing where leveraging both Fortran's computational efficiency and C's extensive library ecosystem can be advantageous.
In the context of our study, C-binding plays a critical role in enabling robust interaction between the Intel and Nvidia Fortran compilers, allowing them to communicate according to the C standard. When implementing pure Fortran interfaces between the compilers, we encountered numerous segmentation faults, which were avoided when the communication was facilitated by binding the connection with C. This section provides an overview of how C-binding is used in our methodology and a simple example for illustration.
§.§ Using C-binding in Fortran to combine Fortran compilers and allow a robust common memory space
In the case below, code compiled with compiler A receives the memory location of a variable owned by code compiled with compiler B. The code compiled by compiler A is then able to access the variable directly.
To use C-binding in Fortran, specific steps must be followed:
B1 Define the subroutine to return the memory location: In code compiled with compiler B, define a subroutine with the bind(C) attribute, which returns the C address of the variable.
A1 Define the interface to the subroutine: In the code compiled with compiler A, define an interface to the previous subroutine, also using the bind(C) attribute. This must be done inside a module.
A2 Call the subroutine: In the code compiled with compiler A, call the subroutine defined in step B1. This will return the C address of the variable to the code compiled with compiler A.
A3 Convert the C address to a Fortran pointer: In the code compiled with compiler A, convert the C address to a Fortran pointer. This can be done using the c_f_pointer intrinsic from the iso_c_binding module.
§.§ Example of C-Binding for memory location passing
Below is a simple example demonstrating the use of C-binding in Fortran to pass the memory location of a variable from one compiler to another.
§.§.§ Code compiled with compiler B
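A minimal sketch of what this side might look like is given below; the module, variable, and subroutine names (b_data, field, get_field_ptr) are illustrative rather than those used in CM4NG.

! Compiled with compiler B: owns the data and exports its C address.
module b_data
   use iso_c_binding
   implicit none
   real(c_double), target :: field(1000)   ! variable to be shared
contains
   subroutine get_field_ptr(p) bind(C, name='get_field_ptr')
      type(c_ptr), intent(out) :: p
      p = c_loc(field)                      ! return the C address of field
   end subroutine get_field_ptr
end module b_data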
§.§.§ Code compiled with compiler A
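The compiler-A side then declares a matching bind(C) interface, retrieves the address, and maps it onto a Fortran pointer; again, the names are illustrative.

! Compiled with compiler A: imports the address and accesses the data directly.
module a_access
   use iso_c_binding
   implicit none
   interface
      subroutine get_field_ptr(p) bind(C, name='get_field_ptr')
         import :: c_ptr
         type(c_ptr), intent(out) :: p
      end subroutine get_field_ptr
   end interface
end module a_access

program use_shared_field
   use iso_c_binding
   use a_access
   implicit none
   type(c_ptr) :: cp
   real(c_double), pointer :: f(:)

   call get_field_ptr(cp)            ! step A2: obtain the C address
   call c_f_pointer(cp, f, [1000])   ! step A3: convert it to a Fortran pointer
   f(1) = 42.0_c_double              ! direct access to compiler B's memory
end program use_shared_field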
In this example, the code compiled with compiler A defines a module containing a bind(C) interface to a subroutine that is itself written in Fortran. This methodology can be used not only to pass C addresses but, more generally, to call functions and subroutines across Fortran code compiled with different compilers.
§.§ Integration in Our Methodology
In our methodology, C-binding is employed to create interfaces between CUDA Fortran and Coarray Fortran components. This is crucial for transferring data and control between different parts of the application, which are compiled with different compilers. By using C-binding, we ensure a robust and efficient interaction between these components.
|
http://arxiv.org/abs/2409.03341v1 | 20240905083810 | Direct Readout of Nitrogen-Vacancy Hybrid-Spin Quantum Register in Diamond by Photon Arrival Time Analysis | [
"Jingyan He",
"Yu Tian",
"Zhiyi Hu",
"Runchuan Ye",
"Xiangyu Wang",
"Dawei Lu",
"Nanyang Xu"
] | quant-ph | [
"quant-ph"
] |
School of Physics & School of Microelectronics, Hefei University of Technology, Hefei, Anhui 230009, China
Research Center for Quantum Sensing, Zhejiang Lab, Hangzhou, 311000, China
Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China
School of Physics & School of Microelectronics, Hefei University of Technology, Hefei, Anhui 230009, China
Research Center for Quantum Sensing, Zhejiang Lab, Hangzhou, 311000, China
School of Physics & School of Microelectronics, Hefei University of Technology, Hefei, Anhui 230009, China
Research Center for Quantum Sensing, Zhejiang Lab, Hangzhou, 311000, China
Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China
[email protected]
Shenzhen Institute for Quantum Science and Engineering and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China
[email protected]
Research Center for Quantum Sensing, Zhejiang Lab, Hangzhou, 311000, China
§ ABSTRACT
Quantum state readout plays a pivotal role in quantum technologies, spanning applications in sensing, computation, and secure communication. In this work, we introduce a new approach for efficiently reading out the populations of hybrid-spin states in the nitrogen-vacancy center in diamond using a single laser pulse, exploiting the excited-state level anti-crossing mechanism at around 500 Gs. Reading spin state populations through this approach achieves the same outcome as traditional quantum state diagonal tomography but reduces the experimental time by more than an order of magnitude while maintaining fidelity. Moreover, this approach may be extended to encompass full-state tomography, thereby obviating the requirement for a sequence of spin manipulations and mitigating errors induced by decoherence throughout the procedure.
Direct Readout of Nitrogen-Vacancy Hybrid-Spin Quantum Register in Diamond by Photon Arrival Time Analysis
Nanyang Xu
September 9, 2024
==========================================================================================================
§ INTRODUCTION
Owing to its outstanding optical and electron-spin properties, the negatively charged nitrogen-vacancy (NV) center in diamond has emerged as a highly promising platform for solid-state qubits <cit.>. The NV center provides both an electron spin and a substitutional nitrogen nuclear spin (^14 N or ^15 N), and these spins are coupled via a hyperfine coupling <cit.>. A notable feature of this system is its suitability for manipulation as a hybrid-spin quantum register. On one hand, the electron spin exhibits properties such as optical pumping and electron-spin-state-dependent fluorescence <cit.>. This enables feasible initialization and readout of the electron spin <cit.>, even at room temperature <cit.>. On the other hand, nuclear spins offer significantly longer spin lifetimes in comparison to electrons <cit.>, making them a valuable resource for applications such as quantum memories <cit.>, computational nodes for quantum error correction <cit.>, and quantum communication <cit.>.
However, initialization and readout of nuclear spins with high fidelity remains a challenge due to the small magnetic moments of nuclear spins. To address this challenge, the electron spin often serves as an ancillary qubit, interacting with individual nuclear spins. The nuclear spin can then be polarized by coherently transferring the state from the electron spin, and read out through the reverse process <cit.>. Achieving this necessitates intricate quantum-control pulse sequences, operating at both microwave (MW) and radio (RF) frequencies, to manipulate the electron and nuclear spins, respectively. Nonetheless, owing to its weak coupling to magnetic fields, a nuclear-spin operation takes roughly three orders of magnitude longer than an electron-spin operation. As a result, controlling the nuclear-spin state is quite inefficient, and the microwave-induced heating effect hinders its application in thermally sensitive sciences.
Due to the excited state level anti-crossing (ESLAC) phenomenon of the NV center under a magnetic field around 500 Gs, the substitutional nitrogen spin can be automatically initialized via a dynamical nuclear polarization (DNP) process during optical pumping <cit.>. This mechanism is frequently employed as a standard NV hybrid-spin initialization protocol in various applications <cit.>, while the readout method has remained unchanged. In this work, we introduce a new avenue to directly obtain the spin state by analyzing the arrival times of the emitted photons within a single laser pulse. The difference in photon arrival time is also due to the ESLAC mechanism, which produces different photon time traces for the different states. Combined with the ESLAC-based DNP protocol, we demonstrate efficient state initialization and readout with a speed-up exceeding an order of magnitude; the method can further be extended to full-state tomography, reducing decoherence-induced errors.
§ PHYSICAL SYSTEM
The NV center in diamond is composed of a substitutional nitrogen atom adjacent to a carbon vacancy, as illustrated in Fig. <ref>(a). There exist two charge states, namely the neutral (NV^0) and the negatively charged (NV^-) NV color centers <cit.>. This investigation is primarily concerned with the negatively charged NV color center, whose ground state is a spin triplet denoted ^3A, with a zero-field splitting of 2.87 GHz between its spin sublevels m_s=0 and m_s=±1. Additionally, there is an excited state denoted ^3E, also a spin triplet, with a zero-field splitting of 1.4 GHz. The energy level diagram of the NV center is depicted in Fig. <ref>(b).
The NV center in diamond demonstrates a spin-dependent cycling transition of the electron spin under the influence of laser excitation <cit.>.
In Fig. <ref>(b), the spin state m_s=0 exhibits a notably low probability of undergoing intersystem crossing (ISC) <cit.> and a high likelihood of being excited to the corresponding m_s=0 state within the ^3E manifold (lifetime 12 ns) before reverting to its initial state. Conversely, the m_s=±1 excited states (lifetime 7.8 ns) have a substantial likelihood of transitioning to the intermediate ^1A singlet state (lifetime 250 ns). Once the NV center is in its metastable state, the decay back into the triplet ground state preferably occurs into the m_s=0 state with high fidelity. Under optical pumping, the electron spin state will ultimately reach m_s=0, regardless of its initial state.
Although laser illumination initializes the electron spin state, studying the fluorescence evolution over time for each state (see Fig. <ref>(c)) is insightful. If the electron spin state was m_s=0 before laser illumination, it remains there, continually undergoing cycles of optical excitation and fluorescence emission. This results in a sustained high level of fluorescence, apart from an initial decrease caused by the small ISC rate into the non-luminous, long-lived singlet state. However, if the spin state was m_s=±1, the higher ISC rate causes the electron to transition from the excited-state triplet to the singlet state during laser illumination. This transition leads to a drop in the fluorescence count rate until the NV returns to the m_s=0 ground state.
The crucial aspect for optical readout of the electron spin state is that the NV center in the m_s=0 state emits a higher number of photons within the first few hundred nanoseconds compared to when it is in the m_s=±1 state. This leads to approximately 30 % more total photon counts, which we refer to as the “signal photon”. This difference in emitted photons for m_s=0 and m_s=±1 states allows for the electron spin state of the NV center to be efficiently read out at room temperature <cit.>. The two polarized electron spin states, m_s=0 and m_s=1, correspond to the upper and lower fluorescence boundaries of the NV system, denoted as L_0 and L_1, respectively. Any superposition of these two states results in a general photon count L that falls between these two fluorescence boundaries when measured repeatedly. L can be expressed as a function of L_0 and L_1 by L=c_0 L_0+c_1 L_1,
where c_0 and c_1 denote the respective probabilities of the states m_s=0 and m_s=1, and c_0+c_1=1. Consequently, the electron spin state can be ascertained through the measurement of fluorescence counts.
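For example, once L_0 and L_1 have been calibrated, the averaged count level obtained from repeated measurements inverts directly to the populations:
c_0 = (L - L_1)/(L_0 - L_1), c_1 = 1 - c_0.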
The achievement of single-shot readout of an individual nuclear spin has been successfully demonstrated using a single NV center in diamond <cit.>. This was realized in the NV hybrid-spin system, where the NV electron spin is coupled to the ^14N nuclear spin, itself a spin triplet. The hybrid-spin system is illustrated in Fig. <ref>(b). When a magnetic field B is applied along the NV defect axis, the Hamiltonians of the ground state and the excited state share the same form, H=DŜ^2_z+γ_eBŜ_z+γ_nBÎ_z+AŜ·Î. Here, D represents the zero-field splitting, γ_e and γ_n denote the gyromagnetic ratios of the electron and nuclear spins, respectively, Ŝ and Î are the electron and nuclear spin operators, and A is the hyperfine interaction strength. At a magnetic field strength of 500 Gs, corresponding to the ESLAC region, electron-nuclear spin flip-flops occur, leading to polarization into the m_s=0 state with m_I=+1.
Although the electron spin is a spin-1 system, in most quantum information applications a two-level subspace containing only m_s=0 and m_s=+1 (or -1) is used to form a qubit <cit.>. Specifically for quantum sensing, a spin-1 state (m_s=±1) has a relatively larger magnetic moment, which can enhance magnetic sensing performance. However, the readout of the m_s=+1 and m_s=-1 states still relies on the m_s=0 state <cit.>. Here, we consider only the subspace of m_s=0 and m_s=-1 within the ground spin-triplet state and denote these states as |0⟩_e and |1⟩_e, respectively, corresponding to the electron spin qubit. For the nuclear spin qubit of the ^14N nucleus, we use |↑⟩_n and |↓⟩_n to represent the states with m_I=0 and m_I=+1, as depicted in Fig. <ref>(b). The resulting polarized spin state achieved in this system is |0,↓⟩.
In a similar vein, the total fluorescence counts, denoted as L, are directly related to the population of the four basis states i.e., |0,↑⟩, |0,↓⟩, |1,↑⟩, and |1,↓⟩ as follows:
L=c_0↑ L_0↑+c_0↓ L_0↓+c_1↑ L_1↑+c_1↓ L_1↓,
where c_0↑, c_0↓, c_1↑ and c_1↓ denote the respective probabilities of the corresponding states, and c_0↑+c_0↓+c_1↑+c_1↓=1.
However, Eq. (<ref>) is inadequate for the complete readout of the hybrid-spin states. To achieve spin state readout, the conventional approach involves the application of a set of unitary operations designed to transform the population, followed by another round of fluorescence count measurements. The specific experimental sequence is shown in Table <ref>.
The experimental process described above can be succinctly summarized by the following matrix equation
[[ L_0↑ L_0↓ L_1↑ L_1↓; L_0↑ L_1↓ L_1↑ L_0↓; L_0↓ L_0↑ L_1↑ L_1↓; L_0↑ L_1↑ L_0↓ L_1↓ ]] · [ c_0↑; c_0↓; c_1↑; c_1↓ ] = [ L_0; L_1; L_2; L_3 ].
Utilizing Eq. (<ref>), we can calculate the probabilities of different states and consequently determine the state being read out.
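As a sketch of this inversion step, the following Fortran routine solves the 4×4 system of the equation above by Gaussian elimination with partial pivoting; the calibrated levels and measured counts used here are placeholder numbers, not experimental values.

program population_inversion
   ! Solve A*c = L for the four state populations (equation above).
   implicit none
   integer, parameter :: n = 4
   real(8) :: A(n,n), rhs(n), c(n)
   real(8) :: factor
   integer :: i, k, p
   ! Placeholder calibrated count levels L_0u, L_0d, L_1u, L_1d (arbitrary units).
   real(8), parameter :: L0u = 1.00d0, L0d = 1.05d0, L1u = 0.72d0, L1d = 0.70d0

   ! Measurement matrix: rows follow the four pulse sequences of the table.
   A(1,:) = [L0u, L0d, L1u, L1d]
   A(2,:) = [L0u, L1d, L1u, L0d]
   A(3,:) = [L0d, L0u, L1u, L1d]
   A(4,:) = [L0u, L1u, L0d, L1d]
   rhs    = [0.90d0, 0.88d0, 0.93d0, 0.91d0]   ! measured totals L_0..L_3 (placeholders)

   ! Gaussian elimination with partial pivoting.
   do k = 1, n-1
      p = maxloc(abs(A(k:n,k)), dim=1) + k - 1
      if (p /= k) then
         A([k,p],:) = A([p,k],:)
         rhs([k,p]) = rhs([p,k])
      end if
      do i = k+1, n
         factor   = A(i,k)/A(k,k)
         A(i,k:n) = A(i,k:n) - factor*A(k,k:n)
         rhs(i)   = rhs(i)   - factor*rhs(k)
      end do
   end do
   do i = n, 1, -1   ! back substitution
      c(i) = (rhs(i) - sum(A(i,i+1:n)*c(i+1:n))) / A(i,i)
   end do

   print '(a,4f10.4)', 'populations c = ', c
end program population_inversion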
§ DIRECT SPIN-READOUT SCHEME
Here, our primary focus centers on the photon time trace, which offers a detailed account of how fluorescence evolves over time for each state. We have effectively harnessed this approach to successfully read out the electron spin state, as previously demonstrated <cit.>.
At a magnetic field strength of 500 Gs, a noteworthy phenomenon called ESLAC emerges, as depicted in Fig. <ref>(a). Since the states |0,↑⟩ and |1,↓⟩ exchange in the excited state due to the ESLAC mechanism, the nuclear-spin populations become unbalanced and are polarized in the same process <cit.>. Crucially, the transition from |0,↓⟩ to the excited state conserves the nuclear spin, resulting in sustained high fluorescence until the small ISC rate leads to a fluorescence decrease. Meanwhile, the excited state reached from |0,↑⟩ has a high probability of converting to |1,↓⟩ together with a high ISC rate, so the fluorescence rapidly drops until the spin in the metastable state returns to the ground state |0,↓⟩. Consequently, the hybrid-spin system in the state |0,↓⟩ emits a higher number of photons than in the state |0,↑⟩. Incorporating the optically polarized electron spin mechanism described in the previous section, where the electron spin m_s=±1 emits less fluorescence than m_s=0, it becomes evident that there are differences in fluorescence intensities among the four levels |0,↑⟩, |0,↓⟩, |1,↑⟩, and |1,↓⟩.
The nuclear spin-dependent photon time traces are depicted in Fig. <ref>(b), and these traces are acquired after activating the readout laser. They are recorded using a custom-built time tagger with a resolution of 2 ns, implemented on a commercial field-programmable gate array module <cit.>. If we assume there are n time bins within the detection window, and m^i (where i ranges from 1 to n) represents the photon counts residing in the i-th time bin, then an arbitrary photon time trace can be expressed as a one-dimensional vector M=[m^1, ⋯, m^i, ⋯, m^n]^T. The fluorescence count m^i obtained from the readout is intricately linked to the population of basis states as
m^i=l^i_0↑c_0↑+l^i_0↓c_0↓+l^i_1↑c_1↑+l^i_1↓c_1↓.
In this context, l^i_0↑, l^i_0↓, l^i_1↑ and l^i_1↓ denote the photon counts present in the i-th time bin for the four basis states, respectively. Since there are n time bins, we can derive n functions, which can be represented in the matrix form
[[ l^1_0↑ l^1_0↓ l^1_1↑ l^1_1↓; l^2_0↑ l^2_0↓ l^2_1↑ l^2_1↓; ⋮ ⋮ ⋮ ⋮; l^i_0↑ l^i_0↓ l^i_1↑ l^i_1↓; ⋮ ⋮ ⋮ ⋮; l^n_0↑ l^n_0↓ l^n_1↑ l^n_1↓ ]] · [ c_0↑; c_0↓; c_1↑; c_1↓ ] = [ m^1; m^2; ⋮; m^i; ⋮; m^n ].
This function describes how an unknown state, represented as a vector M=[m^1, ⋯, m^i, ⋯, m^n]^T, can be expressed using a set of basis vectors L_0↑,0↓,1↑,1↓=[l^1_0↑,0↓,1↑,1↓, ⋯, l^i_0↑,0↓,1↑,1↓, ⋯, l^n_0↑,0↓,1↑,1↓]^T within a Hilbert space. In matrix form, the basis vectors are collected as L=[L_0↑, L_0↓, L_1↑, L_1↓]. The coefficients c=[c_0↑, c_0↓, c_1↑, c_1↓]^T correspond to the probabilities associated with the four basis states.
As an alternative to linear inversion methods, maximum likelihood estimation has been widely employed <cit.>. In this work, we utilize optimization techniques to perform a direct analysis of the photon time trace, with the objective of determining the coefficient vector c^optimal that best reproduces the measured trace with respect to a specific norm. The optimization is subject to a constraint condition, defined as
min_c ‖ L · c - M ‖_2 ,
s.t. c^T · c = 1.
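One possible numerical treatment of this constrained fit is sketched below: a plain projected-gradient iteration on ‖L·c - M‖_2 that renormalises c onto the constraint surface after every step. The synthetic basis traces, step size, and iteration count are placeholders and would need tuning against real data.

program trace_fit
   ! Projected-gradient sketch for  min ||L*c - M||_2  subject to  c'c = 1.
   implicit none
   integer, parameter :: n = 500, nstate = 4, nsteps = 50000
   real(8) :: L(n, nstate), M(n), c(nstate), r(n), g(nstate)
   real(8), parameter :: eta = 5.0d-4          ! step size (placeholder)
   integer :: i, it

   ! Placeholder basis traces: decaying exponentials standing in for the
   ! calibrated photon time traces of the four basis states.
   do i = 1, n
      L(i,1) = exp(-i/300.d0)
      L(i,2) = exp(-i/260.d0)
      L(i,3) = 0.8d0*exp(-i/180.d0)
      L(i,4) = 0.7d0*exp(-i/150.d0)
   end do
   M = 0.8d0*L(:,1) + 0.6d0*L(:,4)             ! synthetic measured trace (c'c = 1)

   c = 0.5d0                                    ! initial guess
   do it = 1, nsteps
      r = matmul(L, c) - M                      ! residual
      g = 2.d0*matmul(transpose(L), r)          ! gradient of ||Lc - M||^2
      c = c - eta*g
      c = c / sqrt(sum(c*c))                    ! project back onto c'c = 1
   end do

   print '(a,4f10.4)', 'fitted coefficients: ', c
end program trace_fit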
§ EXPERIMENTAL REALIZATION
In our experimental setup, we conducted tests using an NV center embedded in a bulk chemical vapor deposition diamond. The experiments were carried out at room temperature on a custom-built confocal microscopy system. The NV center utilized in these experiments was positioned at a depth of 10 μ m within the diamond and was addressed and manipulated using our in-house optically detected magnetic resonance system. A 532 nm green laser beam is passed through an acousto-optic modulator to facilitate switching and focused onto the NV center through an oil-immersion objective lens; the emitted fluorescence is collected by an avalanche photodiode (APD). The output signal from the APD is detected by a custom-built time tagger with a resolution of 2 ns. To create a static magnetic field of approximately 500 Gs, a columnar neodymium magnet is employed. This field is applied parallel to the NV axis (the z-direction), so that it exclusively affects the z-term of the electron spin Ŝ_z, splitting the m_s=±1 sublevels and enabling the ESLAC, which facilitates optical polarization of the nuclear spin. To manipulate the electron and nuclear spins for general-purpose measurements, MW and RF sources are utilized, respectively. The frequencies of RF_1 (5.102067 MHz) and RF_2 (2.941124 MHz) were calibrated through electron-nuclear double resonance experiments.
To validate the procedure, we first calibrated the photon time traces of the four basis states with a substantial number of measurements, totaling 10^9. These calibrated traces are collectively represented as a set of basis vectors, denoted {L_0↑, L_0↓, L_1↑, L_1↓}. Subsequently, we prepared test states, namely |0,↑⟩, |0,↓⟩, |1,↑⟩, and |1,↓⟩, through different operations; the fidelities of the test states are all above 0.99 (see Appendix <ref>). We then carried out laser-based readouts of these test states, totaling 10^7 measurements, and the results were recorded as M.
Using the optimization method, we obtained the four coefficients c_0↑, c_0↓, c_1↑, and c_1↓, and read out the spin state populations with fidelities of 0.9972 ± 0.00042, 0.9963 ± 0.00043, 0.9981 ± 0.00038, and 0.9982 ± 0.00038, respectively. Here, we characterize the fidelity of the population readout by Eq. (<ref>), obtained from the full state tomography calculation <cit.>.
F_p=(c^ th, c^ exp)/√((c^ th, c^ th)(c^ exp, c^ exp)),
where c^th (c^exp) is the theoretical (experimentally reconstructed) population vector formed by [c_0↑, c_0↓, c_1↑, c_1↓]^T.
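In code, this figure of merit is just a normalised inner product; a small sketch with placeholder vectors is:

program population_fidelity
   ! F_p = (c_th . c_exp) / sqrt((c_th . c_th)(c_exp . c_exp)), as in the equation above.
   implicit none
   real(8) :: c_th(4), c_exp(4), fp

   c_th  = [0.0d0, 1.0d0, 0.0d0, 0.0d0]            ! e.g. ideal |0,down> preparation
   c_exp = [0.002d0, 0.995d0, 0.001d0, 0.002d0]    ! placeholder reconstructed populations

   fp = dot_product(c_th, c_exp) / &
        sqrt(dot_product(c_th, c_th)*dot_product(c_exp, c_exp))
   print '(a,f8.5)', 'population readout fidelity F_p = ', fp
end program population_fidelity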
Now, we consider the population readout of an arbitrary state. In Fig. <ref>(b) the time traces of the basis states are calibrated, and we assume that the time trace of an arbitrary state is simply a superposition of those of the four basis states, which is verified by the simulations in the Appendix. Here, we perform one more experiment employing superposition states of the electron or nuclear spins, and the results are shown in Table <ref>.
The fidelities obtained for the input states 1/√(2)(|0,↑⟩+|0,↓⟩), 1/√(2)(|0,↓⟩+|1,↓⟩) and 1/√(2)(|1,↑⟩+|1,↓⟩), quantifying the agreement between the prepared and reconstructed populations, are 0.99145 ± 0.00065, 0.99206 ± 0.00050 and 0.98845 ± 0.00083, respectively. The observed differences between the experimental and theoretical results can primarily be attributed to statistical variations in photon detection and to control errors arising from pulse imperfections during state preparation and tomography.
§ TIME-COST ANALYSIS
The direct readout method sets itself apart from the conventional approach by eliminating the need for a spin operation sequence. In the general method, reading out the nuclear spin state of the NV hybrid-spin quantum register requires at least four operations plus laser-based readout. Moreover, because the nuclear spins are effectively isolated from their surrounding environment, manipulating them entails executing a lengthy RF sequence lasting approximately 100 μs.
In practical experiments, the most significant factor contributing to the fidelity loss is the shot noise σ_M. Because the photon counts follow a Poisson distribution, the noise is correlated with the photon signal, i.e., σ_M=[√(m^1), ⋯, √(m^i), ⋯, √(m^n)]^T. This loss can be mitigated by increasing the number of measurement sweeps. Here, we conducted simulations based on experimental data to analyze the relationship between fidelity, the number of sweeps, and the total experiment time. The specific simulation procedure is as follows.
Initially, we obtain the photon time traces of the four basis states from 10^9 measurements. These traces are denoted {L_0↑, L_0↓, L_1↑, L_1↓} and are assumed to be error-free; the total number of measurements in this calibration step is denoted S1. In the second step, we randomly generate a set of target populations (c_0↑, c_0↓, c_1↑, c_1↓). Next, we generate theoretical data m^i_th for each time bin using m^i_th=(S2/S1)(c_0↑ l^i_0↑+c_0↓ l^i_0↓+c_1↑ l^i_1↑+c_1↓ l^i_1↓), where S2 represents the number of simulated experiment sweeps. To mimic the real experimental scenario, we add photon counting noise δ m^i_th to the theoretical data, giving the simulated experimental data m^i_exp; the noise is drawn from a Gaussian distribution bounded within [-√(m^i_th), √(m^i_th)]. By changing the value of S2, we can generate experimental data for different numbers of sweeps, which allows us to study the relationship between fidelity, sweeps, and experiment time. Finally, the calibrated photon time traces and the generated experimental data, normalized by their maximum values, are used as inputs to the optimization model for analysis.
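The noise-injection step of this procedure can be sketched as follows; the basis traces and sweep numbers are synthetic placeholders, and Gaussian deviates of width √(m_th) are generated with a Box-Muller transform.

program synthetic_trace
   ! Sketch of the simulation step: scale calibrated traces to S2 sweeps,
   ! then add shot noise of standard deviation sqrt(m_th) to each time bin.
   implicit none
   integer, parameter :: n = 500
   real(8), parameter :: s1 = 1.0d9, s2 = 1.0d5      ! calibration / simulated sweeps
   real(8) :: l0u(n), l0d(n), l1u(n), l1d(n)
   real(8) :: c(4), m_th(n), m_exp(n), u1, u2, noise
   integer :: i

   ! Placeholder calibrated traces (counts accumulated over s1 sweeps).
   do i = 1, n
      l0u(i) = 1.0d6*exp(-i/300.d0)
      l0d(i) = 1.1d6*exp(-i/280.d0)
      l1u(i) = 0.8d6*exp(-i/180.d0)
      l1d(i) = 0.7d6*exp(-i/150.d0)
   end do

   c = [0.1d0, 0.7d0, 0.1d0, 0.1d0]                  ! example target populations

   do i = 1, n
      m_th(i) = (s2/s1)*(c(1)*l0u(i) + c(2)*l0d(i) + c(3)*l1u(i) + c(4)*l1d(i))
      call random_number(u1)
      call random_number(u2)
      u1 = max(u1, 1.0d-12)                           ! avoid log(0)
      noise = sqrt(-2.d0*log(u1))*cos(8.d0*atan(1.d0)*u2)   ! standard normal deviate
      m_exp(i) = m_th(i) + sqrt(m_th(i))*noise        ! shot-noise-perturbed counts
   end do

   print '(a,2es12.4)', 'first simulated bin (theory, noisy): ', m_th(1), m_exp(1)
end program synthetic_trace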
The evolution of fidelity F_p as a function of the number of sweeps under a magnetic field strength of 500 Gs is depicted in Fig. <ref>(a). When the fidelity level is not very high, the direct readout method requires fewer sweeps, especially when fidelity is less than 0.90. As the number of sweeps increases, the operation time of the general method grows linearly. In contrast, the direct readout method has a shorter readout time because it only includes the laser component and no additional operational time. Consequently, for the same number of sweeps, the direct readout method requires less time.
To further elucidate the relationship between fidelity and readout time, Fig. <ref>(b) presents the corresponding data. When compared to the conventional general method, our approach demonstrates a substantial enhancement in time efficiency. For instance, when striving for a fidelity of 0.95, the direct readout method requires only 6.83× 10^8 ns, whereas the general method demands a significantly longer time of 2.24×10^10 ns. Therefore, in this particular scenario, our method accelerates the experimental process by a factor of 32. This time-saving achievement is primarily realized by reducing the number of essential spin operations, making it particularly advantageous for systems characterized by shorter decoherence times.
§ DEPENDENCE ON MAGNETIC FIELD
The ESLAC phenomenon is a crucial aspect of our method. Optical excitation at ESLAC results in state-selective spin mixing between the NV electron and nuclear spin. To gain deeper insights into our approach, we conducted measurements of the photon time trace under five different magnetic fields, comprising 10^8 measurements each. Fig. <ref> presents simulation data for these various magnetic field strengths, illustrating the relationship between fidelity F_p and sweeps. The evolution of F_p as a function of sweeps can be well-fitted using the functions F_p=1-e^(as^2+bs+c) and F_p=1-e^(a(ι-δ)^2+b(ι-δ)+c). In the field regime close to 500 Gs (ESLAC point), we observe an optimal experimental effect where achieving the same fidelity level requires the fewest sweeps and the shortest time. This phenomenon arises due to the excited state electron-nuclear spin flip-flop process, induced by hyperfine interactions.
The spin flip frequency is closely related to the magnetic field strength, and it reaches an extreme value at approximately 500 Gs. Under the ESLAC conditions, the non-eigenstate |0,↓⟩ completely transforms into |1,↑⟩ and then back again, resulting in the most substantial differences in the photon time traces among the four initial states. In fact, even minor state mixing in the excited state can lead to noticeable distinctions in the photon time traces.
In the evaluation of photon time traces within the spin readout mechanism, we examine the worst-case noise magnification ratio κ<cit.>, which exhibits a quadratic correlation with the magnetic field, as shown in Fig. <ref>. This observed change pattern aligns with the experimental results.
§ DISCUSSION AND CONCLUSION
In summary, we have effectively showcased a novel approach for achieving single-shot readout of nuclear spins linked to room-temperature NV centers in diamond. Our method centers around the direct analysis of photon time traces, with a specific emphasis on the discernible characteristics present during the ESLAC. When contrasted with conventional techniques, our approach introduces a substantial time-saving advantage, reducing the necessary time by a factor of 32, thanks to the elimination of spin operations prior to laser readout. While the ESLAC point represents the optimal condition for our method, it can also be applied to read out spin states whenever discernible differences in photon time traces are present.
This method bestows benefits akin to those derived from diagonal element tomography experiments, and it can also be effectively applied to full-state tomography experiments. In state tomography experiments, it mitigates the requirement for intricate nuclear spin manipulation, offering a clear advantage in scenarios where preserving high fidelity poses challenges due to short decoherence times. This approach streamlines the experimental setup while retaining its capacity to uphold fidelity, rendering it a valuable tool for applications in quantum information processing and other domains that involve nuclear spin readout and manipulation.
§ ACKNOWLEDGMENTS
This work was supported by the Fundamental Research Funds for the Central Universities (Grant No. 226-2023-00137), the National Natural Science Foundation of China (Grant Nos. 92265114, 92265204, 12104213), the National Key Research and Development Program of China (2019YFA0308100) and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302200).
§ SIMULATION OF TIME TRACE
At a magnetic field strength of 500 Gs, the ESLAC phenomenon emerges (see Fig. <ref>(a)), which influences the fluorescence intensities of the four levels |0,↑⟩, |0,↓⟩, |1,↑⟩, and |1,↓⟩. Here, we provide numerical simulations of the time trace for different ESLAC rates (see Fig. <ref>). Furthermore, from the simulation of arbitrary superposition states in Fig. <ref>, it can be observed that the trace of an arbitrary state is straightforwardly a superposition of those of the basis states.
§ TWO-QUBIT FULL STATE TOMOGRAPHY
The density matrix ρ of the two-qubit system is as follows:
ρ_0=[[ c_0↑ … a_|0↑⟩⟨ 1↓|+j b_|0↑⟩⟨ 1↓|; a_|0↑⟩⟨ 0↓|-j b_|0↑⟩⟨ 0↓| … a_|0↓⟩⟨ 1↓|+j b_|0↓⟩⟨ 1↓|; a_|0↑⟩⟨ 1↑|-j b_|0↑⟩⟨ 1↑| … a_|1↑⟩⟨ 1↓|+j b_|1↑⟩⟨ 1↓|; a_|0↑⟩⟨ 1↓|-j b_|0↑⟩⟨ 1↓| … c_1↓; ]].
The procedure for measuring the diagonal elements c_0↑, c_0↓, c_1↑ and c_1↓ is explained in detail in the second part of the main text. The quantities a_|0↑⟩⟨ 0↓|, |0↑⟩⟨ 1↑|, |0↑⟩⟨ 1↓|, |0↓⟩⟨ 1↑|, |0↓⟩⟨ 1↓|, |1↑⟩⟨ 1↓| and b_|0↑⟩⟨ 0↓|, |0↑⟩⟨ 1↑|, |0↑⟩⟨ 1↓|, |0↓⟩⟨ 1↑|, |0↓⟩⟨ 1↓|, |1↑⟩⟨ 1↓| denote the real and imaginary parts of the off-diagonal elements |0↑⟩⟨ 0↓|, |0↑⟩⟨ 1↑|, |0↑⟩⟨ 1↓|, |0↓⟩⟨ 1↑|, |0↓⟩⟨ 1↓| and |1↑⟩⟨ 1↓|, respectively.
To measure the off-diagonal elements, we apply microwave (MW) or radio-frequency (RF) pulse sequences that transform the off-diagonal elements into diagonal ones, which are then read out by laser. The off-diagonal experimental sequences are shown in Table <ref>.
Next, we take the off-diagonal element |0↑⟩⟨ 1↓| as an example for a detailed illustration. First, we apply a MW_2 π pulse, and the density matrix evolves as follows:
ρ =[[ 1 0 0 0; 0 0 0 -j; 0 0 1 0; 0 -j 0 0; ]] ρ_0 [[ 1 0 0 0; 0 0 0 j; 0 0 1 0; 0 j 0 0; ]]
=[[ c_0↑ ja_|0↑⟩⟨ 1↓|- b_|0↑⟩⟨ 1↓| … …; -ja_|0↑⟩⟨ 1↓|+b_|0↑⟩⟨ 1↓| c_1↓ … …; … … c_1↑ …; … … … c_0↓; ]].
Then, applying RF_1 π/2 pulses with four different phases, the 2×2 sub-density matrices ρ_1,2,3,4^sub evolve as follows:
ρ_0^sub =[[ c_0↑ ja_|0↑⟩⟨ 1↓|- b_|0↑⟩⟨ 1↓|; -ja_|0↑⟩⟨ 1↓|+b_|0↑⟩⟨ 1↓| c_1↓; ]],
ρ_1^sub =[[ 1 -j; -j 1; ]]
ρ_0^sub[[ 1 j; j 1; ]]
=[[ c_0 ↑+c_1 ↓/2-a_|0↑⟩⟨ 1↓| j/2(c_0 ↑-c_1 ↓)-b_|0↑⟩⟨ 1↓|; -j/2(c_0 ↑-c_1 ↓)+b_|0↑⟩⟨ 1↓| c_0 ↑+c_1 ↓/2+a_|0↑⟩⟨ 1↓|; ]],
ρ_2^sub =[[ 1 j; j 1; ]]ρ_0^sub[[ 1 -j; -j 1; ]]
=[[ c_0 ↑+c_1 ↓/2+a_|0↑⟩⟨ 1↓| -j/2(c_0 ↑-c_1 ↓)-b_|0↑⟩⟨ 1↓|; j/2(c_0 ↑-c_1 ↓)-b_|0↑⟩⟨ 1↓| c_0 ↑+c_1 ↓/2-a_|0↑⟩⟨ 1↓|; ]],
ρ_3^sub =[[ 1 -1; 1 1; ]]ρ_0^sub[[ 1 1; -1 1; ]]
=[[ c_0 ↑+c_1 ↓/2+b_|0↑⟩⟨ 1↓| 1/2(c_0 ↑-c_1 ↓)+j a_|0↑⟩⟨ 1↓|; (c_0 ↑-c_1 ↓)/2-ja_|0↑⟩⟨ 1↓| c_0 ↑+c_1 ↓/2-b_|0↑⟩⟨ 1↓|; ]],
ρ_4^sub =[[ 1 1; -1 1; ]]ρ_0^sub[[ 1 -1; 1 1; ]]
=[[ c_0 ↑+c_1 ↓/2-b_|0↑⟩⟨ 1↓| -(c_0 ↑-c_1 ↓)/2+ja_|0↑⟩⟨ 1↓|; -(c_0 ↑-c_1 ↓)/2-ja_|0↑⟩⟨ 1↓| c_0 ↑+c_1 ↓/2+b_|0↑⟩⟨ 1↓|; ]].
The off-diagonal elements have now been transferred to diagonal elements, which can be read out optically. Following a laser readout, we obtain:
X_1 =(c_0↑+c_1↓/2-a_|0↑⟩⟨ 1↓|)L_0↑
+(c_0↑+c_1↓/2+a_|0↑⟩⟨ 1↓|)L_0 ↓+c_1↑L_1 ↑+c_0 ↓L_1 ↓,
X_2 =(c_0↑+c_1↓/2+a_|0↑⟩⟨ 1↓|)L_0↑
+(c_0↑+c_1↓/2-a_|0↑⟩⟨ 1↓|)L_0 ↓+c_1↑L_1 ↑+c_0 ↓L_1 ↓,
Y_1 =(c_0↑+c_1↓/2+b_|0↑⟩⟨ 1↓|)L_0↑
+(c_0↑+c_1↓/2-b_|0↑⟩⟨ 1↓|)L_0 ↓+c_1↑L_1 ↑+c_0 ↓L_1 ↓,
Y_2 =(c_0↑+c_1↓/2-b_|0↑⟩⟨ 1↓|)L_0↑
+(c_0↑+c_1↓/2+b_|0↑⟩⟨ 1↓|)L_0 ↓+c_1↑L_1 ↑+c_0 ↓L_1 ↓,
a_|0↑⟩⟨ 1↓|=-X_1+X_2/2(L_0 ↑-L_0 ↓),
b_|0↑⟩⟨ 1↓|=Y_1-Y_2/2(L_0 ↑-L_0 ↓).
In this work, all off-diagonal elements are calculated by the same method, and the tomography results are shown in Fig. <ref>.
§ REFERENCES
M. W. Doherty et al., Phys. Rev. X 6, 041035 (2016).
K. Nemoto et al., Phys. Rev. X 4, 031022 (2014).
R. Hanson et al., Science 320, 352 (2008).
R. T. Harley, M. J. Henderson, and R. M. Macfarlane, J. Phys. C: Solid State Phys. 17, L233 (1984).
I. Lovchinsky et al., Science 351, 836 (2016).
T. Staudacher et al., Science 339, 561 (2013).
D. R. Glenn et al., Nature 555, 351 (2018).
D. Suter and F. Jelezko, Prog. Nucl. Magn. Reson. Spectrosc. 98-99, 50 (2017).
J. Cramer et al., Nat. Commun. 7, 11526 (2016).
H. J. Mamin et al., Science 339, 557 (2013).
E. V. Oort, N. B. Manson, and M. Glasbeek, J. Phys. C: Solid State Phys. 21, 4385 (1988).
C. J. Terblanche, E. C. Reynhardt, S. A. Rakitianski, and J. A. Van Wyk, Solid State Nucl. Magn. Reson. 19, 107 (2001).
M. V. G. Dutt et al., Science 316, 1312 (2007).
P. C. Maurer et al., Science 336, 1283 (2012).
L. Childress et al., Science 314, 281 (2006).
J. Cramer et al., Nat. Commun. 7, 11526 (2016).
A. Reiserer et al., Phys. Rev. X 6, 021040 (2016).
N. Kalb et al., Science 356, 928 (2017).
M. Pfender et al., Nat. Commun. 10, 594 (2019).
A. I. Lvovsky, J. Opt. B: Quantum Semiclassical Opt. 6, S556 (2004).
M. Pfender et al., Nat. Commun. 10, 594 (2019).
D. Hopper, H. Shulevitz, and L. Bassett, Micromachines 9, 437 (2018).
G. D. Fuchs et al., Phys. Rev. Lett. 101, 117601 (2008).
V. Jacques et al., Phys. Rev. Lett. 102, 057403 (2009).
A. Dréau et al., Phys. Rev. Lett. 110, 060502 (2013).
P. R. Zangara et al., Proc. Natl. Acad. Sci. U.S.A. 116, 2512 (2019).
P. Qian et al., Phys. Rev. A 106, 033506 (2022).
F. Jelezko and J. Wrachtrup, Phys. Status Solidi A 203, 3207 (2006).
M. L. Goldman et al., Phys. Rev. Lett. 114, 145502 (2015).
X. Lin et al., Front. Phys. 18, 21301 (2023).
D. Suter and F. Jelezko, Prog. Nucl. Magn. Reson. Spectrosc. 98-99, 50 (2017).
M. Gulka et al., Nat. Commun. 12, 4421 (2021).
W. S. Warren, Science 277, 1688 (1997).
Y. Song et al., Photonics Res. 8, 1289 (2020).
P. Neumann et al., Science 329, 542 (2010).
G.-Q. Liu et al., Phys. Rev. Lett. 118, 150504 (2017).
G. Gillard, E. Clarke, and E. A. Chekhovich, Nat. Commun. 13, 4048 (2022).
N. Xu et al., Phys. Rev. Appl. 12, 024055 (2019).
P. Qian et al., Appl. Phys. Lett. 118, 084001 (2021).
B. Smeltzer, J. McIntyre, and L. Childress, Phys. Rev. A 80, 050302(R) (2009).
M. Steiner et al., Phys. Rev. B 81, 035205 (2010).
A. Dréau et al., Phys. Rev. B 85, 134107 (2012).
R. Ye et al., Rev. Sci. Instrum. 93, 063102 (2022).
K. Banaszek, G. M. D'Ariano, M. G. A. Paris, and M. F. Sacchi, Phys. Rev. A 61, 010304(R) (1999).
J. Zhang, S. S. Hegde, and D. Suter, Phys. Rev. Lett. 130, 090801 (2023).
X. Wang, C.-S. Yu, and X. X. Yi, Phys. Lett. A 373, 58 (2008).
Y. I. Bogdanov et al., Phys. Rev. Lett. 105, 010404 (2010).
C. Shen et al., Phys. Rev. A 94, 052327 (2016).
C. T. Nguyen et al., Phys. Rev. Lett. 123, 183602 (2019).
C. E. Bradley et al., Phys. Rev. X 9, 031045 (2019).
H. J. Mamin et al., Phys. Rev. Lett. 113, 030803 (2014).
B. A. Myers, A. Ariyaratne, and A. C. Bleszynski Jayich, Phys. Rev. Lett. 118, 197201 (2017).
E. Bauch et al., Phys. Rev. X 8, 031025 (2018).
|
http://arxiv.org/abs/2409.02082v1 | 20240903173250 | Effects of fetch length on turbulent boundary layer recovery past a step-change in surface roughness | [
"Martina Formichetti",
"Dea D. Wangsawijaya",
"Sean Symon",
"Bharathram Ganapathisubramani"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
Effects of fetch length on turbulent boundary layer recovery past a step-change in surface roughness
Martina Formichetti, Dea D. Wangsawijaya, Sean Symon, Bharathram Ganapathisubramani
====================================================================================================
§ ABSTRACT
Recent studies focusing on the response of turbulent boundary layers (TBL) to a step-change in roughness have provided insight into the scaling and characterisation of TBLs and the development of the internal layer. Although various step-change combinations have been investigated, ranging from smooth-to-rough to rough-to-smooth, the “minimum" required roughness fetch length over which the TBL returns to its homogeneously rough behaviour remains unclear. Moreover, the relationship between a finite- and infinite-fetch roughness function (and the equivalent sandgrain roughness) is also unknown. In this study, we determine the minimum “equilibrium fetch length" for TBL developing over a smooth-to-rough step-change as well as the expected error in local skin friction if the fetch length is under this minimum threshold. An experimental study is carried out where the flow is initially developed over a smooth wall, and then a step-change is introduced using patches of P24 sandpaper. 12 roughness fetch lengths are tested in this study, systematically increasing from L = 1δ_2 up to L = 39δ_2 (where L is the roughness fetch length and δ_2 is the TBL thickness of the longest fetch case), measured over a range of Reynolds numbers (4·10^2 ≤ Re_τ≤ 2·10^5). Results show that the minimum fetch length needed to achieve full equilibrium recovery is around 20δ_2. Furthermore, we observe that C_f recovers to within 10% of its recovered value for fetch lengths ≥ 5δ_2. This information allows us to incorporate the effects of roughness fetch length on the skin friction and roughness function.
§ INTRODUCTION
Turbulent Boundary Layers (TBLs) developing over rough walls encompass many engineering applications. Studying this phenomenon is crucial for the performance evaluation of an engineering system. For example, in the aeronautical or automotive industry, the manipulation of a TBL using a surface treatment (i.e. “roughness") may result in drag reduction, <cit.>. On the other hand, in the wind energy sector, an Atmospheric Boundary Layer (ABL) in neutral conditions developing over a wind farm behaves like a large-scale TBL over “roughness". Understanding the physics of this flow leads to more accurate wind power predictions and strategic site selections, <cit.>.
A realistic representation of a rough-wall TBL in these applications hardly ever involves a “homogeneous" rough wall. In some scenarios, it can be better approximated with a streamwise transition in roughness. For example, the roughness on a ship hull (due to biofouling and coating deterioration) incurs at various roughness length scales and sites, resulting in multiple streamwise transitions in roughness, affecting the TBL developing over it. At the same time, when analysing sites for wind farm locations we might encounter areas of complex terrain where we see a combination of forests and plains or sea and coastline. These variations significantly affect the behaviour of the ABL and, consequently, the drag production near the surface.
The main change occurring in a TBL over a rough wall compared to one over a smooth wall is an increase in Wall Shear Stress (WSS) and, consequently, a momentum deficit Δ U^+, characterised by a vertical shift in the logarithmic layer of the streamwise mean velocity profile, which, for fully rough flows, is defined as follows:
U^+=1/κln(y^+)+B-Δ U^+ = 1/κln( y/k_s)+B_FR,
where κ≈ 0.39, B≈4.3, and B_FR=8.5. Equation <ref> shows that the main two parameters used to scale TBLs over rough walls are k_s as the length scale, and the friction velocity u_τ (see <cit.> or other similar works for the details on the scaling arguments). A surface with arbitrary representative roughness height k is associated with a length scale k_s, as shown in figure <ref>, which affects the logarithmic layer of the mean velocity profile in the same way as a surface covered by an ideally uniform sand-grain type of roughness with physical height k_s. Its definition is given in <cit.> and <cit.> and some examples of its usage can be found in <cit.> and <cit.>. This height is usually calculated by taking a point measurement in the logarithmic layer of a TBL and using equation <ref>, with the main assumption being that the flow is within the fully rough regime. Another method of calculating k_s is given by <cit.>, which consists of an iterative procedure to obtain a direct relation between the surface friction and k_s. This method assumes that the TBL adheres with the outer layer similarity (see <cit.>) thus, the TBL is in equilibrium with the surface texture.
The response of the WSS after an abrupt change in roughness and its recovery to an equilibrium state has been studied extensively, using both experimental techniques and numerical simulations. The main results found in research are that the WSS either increases or decreases abruptly overshooting or undershooting the expected value for the downstream surface in smooth-to-rough <cit.> and rough-to-smooth transitions <cit.>, respectively. Experimentally, this has been researched with direct WSS measurements immediately downstream of the step change by using floating element balances, <cit.>, near-wall hot-wires, <cit.>, Preston tubes, <cit.>, and pressure taps, <cit.>, coupled with indirect methods to obtain the development of the WSS with distance from the step-change. This was mainly done using a logarithmic fit to match the measured value and the expected one for the downstream surface if there were no surface changes upstream. Numerically, the WSS recovery after a step-change in roughness has been mainly investigated with DNS <cit.> and LES <cit.>. The results of all these studies were conducted at different Reynolds numbers, approximately 10^2≤ Re_τ≤ 10^6, and a variety of upstream-to-downstream roughness height ratios, -6 ≤ ln(z_02/z_01)≤ 6 (where negative ratios correspond to rough-to-smooth transitions, and positive values correspond to smooth-to-rough changes).
Previous studies highlighted some remaining questions regarding the TBL recovery to an equilibrium condition after being subjected to a streamwise step change in roughness. Firstly, as mentioned above, the characteristic overshoot or undershoot of the WSS just after a step-change in roughness renders the characterisation methods developed for the homogeneous rough wall inaccurate, since both scaling parameters depend on WSS and are calculated assuming fully rough homogeneous roughness. This leads to a need to define a minimum recovery length in which the flow recovers to the homogeneous rough wall TBL. Secondly, the use of experimental indirect methods and numerical methods to obtain the WSS recovery after a step change resulted in a wide range of recovery fetch lengths between 1δ and 10δ, making it difficult to draw specific conclusions from these predictions. Moreover, some studies such as <cit.> showed an increase in recovery fetch length with Reynolds number which is inconsistent with other studies, highlighting the necessity of a direct WSS measurement method for a more accurate prediction. An extensive review and comparison between existing studies can be found in <cit.>.
In this study, we consider a TBL developing from a baseline smooth wall and subjected to a streamwise transition to a rough wall. We aim to investigate and achieve a reliable value for the minimum roughness fetch length that allows a TBL developing past such step change in roughness to recover to an equilibrium condition, i.e. fully adjust to the rough wall downstream of the transition. This is essential since all of the scaling arguments used in rough-wall TBLs depend on the WSS and the latter changes drastically after a step-change in wall roughness. Secondly, we aim to quantify the error in choosing a shorter fetch to conduct experiments/simulations on presumably homogeneous fully rough flows. This would be helpful to quantify the uncertainty of the data if, for instance, a study needed to be conducted in a facility with a shorter test section, or if there were limitations on the domain size for a numerical investigation dictated by the available computational power. Finally, we aim to develop a relationship between the k_s of a surface with short fetch (where the flow is not in equilibrium) in terms of the equilibrium value of k_s. To answer these questions, we designed an experiment to directly measure the change in WSS to sequential increases in roughness fetch, covering a wide range of Reynolds number, 4· 10^3≤ Re_τ≤ 2· 10^5, to ensure all or most common conditions are covered. The setup of the experiment is covered in <ref>, followed by a detailed discussion of the results in <ref> leading to the conclusions and future work in <ref>.
§ EXPERIMENTAL SET-UP AND METHODOLOGY
The experimental campaign is conducted inside the closed return BLWT at the University of Southampton. The TBL is tripped by a turbulator tape located at the inlet of the test section, marking the streamwise datum (x=0) and further developed along the floor of the 12 m-long test section, which has a width and height of 1.2×1 m. Figure <ref> illustrates the tunnel and coordinate system where x, y and z denote the streamwise, wall-normal and spanwise directions, respectively. The tunnel is equipped with a “cooling unit” comprised of two heat exchangers and a temperature controller such that the air temperature remains constant during measurements (21^∘C ± 0.5^∘C). The free stream has a turbulence intensity of ≤ 0.1 U_∞ (where U_∞ is the free-stream velocity), measured with hot-wire anemometry before the experimental campaign. The tunnel is equipped with a closed-loop PID controller to set U_∞, and air properties are measured with a pitot-static tube and a thermistor inside the BLWT.
As seen in figure <ref>, the experiment consisted of a roll of P24 sandpaper cut into patches of size 2δ_2× 8δ_2 (where δ_2 refers to the TBL thickness of the case with the longest fetch measured at the balance location). The patches are sequentially taped on the floor of the wind tunnel's test section starting at the measurement point, about 59δ_2 downstream from the test section's inlet, and added upstream. In this way, the roughness fetch measured from the centre-line of the balance is systematically increased, with the shortest fetch being 1δ_2, and the longest being 39δ_2. All cases are listed in table <ref>. The longest fetch tested was chosen as a compromise between having as long a fetch as achievable in our facility and ensuring the TBL on the smooth surface upstream of the step change would also have enough development length to be in equilibrium conditions (≈25δ_1 in the longest fetch case, where δ_1 is the TBL thickness of the smooth surface measured at the measurement point).
The experimental campaign was designed to take direct WSS measurements at different Reynolds numbers and with sequentially increased roughness fetch (the distance between the step change in roughness and the measurement point). This was possible by employing a floating element drag balance (located at the previously mentioned “measurement point"), designed and manufactured by <cit.>. With this tool, the friction on the wall was monitored during velocity sweeps (0-40 [ms^-1]) while systematically increasing the length of the roughness fetch. The velocity sweeps were run three times per case to ensure the repeatability of the results. A schematic of the balance and its specifications can be found in <cit.>.
For each fetch length, measurements are conducted within a range of freestream velocities 10≤ U_∞≤40 [ms^-1], allowing 10 seconds for the flow to adjust after each velocity increase and 60 seconds for the flow to come to rest completely before restarting the sweep. The sampling rate was set to f_s = 256 Hz, while the sampling time was set to 60s. Once the friction force, F, has been measured, the WSS, τ_w, and friction velocity, u_τ, can be directly computed.
τ_w = F/A = 1/2ρ U_∞^2 C_f = ρ u_τ^2 ,
where A is the surface area of the balance plate and C_f is the friction coefficient.
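For reference, the conversion from the measured floating-element force to the friction quantities in equation <ref> amounts to a few lines (a sketch with illustrative names; SI units assumed):

```python
import numpy as np

def wall_friction(force, area, rho, U_inf):
    """Wall shear stress, friction coefficient and friction velocity from the
    floating-element force (equation above); force in N, area in m^2, rho in kg/m^3."""
    tau_w = force / area
    C_f = 2.0 * tau_w / (rho * U_inf**2)
    u_tau = np.sqrt(tau_w / rho)
    return tau_w, C_f, u_tau
```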
Planar PIV was also performed in the streamwise wall-normal plane above the floating element location. This was done to check whether the Outer Layer Similarity used in <cit.> to calculate k_s holds for some or all of our cases and to calculate the boundary layer thickness for all cases. The additional PIV measurements were only conducted at a free-stream velocity of 20 [ms^-1] and only for the cases with fetch length 1δ_2,3δ_2,5δ_2,7δ_2,9δ_2,19δ_2, and 39δ_2. The selection of free-stream velocity and fetches to study with PIV was dictated by the trends obtained in the drag balance measurements, as seen in <ref>. The data was sampled at f_s =1 Hz (t_r=U_∞/(f_s·δ_2)≈133, where t_r is the TBL turnover rate) with Lavision Imager CMOS 25 MP cameras (resolution of 17 pixels/mm), using a Bernoulli 200 mJ, 532 nm Nd:YAG laser and the Lavision software Davis 10 for acquisition. The data was processed using an in-house code for cross-correlation, with a final window size of 16×16 px with 75% overlap, and a viscous-scaled final window size of 30×30 [cm].
§ RESULTS
The evolution of the friction coefficient obtained with the drag balance at different fetch lengths and increasing Reynolds number (Re_x=ρ U_∞ x/μ, where x is the streamwise distance between the wind-tunnel's test section inlet and the balance centre-line) can be seen in figure <ref>A. The colour legend for all the plots in <ref> is shown in table <ref>, while the error propagation given by the drag balance measurements is listed in table <ref> for the different parameters. Figure <ref>A shows that for a fixed fetch length, C_f is independent of Re_x, which is a sign that the flow is within the fully rough regime bounds mentioned in 1. However, it is not fetch-length independent: the fetch length is inversely proportional to C_f, consistent with the overshoot downstream of the transition observed by multiple studies listed in <cit.>, and the slow recovery with increasing distance from the step change.
The recovery of WSS with fetch length is shown more clearly in figure <ref>B. This plot shows the recovery of the friction coefficient measured at around 20 [ms^-1] with fetch length. This is the lowest flow speed at which the TBL seems to reach fully rough conditions and is thus used for the PIV measurements as well. The friction coefficient is plotted against the normalised fetch length, where δ_2 is the boundary layer thickness at the balance location of the fully rough case with a fetch length of ≈ 39δ_2. This length scale was chosen instead of the most commonly used δ_1 for two reasons. Firstly, having the recovery length as a function of the downstream TBL thickness removes all dependency on the type of surface upstream of the step change, making it applicable to more cases; secondly, the TBL thickness was measured using PIV above the balance to ensure consistency between the balance readings and the flow field above while no PIV measurement was taken upstream of the step change in any of the cases. For comparison, table <ref> lists the TBL thickness measured above the balance for all the different fetches.
Figure 2.4 in <cit.> shows a comparison of recovery lengths collated from previous studies, suggesting a wide range of recovery lengths from 1 to 10 δ_1 for both smooth-to-rough and rough-to-smooth transitions. Our present results, shown in figure <ref>B, suggest a longer recovery length of at least 20δ_2. Secondly, although we expect the overshoot in C_f immediately after transition (i.e. C_f measured in shorter patch lengths, 1δ_2-5δ_2), we observe that for L> 5δ_2, the error in C_f is within ≈ 10% of the converged value. This type of error is to be expected when a shorter development length or computational domain is used.
Figure <ref>A can also be used to obtain the equivalent sand-grain height following the method proposed by <cit.>. As briefly mentioned in 1, this method assumes that the flow has already reached an equilibrium state and therefore employs outer layer similarity from <cit.>, to obtain a relationship between C_f at constant unit Reynolds number and k_s, via what the authors refer to as “lines of constant length, k_s/x". These are shown in figure <ref>A as pink horizontal solid lines and, the intersection of these and the C_f values at given Re_x, give us a way of calculating k_s for different fetch lengths.
Before discussing the result of this operation, the outer layer similarity hypothesis from <cit.> was reproduced and is shown in figure <ref>A. From this plot, it can be seen that for shorter fetch lengths, velocity defect profiles do not collapse and hence do not conform to Outer Layer Similarity (OLS). This is to be expected as OLS is a measure of equilibrium with the boundary layer and for fetches lower than ≈ 10δ_2 equilibrium cannot be achieved due to the development of the internal layer. On the other hand, for the longer fetches, OLS can be observed as the profiles perfectly collapse onto smooth wall TBLs from ≈0.4δ. The latter is the main conclusion from <cit.>, which means that the near wall region and anything that is associated with it should not affect the outer portion of the TBL. From our results, we can conclude that this indeed holds for the longest fetches tested. In the following analysis, we will see more in detail how the non-equilibrium conditions affect the prediction of k_s values based on OLS and how this compares to the standard practice of calculating it from the roughness function Δ U^+ where fully rough as well as equilibrium conditions are assumed.
In figure <ref>B we show the k_s development with fetch length obtained using two methods: firstly, the method from <cit.>, denoted by the triangle markers; secondly, fitting logarithmic profiles of the form of equation <ref> to the velocity profiles near the wall, denoted by the circle markers. Lastly, the star markers represent the ratio of the k_s values calculated with the two methods, to offer a direct comparison of the respective values with increasing fetch length. Starting with the method from <cit.>, we observe that k_s follows the same trend as C_f, i.e. overshooting its “real" value for a certain surface at fixed Reynolds number and logarithmically approaching its true value with increasing fetch length. This plot shows how crucial it is to ensure sufficient fetch length for the WSS to recover to be able to treat k_s as universal and use it as a length scale/modelling constant for rough wall TBLs. It can also be noted that the minimum fetch length for full WSS recovery is around L≥20δ_2, where C_f becomes both Re and fetch length independent. We note here that this streamwise fetch might depend on the type of roughness and the extent of change in k_s (from upstream to downstream). If the change is small, then we expect the surface to reach equilibrium faster. Regardless, the results suggest that TBLs flowing over a change in wall texture with fetch lengths shorter than at least 5δ_2 (error≥ 10%) will inevitably result in a significant overestimation of the roughness function and corresponding mean flow.
In figure <ref>B, we also see the trend of k_s when calculated by fitting a logarithmic profile to the velocity profile in the near-wall region (i.e. below the inflection point), which is the point where the IL blends into the outer layer. This is done by applying equation <ref> to obtain the k_s value that gives the best matching U^+ profile via an iterative procedure. As shown in this figure, the trend captured by this method is opposite to the one given by the previous one. Nonetheless, the fetch length at which we can infer equilibrium conditions after a step change does not change and is in full agreement between the two methods. Moreover, the converged value of k_s for the longer fetch cases appears to be in perfect agreement as well. The challenge in using this method lies in accessing velocity measurements in the region close to the wall with enough resolution to fit a logarithmic profile and achieving a Reynolds number large enough to be able to distinguish the inflection point.
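A possible implementation of such a fit, assuming fully rough conditions so that equation <ref> reduces to U^+ = (1/κ)ln(y/k_s) + B_FR within the fitted region, is sketched below (illustrative only; choosing the wall-normal fitting bounds below the inflection point is the delicate step and is left to the user):

```python
import numpy as np

KAPPA, B_FR = 0.39, 8.5

def fit_ks(y, U, u_tau, y_min, y_max):
    """Least-squares estimate of k_s from a mean velocity profile U(y), fitting
    U^+ = (1/kappa) ln(y/k_s) + B_FR over the region y_min <= y <= y_max."""
    mask = (y >= y_min) & (y <= y_max)
    U_plus = U[mask] / u_tau
    # For a fixed slope 1/kappa, the least-squares solution for ln(k_s) is the mean
    ln_ks = np.log(y[mask]) - KAPPA * (U_plus - B_FR)
    return np.exp(ln_ks.mean())
```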
Figure <ref> shows the streamwise-averaged mean flow profiles measured by PIV, taken above the FE drag balance and scaled by the friction velocity given by the balance measurement, where the black dashed line represents the log profile. In figure <ref>A, the wall-normal coordinate used to plot all the profiles is normalised by the fully-rough, equilibrium value of k_s = k_s,2, which is computed for the longest fetch case. Here, we can see that although the two longest fetches collapse onto the dashed line perfectly in the log region, the rest of the cases slowly diverge from it with the shortest fetch displaying a change in slope across the logarithmic domain of the TBL. This is clearly explained by the blending of the logarithmic regions from the upstream and downstream surface near the step-change in roughness. In the next plot, figure <ref>B, we used a k_s value for each fetch case that attempts to include the effect of the internal layer development by computing it using the local C_f value for each case. However, when using this method, the profiles seem to diverge to a greater extent than when using the k_s,2 value for all the cases. This can be explained by the equilibrium assumption made when employing the method described in <cit.>. Lastly, in figure <ref>C, we used the k_s value for each individual case given by fitting a logarithmic profile to the IL only as given in figure <ref>. Using this method we achieved a perfect match for all fetches below the inflection point, while above this point the shorter fetch profiles do not collapse onto the longer ones. This is because the k_s value that models the IL region would inevitably fail in the outer region in cases of non-equilibrium such as a TBL after a step change in roughness. Therefore, in order to achieve a universal scaling we would have to make k_s a function of x, by employing a different value for different fetches, and y, by using a different value below and above the inflection points where the IL is still developing. Finally, this method is only possible when a direct way of measuring drag is available as the drag given by the slope of the IL is not correct for short fetches.
§ CONCLUSIONS
The current paper aims to describe the outcome of an experimental campaign involving direct measurements of the WSS recovery after a step change in wall roughness with systematically increased fetch length. The results show that full WSS recovery is achieved 20δ_2 downstream of the step change, while previous studies employing indirect ways of measuring the WSS recovery predicted a full recovery between 1δ_1-10δ_1. This difference is most likely due to the logarithmic nature of the WSS recovery. Therefore, even the smallest difference in WSS results in a significant difference in fetch length. We also show that the greatest change in WSS appears for fetch lengths between 1δ_2-5δ_2, resulting in an error of ≤ 10% of the converged WSS value when fetches ≥ 5δ_2 are used.
Moreover, we have shown that the equivalent sand-grain height, k_s, given by the method adopted in <cit.> cannot be used to scale or model the mean velocity profile of a TBL for fetches measuring less than 5δ_2, as this would inevitably result in a significant overprediction of the roughness effects and erroneous velocity profiles. This is due to the assumptions employed when deriving k_s, including fully-rough regime and equilibrium conditions in the TBL, which do not apply in the case of a TBL flowing past a step change with fetch length measuring less than 5δ_2. On the other hand, when fitting a logarithmic profile to the IL region, we can achieve a unique k_s value for finite fetches that is able to scale/model the velocity profile below the inflection point and, by making k_s vary in the wall-normal direction, we could be able to scale/model TBLs past step changes in roughness and their development to a greater extent.
A new way of modelling k_s, which takes into account both log regions of the internal boundary layer downstream of the transition and the outer layer (containing the flow history prior to the transition), could help with scaling/modelling streamwise varying rough wall TBLs. A correction factor between the k_s trend with increasing fetch given by <cit.> and the one given by fitting should also be developed in cases where high-resolution PIV at the right Reynolds number near the wall is not viable.
Acknowledgments. The authors acknowledge funding from the Leverhulme Early Career Fellowship (Grant ref: ECF-2022-295), the European Office for Airforce Research and Development (Grant ref: FA8655-23-1-7005) and EPSRC (Grant ref no: EP/W026090/1).
Data Statement. All data presented in this manuscript will be made publicly available upon publication.
|
http://arxiv.org/abs/2409.02737v1 | 20240904141033 | Nevanlinna Analytic Continuation for Migdal-Eliashberg Theory | [
"D. M. Khodachenko",
"R. Lucrezi",
"P. N. Ferreira",
"M. Aichhorn",
"C. Heil"
] | cond-mat.supr-con | [
"cond-mat.supr-con",
"cond-mat.str-el",
"physics.comp-ph"
] |
Institute of Theoretical and Computational Physics, Graz University of Technology, NAWI Graz, 8010, Graz, Austria
Institute of Theoretical and Computational Physics, Graz University of Technology, NAWI Graz, 8010, Graz, Austria
Institute of Theoretical and Computational Physics, Graz University of Technology, NAWI Graz, 8010, Graz, Austria
Computational Materials Science Group (ComputEEL/MatSci), Universidade de São Paulo, Escola de Engenharia de Lorena, DEMAR, Lorena, Brazil
Institute of Theoretical and Computational Physics, Graz University of Technology, NAWI Graz, 8010, Graz, Austria
Corresponding author: [email protected]
Institute of Theoretical and Computational Physics, Graz University of Technology, NAWI Graz, 8010, Graz, Austria
§ ABSTRACT
In this work, we present a method to reconstruct real-frequency properties from analytically continued causal Green's functions within the framework of Migdal-Eliashberg (ME) theory for superconductivity. ME theory involves solving a set of coupled equations self-consistently in imaginary frequency space, but to obtain experimentally measurable properties like the spectral function and quasiparticle density of states, it is necessary to perform an analytic continuation to real frequency space. Traditionally, the ME Green's function is decomposed into three fundamental complex functions, which are analytically continued independently. However, these functions do not possess the causal properties of Green's functions, complicating or even preventing the application of standard methods such as Maximum Entropy. Our approach overcomes these challenges, enabling the use of various analytic continuation techniques that were previously impractical. We demonstrate the effectiveness of this method by combining it with Nevanlinna analytic continuation to achieve accurate real-frequency results for ME theory, which are directly comparable to experimental data, with applications highlighted for the superconductors MgB_2 and LaBeH_8.
Nevanlinna Analytic Continuation for Migdal-Eliashberg Theory
D. M. Khodachenko, R. Lucrezi, P. N. Ferreira, M. Aichhorn, and C. Heil
September 4, 2024
=============================================================
§ INTRODUCTION
Temperature-dependent quantum field theory via Green's functions is a state-of-the-art approach to theoretical condensed matter physics. Solving many-body Green's function equations directly on the real frequency axis is possible, but numerically very challenging, due to the existence of poles on the real axis, requiring the evaluation of principal value integrals.
However, these equations simplify significantly when formulated in imaginary, i.e. Matsubara frequency space.
By doing so, the problem becomes markedly simpler both analytically and numerically, as it leads to well-defined integrals and discrete Matsubara frequency sums <cit.>, reducing the numerical cost of calculations considerably compared to a direct evaluation in otherwise continuous real frequency space.
The disadvantage that comes with this is that in order to be able to compare to real frequency properties obtained in experiment, such as angle-resolved photo emission spectra <cit.> or scanning tunneling spectroscopy <cit.>, an analytic continuation is required, which is an ill-conditioned problem.
As a result,
a variety of analytic continuation methods have been developed, such as Nevanlinna analytic continuations (NAC) <cit.>, the Padé approximation <cit.>, Maximum Entropy (MaxEnt) formalism <cit.>, stochastic analytic continuation <cit.>, genetic algorithms, and machine learning <cit.> or causal projection methods <cit.>.
A highly topical and prominent use case for such a workflow is the state-of-the-art theoretical description of conventional superconductivity within Migdal-Eliashberg (ME) theory <cit.>, as for instance implemented in the EPW package <cit.> of the Quantum ESPRESSO software <cit.>, which allows for the ab initio computation of important properties of the superconducting phase, most notably the superconducting gap function Δ_n𝐤(iω_j), and thus the critical temperature T_c, on the imaginary frequency axis.
Many of the analytic continuation methods mentioned above require the causal Green's function in imaginary time or frequency. This is a problem in ME formalism, where the electron-phonon mediated self-energy Σ̂_n𝐤 and therefore the Green's function is split into three fundamental complex functions:
the already mentioned superconducting gap function Δ_n𝐤(iω_j), the mass renormalization function Z_n𝐤(iω_j) and the energy shift function χ_n𝐤(iω_j). It is important to have access to those in real frequency space, which is why current implementations of analytic continuation in the EPW package are restricted to methods that are able to continue these three functions directly. This is the case for the computationally lightweight Padé approximation <cit.> and a computationally very expensive iterative procedure <cit.> specific to ME theory. The latter is generally not feasible in most scenarios due to numerical cost, and the former is known to often have numerical artifacts which break the non-negative causality condition of spectral functions.
In this work, we will demonstrate how to reconstruct the complex functions Δ_n𝐤(ω), Z_n𝐤(ω) and χ_n𝐤(ω) in real frequency space by analytically continuing the components of the Nambu-Gor'kov Green's function, and we further introduce an efficient general analytic continuation workflow for ME theory.
Importantly, this workflow can be used with any analytic continuation method that requires causal Green's functions, therefore giving us access to many more methods without any loss of information for the complex functions Δ_n𝐤(ω), Z_n𝐤(ω) and χ_n𝐤(ω). Due to symmetry, only two analytic continuations are required, which is one less than previously needed for full-bandwidth calculations with χ_n𝐤(ω) ≠ 0 <cit.>.
In this work, we employ the recently proposed NAC <cit.>, which is very well suited for continuing high-quality data and is able to reconstruct the spectral function with a high level of accuracy. We introduce a full implementation
for both the isotropic and the anisotropic ME equations in the EPW package. We find that this method works exceptionally well for obtaining superconducting properties accurately, while keeping numerical costs comparable to the Padé approximation. NAC is numerically much more stable and, due to its analytic approach, always fulfills the causality conditions of normal spectral functions.
§ THEORY
§.§ Nevanlinna Analytic Continuation (NAC)
We start with a brief summary of the NAC as employed for our purposes. For a more detailed description of the method, we refer the reader to Refs. <cit.>.
Our goal is to obtain the experimentally measurable spectral function A_n𝐤(ω), which is proportional to the imaginary part of the retarded Green's function G_n𝐤^ret,
A_n𝐤(ω) = -1/π Im G_n𝐤^ret(ω+iη),
where n and 𝐤 are band and momentum indices, respectively.
Due to the existence of poles on the real axis, the retarded Green's function is evaluated slightly above the real axis, described by the infinitesimal positive number η.
A causal spectral function such as the one in Eq. (<ref>) needs to fulfill two causality conditions,
A_n𝐤(ω) ≥ 0 and ∫_-∞^∞dω A_n𝐤(ω) = 1.
We want to note at this point that NAC can still be used in cases where normalization is not fulfilled, as will be the case for the auxiliary Green's function introduced in section <ref>.
To obtain the retarded Green's function G_n𝐤^ret(ω+iη) from the imaginary frequency solution, analytic continuation in the form G_n𝐤(iω_j) → G_n𝐤^ret(ω+iη) is required. This problem, however, is ill-conditioned, i.e. small errors in the Matsubara Green's function G_n𝐤(iω_j) result in large errors in the retarded Green's function G_n𝐤^ret(ω+iη), regardless of the method one uses to analytically continue. The reason for that can be made apparent by considering
the kernel K(τ,ω) of the spectral representation of the Green's function,
G_n𝐤(τ) = ∫dω K(τ,ω) A_n𝐤(ω), with K(τ,ω) = e^-ωτ/(1+e^-βω),
which becomes exponentially small for high frequencies <cit.>.
Determining the spectral function boils down to computing the inverse of the kernel, where for high frequencies divisions by very small numbers occur, resulting in small errors in the data having a huge impact on the result. Thus, the inverse of the kernel cannot be used to obtain the spectral function in practice.
The main interpolation procedure in NAC works according to the Schur algorithm <cit.>, performed on the unit circle. In particular, to construct Nevanlinna interpolants, the first step is to Möbius transform <cit.> the Green's function to a contractive function on the unit circle with
θ(iω_j) = (-G_n𝐤(iω_j) - i) / (-G_n𝐤(iω_j) + i),
as can be appreciated in Fig. <ref>. We then use the following recursive formula from the Schur algorithm,
θ(z) ≡ θ[z;θ_M(z)] = [a(z)θ_M(z)+b(z)]/[c(z)θ_M(z)+d(z)],
with
[ a(z) b(z); c(z) d(z) ]
= ∏_j = 1^M[ (z-iω_j)/(z+iω_j) θ_j-1(iω_j); [θ_j-1(iω_j)]^* (z-iω_j)/(z+iω_j) 1 ],
to obtain analytic continuation on the unit circle θ(z).
By construction, θ_0(z)=θ(z) interpolates the Green's function for all Matsubara frequencies j=1,2,...,M, while each subsequent contractive function θ_1(z),...,θ_M(z) interpolates one less point than the previous one. As a result, the final function θ_M is an unconstrained arbitrary contractive function on the unit circle. For discrete spectral functions of Lorentzian shape, the method works well regardless of the choice of θ_M(z), which is, for example, the case for most isotropic ME solutions.
The simplest approach is using a constant θ_M(z)=0, called the free solution.
However, in the case of smooth and rather featureless spectral functions, using a constant θ_M(z) can result in rapidly oscillating functions,
in which case optimization of θ_M(z) can be required. We found that the free solution works very well in isotropic ME calculations, as discussed in section <ref>. However, in the case of anisotropic ME theory, individual spectral functions will usually oscillate for θ_M(z)=0. Optimization of θ_M(z) is possible <cit.>, but will not be considered in this work and is subject of future studies.
Finally, to obtain the desired G_n𝐤^ret(ω+iη), the function θ(z) in Eq. (<ref>) is evaluated at ω+iη (red dashed line) and then transformed back to the complex plane using the inverse Möbius transformation (see Fig. <ref>).
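The construction above can be condensed into a short routine. The sketch below is a direct double-precision transcription of the Möbius map, the Schur recursion, and the free-solution evaluation θ_M(z)=0; it is illustrative only (the production implementation discussed later uses quadruple precision), and all names are ours rather than part of any existing package:

```python
import numpy as np

def nevanlinna_continuation(wn, G_iw, w_grid, eta=1e-3):
    """Free-solution (theta_M = 0) Nevanlinna continuation of a causal Matsubara
    Green's function G(i w_n), sampled at positive fermionic frequencies wn,
    evaluated on the real-frequency grid w_grid at w + i*eta."""
    Y = 1j * np.asarray(wn)                      # Matsubara points in the upper half plane
    G_iw = np.asarray(G_iw, dtype=complex)
    lam = (-G_iw - 1j) / (-G_iw + 1j)            # Moebius map of -G onto the unit disk
    M = len(Y)

    # Schur phases phi_j = theta_{j-1}(i w_j), obtained recursively
    phi = np.zeros(M, dtype=complex)
    phi[0] = lam[0]
    for k in range(1, M):
        A = np.eye(2, dtype=complex)
        for j in range(k):
            blaschke = (Y[k] - Y[j]) / (Y[k] - np.conj(Y[j]))
            A = A @ np.array([[blaschke, phi[j]],
                              [np.conj(phi[j]) * blaschke, 1.0]])
        phi[k] = (lam[k] * A[1, 1] - A[0, 1]) / (A[0, 0] - lam[k] * A[1, 0])

    # Evaluate theta(z) just above the real axis and map back to G^ret
    G_ret = np.empty(len(w_grid), dtype=complex)
    for i, w in enumerate(w_grid):
        z = w + 1j * eta
        A = np.eye(2, dtype=complex)
        for j in range(M):
            blaschke = (z - Y[j]) / (z - np.conj(Y[j]))
            A = A @ np.array([[blaschke, phi[j]],
                              [np.conj(phi[j]) * blaschke, 1.0]])
        theta = A[0, 1] / A[1, 1]                 # free solution: theta_M(z) = 0
        NG = 1j * (1.0 + theta) / (1.0 - theta)   # inverse Moebius; NG = -G^ret
        G_ret[i] = -NG
    return G_ret                                   # A(w) = -Im[G_ret]/pi
```

The two nested loops are essentially the entire cost of the method; the number of retained Matsubara points M is set by the Pick criterion discussed next.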
One big advantage of NAC is that we can use a generalization of the Pick criterion <cit.> to avoid the errors from higher frequency Matsubara points while also significantly improving numerical efficiency of the method. With the Möbius transformation h(z)=(z-i)/(z+i), the criterion is fulfilled if the matrix
[ (1-θ(iω_i)[θ(iω_j)]^*)/(1-h(iω_i)[h(iω_j)]^*) ]_i,j, i,j = 1,2,...,M
is positive semi-definite, in our case verified by using Cholesky decomposition. When the Pick criterion is fulfilled, it is guaranteed that a solution to the Schur algorithm exists, which should always be the case for a noise-free imaginary solution. In practice, however, higher frequency points are much noisier, as has been discussed for the kernel in Eq. (<ref>), so they will generally not fulfill the Pick criterion. As a result, discarding frequencies that don't fulfill the criterion reduces noise and improves numerical speed of the method.
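In code, the criterion amounts to attempting a Cholesky factorization of the Pick matrix built from the first M points and keeping the largest M for which it succeeds (a sketch under the same naming and assumptions as above):

```python
import numpy as np

def pick_ok(wn, G_iw, shift=1e-12):
    """True if the Pick matrix of the given Matsubara points is positive
    semi-definite, checked via Cholesky with a tiny diagonal shift for tolerance."""
    G_iw = np.asarray(G_iw, dtype=complex)
    theta = (-G_iw - 1j) / (-G_iw + 1j)
    h = (1j * np.asarray(wn) - 1j) / (1j * np.asarray(wn) + 1j)
    P = (1.0 - np.outer(theta, np.conj(theta))) / (1.0 - np.outer(h, np.conj(h)))
    try:
        np.linalg.cholesky(P + shift * np.eye(len(wn)))
        return True
    except np.linalg.LinAlgError:
        return False

def n_pick(wn, G_iw):
    """Number of leading Matsubara points retained by the Pick criterion."""
    n = 1
    while n < len(wn) and pick_ok(wn[:n + 1], G_iw[:n + 1]):
        n += 1
    return n
```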
§.§ Using NAC with Migdal-Eliashberg Theory
Contrary to a continued fraction approach like in the Padé approximation, NAC requires causal Green's functions to be employed. The Green's function we want to access in real frequency space using analytic continuation is the generalized Green's function Ĝ_n𝐤(τ) from the Nambu-Gor'kov formalism <cit.>,
Ĝ_n𝐤(τ) = -
[ ⟨ T_τĉ_n𝐤↑(τ) ĉ_n𝐤↑^†(0) ⟩ ⟨ T_τĉ_n𝐤↑(τ) ĉ_n-𝐤↓(0) ⟩; ⟨ T_τĉ_n-𝐤↓^†(τ) ĉ_n𝐤↑^†(0) ⟩ ⟨ T_τĉ_n-𝐤↓^†(τ) ĉ_n-𝐤↓(0) ⟩ ],
where the main diagonal elements are the normal state Green's functions and the off-diagonal elements are the anomalous Green's functions, which describe Cooper pairs <cit.>.
Although the definition of the Nambu-Gor'kov matrix is different to a general matrix-valued Green's function,
it is possible to transform one to the other using a spin-dependent particle-hole transformation such as ĉ_n-𝐤↓→ĉ_n-𝐤↓^†. As a result, analytic continuation methods for matrix-valued Green's functions do work for the Nambu-Gor'kov matrix. However, due to symmetry properties of both the main and off-diagonal elements of Ĝ_n𝐤, as well as its small size, an approach using auxiliary Green's functions as presented in this paper is much more efficient. As we will demonstrate later, only two analytic continuations are necessary to obtain all the components of the Nambu-Gor'kov matrix.
Due to the periodicity of the imaginary time Green's function, it can be expanded to imaginary Matsubara frequency space using a Fourier series of the form
Ĝ_n𝐤(τ) = 1/β∑_j=-∞^∞e^-iω_jτĜ_n𝐤(iω_j),
and the Nambu-Gor'kov matrix in imaginary frequency space can be written as
Ĝ_n𝐤(iω_j) =
[ G^11_n𝐤(iω_j) F_n𝐤(iω_j); F_n𝐤^*(iω_j) G^22_n𝐤(iω_j) ].
One can explicitly show that this is fulfilled for the negative of the main diagonal elements of the Nambu-Gor'kov matrix using the Lehmann representation
G^11_n𝐤(iω_j) = 1/Z∑_mm'⟨ m'|ĉ_n𝐤↑|m⟩⟨ m|ĉ_n𝐤↑^†|m'⟩/(iω_j + E_m' - E_m) (e^-β E_m + e^-β E_m'),
where Z is the partition function, and E_m and E_m' are the eigenenergies of the states |m⟩ and |m'⟩. Identifying the positive term K = Z^-1 |⟨ m|ĉ_n𝐤↑^†|m'⟩|^2 (e^-β E_m + e^-β E_m') ≥ 0 and using z=x+iy, with y>0, we find
Im[-G_n𝐤(iω_j)] = ∑_mm' K y/[(x + E_m' - E_m)^2 + y^2] ≥ 0,
verifying the correct mapping 𝒩: ℂ^+ →ℂ^+. The proof for G^22_n𝐤(iω_j) follows the same lines.
The situation for the anomalous Green's function F_n𝐤(iω_j) in the off-diagonal elements of Eq. (<ref>) however is different, as the product of the matrix elements ⟨ m'|ĉ_n𝐤↑|m⟩⟨ m|ĉ_n𝐤↑|m'⟩ cannot be rewritten as an absolute value. This means that the spectral function of the anomalous Green's function does not fulfill the causality condition of strictly non-negative spectral weight,
and a direct analytic continuation using Nevanlinna functions is not possible.
This can also be seen from the anomalous spectral function A^an_n𝐤(ω) which is defined as usual from the imaginary part of F_n𝐤(ω). Due to the anti-commutation of the creation and annihilation operators with themselves, the normalization ∫_-∞^∞dω A^an_n𝐤(ω) = 0 is fulfilled by definition.
Nevertheless, we can obtain the spectral functions of the anomalous Green's functions F_n𝐤(iω_j) by constructing auxiliary Green's functions that can be continued with NAC.
In accordance with Refs. <cit.> we define a mixed operator of the form
â_n𝐤(τ) = ĉ_n𝐤↑(τ) + ĉ_n-𝐤↓^†(τ).
The auxiliary Green's function G^aux_n𝐤(τ) = -⟨ T_τâ_n𝐤(τ) â_n𝐤^†(0) ⟩ is then guaranteed to have a spectral function with constant sign. Now, by rewriting the auxiliary Green's function in terms of the original creation and annihilation operators, one obtains
G^aux_n𝐤(τ) = - ⟨ T_τĉ_n𝐤↑(τ) ĉ_n𝐤↑^†(0) ⟩ - ⟨ T_τĉ_n𝐤↑(τ) ĉ_n-𝐤↓(0) ⟩
- ⟨ T_τĉ_n-𝐤↓^†(τ) ĉ_n𝐤↑^†(0) ⟩ - ⟨ T_τĉ_n-𝐤↓^†(τ) ĉ_n-𝐤↓(0) ⟩,
or, in other words, the sum over all the components of the Nambu-Gor'kov matrix. In Matsubara frequency space, the auxiliary Green's function is then given by
G^aux_n𝐤(iω_j) = G^11_n𝐤(iω_j) + F_n𝐤(iω_j) +
F^*_n𝐤(iω_j) + G^22_n𝐤(iω_j).
In ME theory, the components of the Nambu-Gor'kov matrix from Eq. (<ref>) have the form
G^11_n𝐤(iω_j) = -[iω_j Z_n𝐤(iω_j) + ε_n𝐤 + χ_n𝐤(iω_j)]/Θ_n𝐤(iω_j)
F_n𝐤(iω_j) = F^*_n𝐤(iω_j) = -Δ_n𝐤(iω_j) Z_n𝐤(iω_j)/Θ_n𝐤(iω_j)
G^22_n𝐤(iω_j) = -[iω_j Z_n𝐤(iω_j) - ε_n𝐤 - χ_n𝐤(iω_j)]/Θ_n𝐤(iω_j)
with
Θ_n𝐤(iω_j) = [ω_j Z_n𝐤(iω_j)]^2 + [ε_n𝐤 + χ_n𝐤(iω_j)]^2 + [Δ_n𝐤(iω_j) Z_n𝐤(iω_j)]^2,
where we used the shorthand notation ε_n𝐤 = ϵ_n𝐤 - ϵ_F. The auxiliary Green's function G^aux_n𝐤(iω_j) in Matsubara frequency space is then given by
G^aux_n𝐤(iω_j) = -2[iω_j Z_n𝐤(iω_j) + Δ_n𝐤(iω_j) Z_n𝐤(iω_j)]/Θ_n𝐤(iω_j).
In contrast to the anomalous spectral function, the corresponding auxiliary spectral function A^aux_n𝐤(ω) is normalized to two,
i.e. ∫_-∞^∞dω A^aux_n𝐤(ω) = 2, and does fulfill the causality condition A^aux_n𝐤(ω) ≥ 0, allowing the application of NAC to G^aux_n𝐤(iω_j).
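Assembling the two inputs needed for the continuation is straightforward; the sketch below evaluates the expressions above on the imaginary axis (illustrative Python with our own naming; the band energy ε_n𝐤 enters as a plain number, and the ME solution arrays are assumed given):

```python
import numpy as np

def nambu_components(wn, Delta_iw, Z_iw, chi_iw, eps):
    """Nambu-Gor'kov components and auxiliary Green's function on the imaginary
    axis from the Migdal-Eliashberg solution Delta(iw), Z(iw), chi(iw) (arrays over
    the Matsubara frequencies wn) and the band energy eps = e_nk - e_F."""
    iw = 1j * np.asarray(wn)
    Theta = (wn * Z_iw)**2 + (eps + chi_iw)**2 + (Delta_iw * Z_iw)**2
    G11 = -(iw * Z_iw + eps + chi_iw) / Theta
    G22 = -(iw * Z_iw - eps - chi_iw) / Theta
    F = -(Delta_iw * Z_iw) / Theta
    Gaux = G11 + G22 + 2.0 * F        # = G11 + F + F* + G22, since F is real here
    return G11, G22, F, Gaux
```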
The symmetry G^11_n𝐤(-z)=-[G^22_n𝐤(z)]^*
is fulfilled in both real and imaginary frequency space, which we can use to avoid explicit analytic continuation of G^22_n𝐤. As a result, we only need two analytic continuations to obtain all the components of the Nambu-Gor'kov matrix in real frequency space, one for G^11_n𝐤 and one for G^aux_n𝐤.
The analytically continued anomalous Green's functions F_n𝐤(ω) is then given by
F_n𝐤(ω) = 1/2[G^aux_n𝐤(ω) - G^11_n𝐤(ω) - G^22_n𝐤(ω)],
where we do not explicitly write the small imaginary part η to keep notations light.
We can now finally reconstruct all functions in ME formalism by using the Nambu-Gor'kov Green's function in real frequency space
Ĝ_n𝐤(ω) = {ω Z_n𝐤(ω)τ̂_0 + [ε_n𝐤 + χ_n𝐤(ω)]τ̂_3 + Δ_n𝐤(ω) Z_n𝐤(ω)τ̂_1}/{[ω Z_n𝐤(ω)]^2 - [ε_n𝐤 + χ_n𝐤(ω)]^2 - [Δ_n𝐤(ω)Z_n𝐤(ω)]^2}.
Here, τ̂_j are the Pauli matrices. By solving Eq. (<ref>) like a nonlinear system of equations, we find for the gap function
Δ_n𝐤(ω) = 2ω F_n𝐤(ω)/[G^11_n𝐤(ω) + G^22_n𝐤(ω)],
in agreement with Refs. <cit.>. For the remaining complex functions in ME formalism, we have the following two equations
Z_n𝐤(ω) = -[G^11_n𝐤(ω)+G^22_n𝐤(ω)]/{2ω[[F_n𝐤(ω)]^2-G^11_n𝐤(ω)G^22_n𝐤(ω)]},
χ_n𝐤(ω) = {-2ε_n𝐤[F_n𝐤(ω)]^2+G^11_n𝐤(ω)[2ε_n𝐤G^22_n𝐤(ω)-1]+G^22_n𝐤(ω)}/{2[[F_n𝐤(ω)]^2-G^11_n𝐤(ω)G^22_n𝐤(ω)]}.
By obtaining the off-diagonal anomalous Green's function from the analytic continuation of the auxiliary Green's function in Eqs. (<ref>) and (<ref>), there is no longer any loss in information, regardless of the analytic continuation method. This is the key idea of the generalized analytic continuation workflow we will introduce in more detail in section <ref>.
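Once G^11_n𝐤(ω) and G^aux_n𝐤(ω) are available from the two continuations, the remaining algebra is purely local in ω. A sketch (our naming; it assumes a real-frequency grid that is symmetric about ω=0 and excludes ω=0, so that G^22 can be obtained by reversing the G^11 array):

```python
import numpy as np

def reconstruct_me_functions(w, G11_w, Gaux_w, eps=0.0):
    """Recover F(w), Delta(w), Z(w) and chi(w) from the two analytically continued
    functions, using G22(w) = -conj(G11(-w)) and the relations above.  The grid w
    must be symmetric about zero (so w[::-1] == -w) and must not contain w = 0."""
    G22_w = -np.conj(G11_w[::-1])                 # particle-hole symmetry
    F_w = 0.5 * (Gaux_w - G11_w - G22_w)
    denom = F_w**2 - G11_w * G22_w
    Delta_w = 2.0 * w * F_w / (G11_w + G22_w)
    Z_w = -(G11_w + G22_w) / (2.0 * w * denom)
    chi_w = (-2.0 * eps * F_w**2 + G11_w * (2.0 * eps * G22_w - 1.0) + G22_w) / (2.0 * denom)
    return F_w, Delta_w, Z_w, chi_w
```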
Eqs. (<ref>-<ref>) represent the set of self-consistent equations for the fully anisotropic ME theory. To speed up the calculations, two approximations are usually employed. (i) The first one is the isotropic limit, in which the gap function and the pairing are averaged over the Brillouin zone. This can be justified by considering that the superconducting gaps are found to be isotropic for most materials (MgB_2 being a notable exception) and that, in real materials, defects are always present, which tend to average out anisotropic gaps. (ii) Secondly, the density of states is considered constant as only electronic states within an energy window of the phonon energy around the Fermi level will experience renormalization due to electron-phonon effects. Considering only assumption (i), commonly referred to as the isotropic full-bandwidth ME equations, our framework simplifies to
Δ(ω) = 2ω F(ω)/[G^11(ω)+G^22(ω)],
Z(ω) = -[G^11(ω)+G^22(ω)]/{2ω[[F(ω)]^2-G^11(ω)G^22(ω)]},
χ(ω) = [G^22(ω)-G^11(ω)]/{2[[F(ω)]^2-G^11(ω)G^22(ω)]}.
Working with both assumptions, the energy shift function χ_n𝐤 vanishes in imaginary and real frequency space. As a result, the main diagonal components of the Nambu-Gor'kov matrix become identical, i.e. G(ω)=G_n𝐤^11(ω)=G_n𝐤^22(ω). So, the ME equations in our framework are further simplified to
Δ(ω) = ω F(ω)/G(ω),
and
Z(ω) = -G(ω)/{ω[[F(ω)]^2-[G(ω)]^2]}.
One particularly interesting quantity of the superconducting state, apart from Δ and T_c, is the superconducting quasiparticle density of states, for the Fermi-surface restricted case given by
N_S (ω)/N_F = -1/π∫_-∞^∞dϵ_n𝐤 Im G^11_n𝐤(ω).
Although solving this integral numerically is possible from the G_ n𝐤^11 component, results will oscillate quite strongly beyond the initial gap for any method, as significantly finer 𝐤-grids would be necessary to average over them properly.
In the isotropic BCS-limit, where it is assumed that χ_n𝐤=0 and Z_n𝐤=1 we have
N_S (ω)/N_F = ∑_n ∫d𝐤/Ω_BZ δ(ϵ_n𝐤 - ϵ_F)/N_F Re[ω/√(ω^2 - Δ^2_n𝐤(ω))].
In other words, by using Eq. (<ref>), we also have access to the quasiparticle density of states in the BCS-limit, which generally results in more pronounced peaks and smoother curves.
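With the continued gap function at hand, the BCS-limit density of states is a one-line evaluation per frequency; the sketch below gives the isotropic version (the anisotropic expression above additionally weights each (n,𝐤) term with δ(ϵ_n𝐤 - ϵ_F)/N_F), evaluated slightly above the real axis to select the retarded branch:

```python
import numpy as np

def bcs_dos(w, Delta_w, eta=1e-5):
    """Isotropic BCS-limit quasiparticle DOS,
    N_S(w)/N_F = |Re[w/sqrt(w^2 - Delta(w)^2)]|."""
    z = w + 1j * eta
    return np.abs(np.real(z / np.sqrt(z**2 - Delta_w**2)))
```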
§.§ Analytic Continuation Workflow for Migdal-Eliashberg Theory using Green's functions
To perform the analytic continuation of the complex functions Δ_n𝐤, Z_n𝐤 and χ_n𝐤 from ME formalism using causal Green's functions, we introduce the following generalized workflow as shown in Fig. <ref>. The steps are as follows:
* Start with the imaginary frequency solution from ME theory Δ_n𝐤(iω_j), Z_n𝐤(iω_j) and χ_n𝐤(iω_j).
* Calculate the normal Green's function G^11_n𝐤(iω_j), as well as the auxiliary Green's function G^aux_n𝐤(iω_j) from Eq. (<ref>).
* Perform the analytic continuation of these two Green's functions (a) to obtain them in real frequency (b).
* Determine the remaining components of the Nambu-Gor'kov matrix in real frequency, by using the symmetry G^22_n𝐤(ω)=-[G^11_n𝐤(-ω)]^*, and Eq. (<ref>) for the anomalous Green's function F_n𝐤(ω).
* Calculate any other real frequency properties from the obtained Nambu-Gor'kov matrix, as for instance the gap function Δ_n𝐤(ω) and the quasiparticle density of states in the BCS-limit N_S (ω)/N_F; a minimal end-to-end sketch of these steps is given after this list.
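A minimal end-to-end example of the workflow, chaining the illustrative helpers sketched in the previous section (nambu_components, nevanlinna_continuation, reconstruct_me_functions); the toy constant-gap input below only stands in for a converged ME solution and is not meant to reproduce any published result:

```python
import numpy as np

# Toy isotropic imaginary-axis input (stand-in for a converged ME solution)
T = 0.002                                    # temperature in eV (~23 K)
wn = np.pi * T * (2 * np.arange(64) + 1)     # positive fermionic Matsubara frequencies
Delta_iw = 0.010 * np.ones_like(wn)          # 10 meV gap
Z_iw = 1.8 * np.ones_like(wn)
chi_iw = np.zeros_like(wn)
eps = 0.0                                    # Fermi-surface-restricted case

# Step 2: Green's functions on the imaginary axis
G11_iw, G22_iw, F_iw, Gaux_iw = nambu_components(wn, Delta_iw, Z_iw, chi_iw, eps)

# Step 3: two analytic continuations onto a symmetric real-frequency grid (no w = 0)
w = np.linspace(-0.2, 0.2, 2000)
G11_w = nevanlinna_continuation(wn, G11_iw, w)
Gaux_w = nevanlinna_continuation(wn, Gaux_iw, w)

# Steps 4-5: symmetry, reconstruction, and observables
F_w, Delta_w, Z_w, chi_w = reconstruct_me_functions(w, G11_w, Gaux_w, eps)
A11_w = -np.imag(G11_w) / np.pi              # normal spectral function
```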
This workflow can be used with any analytic continuation method. Even in the case of the Padé approximation, which initially does not require causal Green's functions, it is possible to restrict the initial continuous fraction assumption to the analytic continuation of causal Green's functions <cit.>
G_n𝐤(z) = a_1/(z+b_1 - a_2/(z+b_2 - a_3/(z+b_3 - ...))),
where a_j > 0 and b_j∈ ℝ.
Using this causal Green's function approach for the Padé approximation could make the method numerically more stable within ME theory, but is not the focus of this work.
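For completeness, evaluating such a causal continued fraction is a simple backward recursion (a sketch with our own naming; obtaining the coefficients a_j, b_j from the Matsubara data is the actual Padé fitting step and is not shown here):

```python
def causal_pade_eval(z, a, b):
    """Evaluate G(z) = a1/(z + b1 - a2/(z + b2 - a3/(z + b3 - ...))) bottom-up,
    for coefficient lists a (positive) and b (real) of equal length."""
    G = 0.0 + 0.0j
    for aj, bj in zip(reversed(a), reversed(b)):
        G = aj / (z + bj - G)
    return G
```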
§ RESULTS
We start by showing some results where NAC and the Padé approximation are in excellent agreement. First, for an isotropic full-bandwidth calculation of LaBeH_8, with a fine 𝐤-grid of 30 × 30 × 30 at a temperature of T=70 K. We used default parameters for the Padé approximation as implemented in EPW. For NAC, we used the Pick criterion to determine the optimal number of Matsubara frequencies and a broadening parameter of η=1 meV. The same broadening was added to the spectral function from the Padé approximation in post-processing to obtain Fig. <ref>, where we show a comparison between the two methods for both the normal and anomalous spectral functions A^11_n𝐤(ω) and A^12_n𝐤(ω), respectively. The pole position, and therefore the quasiparticle energy renormalized by superconducting pairing, is located around ω=± 35 meV. NAC is done without optimization, i.e. by using a constant free parameter of θ_M(z)=0. We found that this works very well for the isotropic approximation to ME theory, although small unphysical satellite peaks with a maximum of 1 eV^-1 to 2 eV^-1 can appear at higher frequencies.
The most common outcome is this kind of agreement between NAC and the Padé approximation, which we found to be the case in about 95% of our testing.
NAC can also be used for anisotropic calculations, however spectral functions of individual n and 𝐤 index often have oscillations for θ_M(z)=0, meaning an optimization of the θ_M(z) parameter would be required. Work on this is ongoing and will be implemented in a future version.
Nevertheless, we can use NAC to obtain excellent agreement in the quasiparticle density of states, demonstrated in Fig. <ref> for the well-studied two-gap superconductor MgB_2 <cit.>. Although slight oscillations persist for individual n and 𝐤 contributions when NAC is combined with anisotropic ME calculations, they effectively cancel out entirely in the quasiparticle density of states if the fine 𝐤-grid is sufficiently dense. As a result we have almost perfect agreement between the two methods.
From Fig. <ref>, we can, therefore, identify the gaps that originate from the different electronic π and σ orbitals. For example, in our calculation for T=15 K, we find a first gap of about 2Δ_π=5 meV and a second gap of about 2Δ_σ=16 meV. Finally, we obtained a critical temperature of T_c = 42 K for our anisotropic calculation of MgB_2.
Beyond the most common
cases with excellent agreement between NAC and the Padé approximation, we found two significant advantages of NAC: The first rather obvious one is the avoidance of non-causal negative A^11_n𝐤(ω) spectral functions as shown in Fig. <ref>, demonstrating a situation where the Padé approximation exhibits non-physical numerical artifacts at about ω=± 110 meV.
By definition, the spectral function from NAC is always positive, making the method preferable for analytic continuations of simple Lorentzian-peak-shaped spectral functions, which is almost always the case for isotropic ME calculations.
The second advantage is that NAC is able to analytically continue even in difficult cases where the Padé approximation fails entirely. Most common for isotropic calculations at very low temperatures, the computation of the Padé interpolants will run into division by zero issues.
The reason for this is that at low temperatures, the ME calculation requires a lot of Matsubara points, which then enter the Padé approximation and significantly increase the probability of failure.
This can be appreciated in Fig. <ref> for an isotropic calculation of MgB_2 and LaBeH_8 at a low temperature of T=0.05 K. NAC is made numerically much more stable by introducing quad precision arrays in the Schur algorithm of the method. We tested a similar quad precision approach for the Padé approximation; however, the numerical cost becomes much larger, approximately ten times that of NAC. The reason for this is the efficiency of the Pick criterion, which allows NAC to require far fewer Matsubara frequencies. However, even when using the same number of Matsubara frequencies for both methods, i.e. by not using the Pick criterion, NAC has never run into division by zero issues in our testing, as numerical errors only appear in the form of the above-mentioned satellite peaks.
The numerical costs of NAC using quadruple precision are very similar to those of the Padé approximation using double precision. We reached this conclusion by comparing the runtimes of anisotropic calculations, as isotropic calculations are too brief to provide meaningful comparisons. However, it's important to note that the runtimes of anisotropic calculations depend highly on the chosen 𝐤-grid, so the following estimates are intended to give a general sense of the performance. In our calculations for MgB_2, we observed typical CPU runtimes of approximately 50 seconds per temperature for NAC with quadruple precision, comparable to the runtimes for Padé approximation calculations using double precision. In contrast, using quadruple precision for the Padé approximation significantly increases the computation time to around 500 seconds.
§ CONCLUSION
In this work, we demonstrate a method to obtain
real-frequency properties from the analytically continued Nambu-Gor'kov matrix within the framework of Migdal-Eliashberg (ME) theory for superconductivity.
By leveraging the symmetry properties of the Green's function, our method requires only two analytic continuations to obtain all components in real frequency space: one for the normal and one for the auxiliary Green's function. This significantly broadens the scope for applying a variety of analytic continuation techniques that were previously inaccessible in the Migdal-Eliashberg framework, retaining full information of the real-frequency complex functions Δ_n𝐤(ω), Z_n𝐤(ω), and χ_n𝐤(ω) explicitly. Our generalized workflow opens up the possibility for employing advanced methods like MaxEnt, stochastic analytic continuation, genetic algorithms, and causal projection in the modeling of superconducting materials, thereby enhancing the applicability and accuracy of the Migdal-Eliashberg formalism.
In particular, as a very powerful method for analytic continuation, we implemented the NAC into this framework.
Compared to the standard Padé approximation, our method offers two key advantages: (i) it effectively avoids non-physical negative values in the spectral function, and (ii) it provides improved numerical stability, enabling accurate analytic continuations at significantly lower temperatures than previously achievable. Despite these enhancements, the NAC method maintains comparable CPU runtimes, thanks to the efficiency of the Pick criterion and the streamlined analytic continuation workflow we introduced.
Using NAC with an unoptimized θ_M(z) parameter performs well for isotropic Migdal-Eliashberg calculations at lower frequencies. However, minor satellite peaks can emerge at higher frequencies due to random oscillations, indicating the need for careful optimization of the θ_M(z) parameter. Likewise, optimization can be necessary to mitigate oscillatory behavior and ensure accurate results when dealing with spectral functions from anisotropic calculations.
§ DATA AVAILABILITY
The authors confirm that the data supporting the findings of this study are available within this article. The updated and newly developed code routines can be obtained upon request and will also be made available within a future EPW release.
We thank E. Gull, E. R. Margine, and H. Mori for their insightful discussions. Calculations have been performed on the dCluster and the lCluster of the Graz University of Technology and on the Vienna Scientific Cluster (VSC) under Project 71754. PNF acknowledges the São Paulo Research Foundation (FAPESP) under Grants 2020/08258-0 and 2021/13441-1.
§ COMPUTATIONAL DETAILS
DFT and DFPT calculations were performed using the Quantum ESPRESSO (QE) software, employing norm-conserving Vanderbilt (ONCV) pseudopotentials <cit.> with a PBE-GGA exchange-correlation potential.
For Fm3m-LaBeH_8 at 100 GPa, we used a 12×12×12 Monkhorst-Pack 𝐤-grid, a 6×6×6 Monkhorst-Pack 𝐪-grid, a kinetic energy cutoff of 80 Ry, a smearing of 0.01 Ry, an electronic convergence threshold of 10^-10 Ry, and a phonon self-consistency threshold of 10^-14 Ry.
After that, we used the Wannier implementation of EPW to interpolate electron-phonon matrix elements onto dense 𝐤- and 𝐪-grids. We used the "selected columns of the density matrix" (SCDM) method <cit.> to automatically wannierize our electronic band structure, using a frozen window of about [-14 eV, +5 eV] around the Fermi energy E_F. For the SCDM entanglement, we use an error function with the two parameters σ=5 and μ_c at the Fermi energy E_F. Then for the solution of the isotropic full-bandwidth ME equations, we used coarse 𝐤- and 𝐪-grids of 6×6×6 interpolated onto 30×30×30 fine grids. We set the lower boundary of phonon frequency to 10 cm^-1, the width of the Fermi surface window to 1 eV, and the imaginary convergence threshold to 10^-5 Ry, all for a Morel-Anderson pseudopotential of μ^*=0.16. The ME equations were solved while updating the chemical potential to keep the particle number constant, which requires a comparably high-frequency cut-off of 10 eV for convergence. We used sparse sampling and analytically continued with the Padé approximation and NAC simultaneously, starting at a temperature of 60 K and increasing in steps of 10 K.
For MgB_2, we used 80 Ry for the kinetic energy cutoff, 24^3 Monkhorst-Pack 𝐤-grid, and a Methfessel-Paxton first-order spreading <cit.> of 0.02 Ry for Brillouin zone integration. The dynamical matrices and the variations of the self-consistent potentials were obtained on a 6^3 Monkhorst-Pack 𝐪-grid with a phonon self-consistency threshold of 10^-16 Ry.
The electronic wave functions for the Wannier interpolation were calculated on a Γ-centered uniform 𝐤-grid of 6^3 points. We used 5 Wannier functions for describing the band structure of MgB_2 around the Fermi level: two B-p_z-like states and 3 sp^2 hybridized orbitals associated with the Mg atom.
To solve the Eliashberg equations for MgB_2, we adopted the Fermi-surface restricted approximation <cit.> to keep our calculations consistent with the approach adopted in Ref. <cit.>. We interpolated the electron-phonon matrix elements onto fine Γ-centered uniform grids containing 60^3-𝐤 and 30^3-𝐪 points. The Matsubara frequency cutoff is set to 1 eV, approximately 10 times the maximum phonon frequency. The smearing parameters in the Dirac δ functions for electrons and phonons are set to 100 meV and 0.5 meV, respectively. The Pick criterion was used to determine the optimal number of Matsubara points, and the real frequency broadening of NAC was set to zero.
For the Coulomb pseudopotential μ^* of MgB_2, we utilized the μ^* calculated in Ref. <cit.> according to the Morel-Anderson approximation at the GW level. To rescale the value of μ^* used for solving the Allen-Dynes equation in Ref. <cit.> to be used in the Eliashberg equation in the present work, we used the simple rule <cit.>
1/μ^*_Eliashberg = 1/μ^*_AD + ln(ω_ph/ω_c),
where ω_ph is the maximum of the phonon spectrum. By doing this, we obtained a μ^* of 0.11.
§ EPW INPUT FLAGS
We have implemented three new input flags that are all written into the input file of the EPW calculation. Although not necessary, it is good practice to perform analytic continuations using a restart calculation. We require the imaginary ME solution, so "limag=.true." has to be set. Then, to enable the NAC calculation, use the logical flag "lneva=.true." (default .false.), which can also be done in the same run as any other analytic continuation method. By default, the Pick criterion will be used to determine the optimal number of Matsubara points. However, if that is not desirable one can disable the criterion with "lpick=.false." (default .true.), where all Matsubara frequencies are used instead. Finally, we have implemented a flag for the real frequency broadening η (in eV) of NAC, called "neva_broad". By default, a small broadening of "neva_broad=0.001", or 1 meV, is used.
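For convenience, a schematic fragment of the relevant portion of an EPW input namelist is reproduced below. This is an illustrative sketch only: the flag values simply restate the defaults and requirements described above, and all other (restart, interpolation, and convergence) settings are elided.

  &inputepw
    ! ... usual restart and interpolation settings ...
    limag      = .true.     ! imaginary-axis Migdal-Eliashberg solution (required)
    lneva      = .true.     ! enable the NAC analytic continuation
    lpick      = .true.     ! default: Pick criterion sets the number of Matsubara points
    neva_broad = 0.001      ! real-frequency broadening eta in eV (default: 1 meV)
  /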
|
http://arxiv.org/abs/2409.02089v1 | 20240903174113 | Maximum Shannon Capacity of Photonic Structures | [
"Alessio Amaolo",
"Pengning Chao",
"Benjamin Strekha",
"Stefan Clarke",
"Jewel Mohajan",
"Sean Molesky",
"Alejandro W. Rodriguez"
] | physics.optics | [
"physics.optics"
] |
[email protected]
Department of Chemistry, Princeton University, Princeton, New Jersey 08544, USA
Equal contribution
[email protected]
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Equal contribution
Department of Electrical and Computer Engineering, Princeton University, Princeton, New Jersey 08544, USA
Department of Operations Research and Financial Engineering, Princeton University, Princeton, New Jersey 08544, USA
Department of Electrical and Computer Engineering, Princeton University, Princeton, New Jersey 08544, USA
Department of Engineering Physics, Polytechnique Montréal, Montréal, Québec H3T 1J4, Canada
Department of Electrical and Computer Engineering, Princeton University, Princeton, New Jersey 08544, USA
§ ABSTRACT
Information transfer through electromagnetic waves is an important problem that touches a variety of technologically relevant application areas, including computing, telecommunications, and power management.
Prior attempts to establish limits on optical information transfer have focused exclusively on waves propagating through vacuum or a known photonic structure (prescribed wave sources and receivers).
In this article, we describe a mathematical theory that addresses fundamental questions relating to optimal information transfer in photonic devices.
Combining the mathematics of information theory, wave scattering, and optimization theory, we derive a method to compute upper bounds on the maximum Shannon capacity that may be achieved by structuring senders, receivers, and their surrounding environment.
This approach provides a means to understand how material selection, device size, and general geometrical features impact power allocation, communication channels, and bit-rate in photonics.
Arbitrary structuring provides a means to optimize the power requirements and channel capacities for communication and leads to a non-convex problem that is significantly more difficult to bound than its fixed structure counterpart, which is convex and thus satisfies a known “water-filling” solution.
We derive a geometry-agnostic convex relaxation of the problem that elucidates fundamental physics and scaling behavior of Shannon capacity with respect to device parameters, as well as the importance of structuring for enhancing capacity.
We also show that in regimes where communication is dominated by power insertion requirements, maximum Shannon capacity maps to a biconvex optimization problem in the basis of singular vectors of the Green's function.
This problem admits analytical solutions that give physically intuitive interpretations of channel and power allocation and reveals how Shannon capacity varies with signal-to-noise ratio.
Proof of concept numerical examples show that bounds are within an order of magnitude of achievable device performance in a variety of settings, with even better agreement in high signal-to-noise regimes, and successfully predict the scaling of performance with channel noise.
The presented methodologies have implications for the optimization of antennas, integrated photonic devices, metasurface kernels, MIMO space-division multiplexers, and waveguides to maximize communication efficiency and bit-rates.
Maximum Shannon Capacity of Photonic Structures
Alejandro W. Rodriguez
September 9, 2024
===============================================
§ INTRODUCTION
From the 19th century electrical telegraph to modern 5G telecommunications, electromagnetics has played an increasingly important role in human communication.
Paralleling this rapid improvement in physical technologies, our understanding of the fundamental meaning of information has also steadily improved.
Building upon earlier work by Nyquist <cit.> and Hartley <cit.>, Shannon demonstrated in two landmark papers <cit.> that not only is it possible to transmit information at a finite rate with arbitrarily small error through noisy channels, but that the maximum transmission rate—now known as the Shannon capacity—can also be explicitly calculated as a function of the channel bandwidth and signal-to-noise ratio.
Shannon's original derivation focused on scalar, time-varying signals subject to additive white Gaussian noise, but the principles he elucidated are universal and can be applied to more general physical settings, including electromagnetic wave phenomena <cit.>.
Information capacity results have been obtained for a variety of systems including fiber optics <cit.> and wireless communications networks <cit.>; in particular, a major area of research are multiple-input multiple-output (MIMO) systems that make use of spatial multiplexing, wherein the signal fields have spatial as well as temporal degrees of freedom <cit.>.
Communication in the presence of a scatterer has also been analyzed by choosing an appropriate optical characteristic proxy, e.g., the number of waves escaping a region <cit.> or the sum of field amplitudes at the receiver <cit.>.
These figures of merit are intimately related to other quantities of power transfer such as thermal radiation <cit.> and near-field radiative heat transfer <cit.> and establish intuitive connections between fields at the receiver and communication, but do not directly maximize Shannon capacity.
These prior results generally work well if the wave propagation medium is fixed, be it antenna design in free-space <cit.> or guided modes in a fiber <cit.>.
Yet, they do not capture the range of physics and possibilities of control and optical response achievable via (nano)photonic structuring.
Antennas at either the sender or receiver may be designed to enhance gain and directivity (therefore bit-rate), and similarly their environment may be structured to achieve greater communication and field enhancements (e.g., parabolic reflectors, metasurfaces, waveguides, fibers, etc.).
In fact, there is emerging consensus on the increasing importance of complex structured environments for electromagnetic communication in a wide range of contexts, including silicon photonics <cit.>,
on-chip optical interconnects <cit.>, and integrated image processing <cit.>.
Naturally, for each of these systems, the question of how much environmental structuring can improve their information transfer capabilities is key to quantifying future potential.
Recently, significant progress has been made in computational photonic design <cit.> by utilizing techniques from mathematical optimization to maximize field objectives over many degrees of freedom.
In particular, these designs' ability to concentrate power into selective wavelengths (e.g., resonant modes) and manipulate fields in counterintuitive ways has a wide array of applications and implications for photonic templates of communication: metasurface kernels <cit.>, integrated resonators <cit.>, and channel engineering <cit.>.
In this work, we leverage recent developments in photonic optimization to present a unified framework for investigating the extent to which structured photonic devices may enhance information transfer. First, we show how the structure-dependent, non-convex problem of maximizing information transfer can be relaxed into a shape-independent, convex problem to arrive at a bound on Shannon capacity subject to drive-power and physical wave constraints derived from Maxwell's equations.
Second, we show that in regimes where power requirements are dominated by insertion impedance (i.e., internal resistance or contact impedance in the driving currents), the maximization of the Shannon capacity can be significantly simplified in the basis of singular vectors of the Green's function, written in terms of a single signal-to-noise free parameter, and further relaxed to a biconvex problem that admits analytical solutions.
We present several illustrative examples where we compare these bounds to inverse designs and show that not only do the bounds predict trends but are within an order of magnitude of achievable performance for a wide range of signal-to-noise ratios.
Bounds on the Shannon capacity provide performance targets for communication and elucidate how information capacity scales with key device parameters (e.g., material choices, device sizes, or bandwidth).
Even infinitely large spaces are guaranteed to have decaying singular values by the compactness of the Green's function <cit.>, meaning that only a finite number of channels will be useful in transferring information.
Additionally, the fundamental limitations encoded in Maxwell's equations (spectral sum rules, finite speed of light, space-bandwidth products, etc.) dictate a finite limit on the ability to concentrate power over selective wavelengths (e.g., via resonances) or propagate fields over a given distance <cit.>. The proposed method paves the way for understanding how such trade-offs affect electromagnetic information transfer.
§ THEORY
In this section, we begin with a brief review of several defining relations of information capacity that pertain to wave communication. We show how the standard formula for MIMO communication can be derived from the fundamental definition of the Shannon capacity, subject to a constraint on the magnitude of the input signal.
This constraint manifests electromagnetically as a constraint on input current amplitudes; we show how we can also derive a constraint with respect to drive power, connecting to prior work on vacuum information transfer in the context of antenna design <cit.>.
By incorporating the freedom to structure senders, receivers, and their surrounding environments, we formulate a non-convex structural optimization problem that bounds the maximum Shannon capacity over any photonic structure.
We show how the problem can be relaxed to a convex optimization problem or, alternatively, to a biconvex problem on the channel capacities as given by the electromagnetic Green's function.
We consider a MIMO system with input signal x∈ℂ^n and output signal y∈ℂ^m related by
y = H x + n
where H ∈ ℂ^{m×n} is the channel matrix and the channel has additive complex Gaussian noise n with zero mean and covariance 𝔼(n n^†) = N𝕀.
The channel matrix and noise prescription relate “input" signals x to “output" signals y, which are viewed as random variables given the randomness of noise and the probabilistic argument used by Shannon to derive the Shannon capacity <cit.>. Communication is possible when x and y are dependent: an observation of y changes the a posteriori probability for any given x. Denote the joint probability distribution of x and y as γ( x, y); the marginal and conditional probability distributions are given by
γ_x(x) = ∫ γ(x, y) dy
γ(x|y) = γ(x, y) / γ_y(y)
and symmetrically in y.
The differential and conditional differential entropy of x can now be written as
ℋ(x) = ∫ γ_x(x) log_2(1/γ_x(x)) dx, and
ℋ(x|y) = ∫ γ_y(y) ∫ γ(x|y) log_2(1/γ(x|y)) dx dy.
ℋ(x) is a continuous analog of the discrete information entropy which encodes the level of uncertainty in x
and is a measure of the information gained upon observation of x; ℋ(x|y) in turn is the expected entropy of x conditioned on observation of y <cit.>.
Their difference is the mutual information between x and y, denoted ℐ(x; y):
ℐ(x; y) = ℋ(x) − ℋ(x|y) = ℋ(y) − ℋ(y|x).
Intuitively, ℐ(x; y) is the expected reduction in uncertainty (and hence gain in information) of x upon observation of y. The second equality follows from Eq. (<ref>) and indicates that ℐ(x; y) is symmetric between x and y; note that for our MIMO channel (<ref>) the second term is simply ℋ(y|x) = ℋ(n).
Shannon's noisy coding theorem states that a tight upper limit on the error-free information transfer rate across the channel achievable via encoding is the Shannon capacity C = sup_{γ_x} ℐ(x; y) <cit.>.
Over a channel with continuous signals and finite additive noise as in (<ref>), C diverges without further constraints on the magnitude of x:
intuitively, a larger signal magnitude makes the corruption due to the additive noise n proportionally smaller. Without further details of the underlying physics of the channel, this is addressed by enforcing a cap on the average magnitude of x, which is equivalent to a constraint on the covariance Q of γ_x:
𝔼_{γ_x}[x^† x] = Tr(𝔼_{γ_x}[x x^†]) ≡ Tr(Q) ≤ P.
The Shannon capacity of our MIMO system is thus
C(H, P) = sup_{γ_x, Tr(Q) ≤ P} ℐ(x; y)
where the supremum is taken over all possible probability distributions γ_x(x) for x.
Given (<ref>), one can show that among all γ_x with a particular covariance Q, ℐ(x; y) is maximized by the corresponding Gaussian distribution with covariance Q <cit.>.
Thus, when solving (<ref>) we need only consider Gaussian distributions, leading to the well-known expression for the Shannon capacity per unit bandwidth C for a MIMO system <cit.>
C(H, P) = max_{Q ≻ 0, Tr(Q) ≤ P}  ℋ(y) − ℋ(n)
        = max_{Q ≻ 0, Tr(Q) ≤ P}  log_2 det(H Q H^† + N𝕀) − log_2 det(N𝕀)
        = max_{Q ≻ 0, Tr(Q) ≤ P}  log_2 det(𝕀 + (1/N) H Q H^†).
This is a convex optimization problem and has an analytical solution that is often referred to as “water-filling" in the literature. For further mathematical details the reader is referred to <cit.>.
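As an editorial illustration of the water-filling solution (not part of the original text), the following minimal Python/NumPy sketch computes the fixed-channel capacity above by allocating power across the squared singular values of H divided by the noise level N; the function and variable names are ours.

import numpy as np

def waterfill(gains, budget):
    # Maximize sum_i log2(1 + gains[i]*p[i]) subject to sum(p) <= budget, p >= 0.
    p = np.zeros_like(gains, dtype=float)
    active = np.where(gains > 0)[0]
    if active.size == 0 or budget <= 0:
        return p
    inv = np.sort(1.0 / gains[active])           # channel "floor heights"
    for k in range(inv.size, 0, -1):
        mu = (budget + inv[:k].sum()) / k        # candidate water level with k channels
        if mu >= inv[k - 1]:                     # all k channels sit below the water level
            break
    p[active] = np.maximum(0.0, mu - 1.0 / gains[active])
    return p

def mimo_capacity(H, P, N):
    # Shannon capacity (bits per channel use) of y = Hx + n with Tr(Q) <= P, noise N*I.
    s = np.linalg.svd(H, compute_uv=False)
    gains = s**2 / N                             # per-mode SNR per unit input power
    p = waterfill(gains, P)
    return float(np.sum(np.log2(1.0 + gains * p)))

# Example: a random 4x4 channel matrix, unit noise, power budget P = 10
H = np.random.default_rng(0).standard_normal((4, 4))
print(mimo_capacity(H, P=10.0, N=1.0))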
We will now consider Shannon capacity in the physical context of information transfer via electromagnetic fields and describe how to incorporate the possibility of photonic structuring into bounds on the information transfer rate.
The schematic setup is illustrated in Fig. <ref>, where the input is a free current distribution j_i within a sender region S, and the output is the electric field e_R generated in a receiver region R.
Working in the frequency domain with dimensionless units of μ_0=ϵ_0=1 and vacuum impedance Z = √(μ_0/ϵ_0) = 1, the electric field e in a region obeys Maxwell's wave equation (∇×∇× − ϵ(r)ω^2) e(r; ω) = iω j_i(r), where ω is the angular frequency of the source current j_i, k = ω, and ϵ(r) is the permittivity distribution in space.
The total Green's function 𝔾_t(r, r', ϵ(·); ω) maps input currents j_i to electric fields via e(r) = (i/ω)∫ 𝔾_t(r, r') j_i(r') dr' and satisfies (∇×∇× − ϵ(r)ω^2) 𝔾_t(r, r', ϵ(·); ω) = ω^2 𝕀 δ(r − r'), where 𝕀 is the unit dyad.
We also denote by χ_D a constant susceptibility factor associated with a photonic device, related to the permittivity profile via ϵ(r) − 𝕀 = P_ϵ(r) χ_D, where P_ϵ(r) is a projection operator containing the spatial dependence of ϵ(r).
In vacuum there is no photonic structuring, so the vacuum Green's function is given by 𝔾_0 ≡ 𝔾_t(r, r', 𝕀; ω). We denote by 𝔾_t,RS the sub-block of the Green's function that maps currents in the source region to fields in the receiver region, so the channel matrix H = (i/ω)𝔾_t,RS <cit.>.
As shown in Fig. <ref>, we can modify _t and H through the engineering of a photonic structure ϵ( r) within the design region D, which may also incorporate either the S or R regions; the central question we seek to address is the extent to which photonic engineering may boost the Shannon capacity C.
Some constraints on the input currents j_i are needed for C to be finite; instead of the magnitude constraint prescribed in the more abstract formulation of (<ref>), we consider a budget P on the total drive power needed to maintain j_i:
−(1/2)ℜ[(i/ω) j_i^† 𝔾_t,SS j_i] + α|j_i|^2 ≤ P.
The total drive power in (<ref>) is comprised of two parts: the first term is the power extracted from _i by the electric field, which includes radiated power and material absorption: optical loss processes that are modeled explicitly via Maxwell's equations.
The second term is an “insertion cost” for the (abstract) drive producing the input currents, which we assume to be proportional to the squared norm of _i with proportionality factor α.
Depending on the specific drive mechanism, this term may represent, for example, the internal resistance of a power source or contact impedance leading into an antenna <cit.>.
Adapting the Shannon capacity expression (<ref>) to the specifics of our electromagnetic channel along with the drive power budget (<ref>), keeping in mind that we are also designing the propagation environment (including sender and receiver) via ϵ( r), we arrive at the following structural optimization problem
max_{ϵ(r)} C(ϵ, P) = max_{ϵ(r), γ_{j_i}} ℐ(j_i; e_R; ϵ)
s.t.  𝔼_{γ_{j_i}}[ −(1/2)ℜ{(i/ω) j_i^† 𝔾_t,SS(ϵ) j_i} + α|j_i|^2 ] ≤ P.
Given that (<ref>) can be written as a linear trace constraint on the covariance of γ_{j_i}, we only need to consider Gaussian distributions for the inner optimization, allowing us to rewrite (<ref>) more explicitly as
max_{ϵ(r), 𝕁_i}  log_2 det( 𝕀 + (P/(Nω^2)) 𝔾_t,RS(ϵ) 𝕁_i 𝔾_t,RS^†(ϵ) )
s.t.  Tr( (α𝕀_S + (1/(2ω)) ℑ𝔾_t,SS(ϵ)) 𝕁_i ) ≤ 1
      𝕁_i ≽ 0
      (∇×∇× − ϵ(r)ω^2) 𝔾_t(r, r'; ϵ) = ω^2 𝕀 δ(r − r')
where 𝕁_i is the covariance matrix 𝔼[j_i j_i^†]/P and ℑA ≡ (1/2i)(A − A^†).
This scaling by P allows the expression of the objective in terms of a familiar signal-to-noise ratio P/N. The two free parameters in Eq. (<ref>) are P/N and α.
We note that α formally has units of resistivity: in this article, we take these units to be Z λ. Similarly, P/N has units of ϵ_0 c λ^2 in 3D and ϵ_0 c λ in 2D.
Note that the optimization degrees of freedom ϵ( r) may be restricted to any sub-region of the domain (e.g., one may consider a prescribed set of antennas and instead seek only to optimize the region between them).
This problem is non-convex owing to the nonlinear dependence of 𝔾_t on ϵ(r).
In particular, the water-filling solution does not apply: a bound on Shannon capacity is given by the co-optimization of current covariance and the photonic structure.
In the following sections, we will derive bounds to (<ref>) via two different methods and discuss our results in the context of prior work. In the first method of Sec. <ref>, we will reinterpret (<ref>) from a maximization over ϵ(r) and γ_{j_i} to a maximization over joint probability distributions of j_i and the induced polarization current j_s = (ϵ(r) − 𝕀) 𝔾_t(ϵ(r)) j_i.
By relaxing physical consistency requirements on this joint probability distribution, we obtain a convex problem that can be solved with methods in semidefinite programming (SDP) or via a generalized waterfilling approach described in Appendix <ref>.
Second, in Sec. <ref>, we consider a regime in which the drive power (<ref>) is dominated by insertion losses in the sender, and utilize a convenient basis transformation to express the Shannon capacity in terms of the singular values of _t,RS, yielding an intuitive decomposition into orthogonal channels.
By relaxing Maxwell's equations to encode wave behavior directly into structurally agnostic bounds on these singular values, we derive a biconvex problem that for a limited number of constraints admits analytical solutions and physically intuitive interpretations of channel and power allocation.
Both relaxations can in principle be applied in the appropriate regimes of validity, but theoretically understanding their differences is challenging: while both encode the physics of Maxwell's equations, the first does so via power-conservation constraints on polarization currents while the second through the dependence of the channel matrix on its singular values.
Surprisingly, however, we find that the predicted bounds are close in regimes where they can be compared.
§.§ Convex Relaxation via Expected Power Conservation over Joint Distributions
For notational convenience we define the combined source-induced current j = [j_i; j_s] and a function that maps input currents to induced polarization currents, S(j_i; ϵ) = (ϵ − 𝕀) 𝔾_t(ϵ) j_i.
The set of physically feasible joint distributions is then given by
𝒫_physical = { γ_j | ∃ϵ ∀j_i:  γ_j(j_s|j_i) = δ(j_s − S(j_i; ϵ)) }.
Sampling any joint distribution γ_j ∈ 𝒫_physical gives j such that j_s is the induced polarization current in some ϵ(r) given j_i; furthermore, for a fixed γ_j this underlying ϵ(r) is consistent across all j_i.
Given j_s, the output field e_R and the extracted drive power P can be expressed solely using the vacuum Green's function
e_R = (i/ω) 𝔾_t,RS(ϵ) j_i = (i/ω)(𝔾_0,RS j_i + 𝔾_0,RD j_s),
P = −(1/2)ℜ[(i/ω) j_i^†(𝔾_0,SS j_i + 𝔾_0,SD j_s)] + α|j_i|^2.
Observe that the joint distribution optimization
max_{γ_j ∈ 𝒫_physical}  ℐ(j; e_R)
s.t.  𝔼_{γ_j}[ −(1/2)ℜ{(i/ω) j_i^†(𝔾_0,SS j_i + 𝔾_0,SD j_s)} + α|j_i|^2 ] ≤ P
is exactly equivalent to (<ref>): here, the (ϵ(r), γ_{j_i}) pairs have a one-to-one correspondence with γ_j ∈ 𝒫_physical, both producing the same output distribution γ_{e_R} and hence the same ℋ(e_R) and ℐ.
So far, (<ref>) is just a rewriting of (<ref>) with the complexity of the problem buried in the description of the set 𝒫_physical; we now relax (<ref>) by considering a superset of 𝒫_physical that has a simpler mathematical structure.
Instead of insisting that there exists some specific ϵ(r) for a member joint distribution as in (<ref>), we simply require that, on average, j_i and j_s satisfy structure-agnostic local energy conservation constraints (see Appendix <ref>):
𝔼_{γ_j}[ j_i^† 𝔾_0,DS^† 𝕀_k j_s − j_s^† 𝕌 𝕀_k j_s ] = 0
where 𝕌 ≡ (χ_D^{-†} − 𝔾_0,DD^†), 𝕀_k is the indicator function over some design sub-region D_k ⊂ D, and χ_D is the constant susceptibility of the design region, which does not depend on geometry.
This relaxed requirement gives us the relaxed distribution set
𝒫_relaxed = { γ_j | 𝔼_{γ_j}[ j_i^† 𝔾_0,DS^† 𝕀_k j_s − j_s^† 𝕌 𝕀_k j_s ] = 0 }
with an associated Shannon capacity problem
max_{γ_j ∈ 𝒫_relaxed}  ℐ(j; e_R)
s.t.  𝔼_{γ_j}[ −(1/2)ℜ{(i/ω) j_i^†(𝔾_0,SS j_i + 𝔾_0,SD j_s)} + α|j_i|^2 ] ≤ P.
Given that 𝒫_physical ⊂ 𝒫_relaxed, the optimum of (<ref>) bounds (<ref>) and hence (<ref>) from above. Furthermore, the restrictions of (<ref>) are just linear constraints on the covariance of γ_j; we can again restrict attention to Gaussian distributions to solve (<ref>), yielding
max_𝕁  log_2 det( 𝕀 + (P/(Nω^2)) [𝔾_0,RS 𝔾_0,RD] 𝕁 [𝔾_0,RS 𝔾_0,RD]^† )
s.t.  Tr( [ α𝕀_S + (1/(2ω))ℑ(𝔾_0,SS)   (1/(2ω))(1/(2i))𝔾_0,SD ;  −(1/(2ω))(1/(2i))𝔾_0,SD^†   0 ] 𝕁 ) ≤ 1
      P Tr( [ 0   −𝔾_0,DS^†𝕀_k ;  −𝕀_k𝔾_0,DS   𝕌𝕀_k ] 𝕁 ) = 0,  ∀ 𝕀_k
      𝕁 ≽ 0
This convex optimization problem over the joint covariance 𝕁 ≡ 𝔼[j j^†]/P is guaranteed to give a finite bound on the Shannon capacity given finite α, ℑχ_D > 0 (Appendix <ref>) and can be solved using methods in semidefinite programming.
Although (<ref>) is not strictly an SDP due to the log det objective, −log det(·) is the standard logarithmic barrier function for SDP interior-point methods and certain solvers support this objective natively (see for example SDPT3 <cit.>). In our numerical testing, we have found that using existing general convex optimization solvers for (<ref>) is computationally expensive for even wavelength-scale domains, and instead we solve (<ref>) using a “dual waterfilling” approach that is a generalization of the method used in <cit.> (see Appendix <ref>).
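To make the structure of the relaxation concrete, the following Python sketch (ours) sets up a small, real-valued toy instance of the resulting log-det program with CVXPY. The matrices G and A are random stand-ins for the blocks of the vacuum Green's function and the positive-definite drive-power matrix, and only a single trace inequality is shown; in an actual bound calculation these would be assembled from 𝔾_0 and the per-region energy-conservation matrices, and, as noted above, a dedicated dual-waterfilling solver is preferable at scale.

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 10                      # receiver / combined-current dimensions (toy values)
G = rng.standard_normal((m, n))   # stand-in for [G_{0,RS}  G_{0,RD}]
A = np.eye(n)                     # stand-in for the positive-definite power matrix
snr = 1.0e3                       # stand-in for P/(N*omega^2)

J = cp.Variable((n, n), PSD=True)                        # joint current covariance
objective = cp.Maximize(cp.log_det(np.eye(m) + snr * (G @ J @ G.T)))
constraints = [cp.trace(A @ J) <= 1]
# Energy-conservation constraints enter as extra linear traces, e.g.:
# constraints += [cp.trace(B_k @ J) == 0 for B_k in B_list]
prob = cp.Problem(objective, constraints)
prob.solve()                                             # conic solver, e.g. SCS
print("relaxed capacity bound:", prob.value / np.log(2), "bits")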
§.§ Insertion-Dominated Power Requirements and Singular-Value Decompositions
Although the formulation above rigorously includes drive-power constraints on the Shannon capacity to all orders in scattering by the photonic structure, it is both practically and pedagogically useful to consider a relaxation of Eq. (<ref>) that assumes drive-power considerations are dominated by insertion losses.
This yields a solvable biconvex problem that elucidates fundamental principles of power transfer and channel allocation and may prove accurate in cases where structuring (local density of states enhancement) does not significantly influence extracted power, e.g., a laser incident on a scatterer or communication in an electrical wire.
Mathematically, the contributions to the power constraint from 𝔾_t,SS are negligible compared to the insertion losses ∼ α‖j_i‖^2, which results in a current-amplitude constraint of the form ‖j_i‖^2 ≤ P/α.
We can simplify Eq. (<ref>) by decomposing the Green's function via the singular value decomposition 𝔾_t,RS = ∑_j σ_j μ_j ν_j^†.
We then define each μ_j, ν_j as the available “channels" for communication and σ_j as its corresponding channel strength <cit.>.
The compactness of 𝔾 ensures that inclusion of a finite number of channels
n_c approximates the operator to high precision (i.e., the singular values decay) <cit.>.
This operator maps unit-amplitude currents ν_j in S to fields e_{R,j} = σ_j μ_j in R.
Diagonalizing 𝔾_t,RS^†𝔾_t,RS = U^†Σ^2 U, where Σ^2 is a diagonal matrix of squared singular values, the Shannon capacity objective of Eq. (<ref>) may then be written as <cit.>
log_2 det( 𝕀 + (P/(Nω^2)) Σ U 𝕁_i U^†Σ ).
By Hadamard's inequality <cit.>, this is maximized when U 𝕁_i U^† is diagonal, so we bound Eq. (<ref>) by taking the degrees of freedom J_i to be the diagonal entries of U 𝕁_i U^†.
The structural optimization problem (<ref>) then takes the simpler form
max_ J, ϵ( r) ∑_i=1^n_clog_2 ( 1 + γ J_i σ_i^2 )
s.t. J ≥ 0
1^T J ≤ 1
(∇×∇× − ϵ(r)ω^2) 𝔾_t(r, r', ϵ(·); ω) = ω^2 𝕀 δ(r − r')
𝔾_t,RS(ω, ϵ(r)) = ∑_j^{n_c} σ_j μ_j ν_j^†,
where neglecting the work done by j_i allows writing the problem in terms of a single free parameter γ = P/(αω^2 N), and J is the vector of current amplitudes J_i.
The utility of this formulation is now apparent: for problems where the impact of “back-action" from the photonic environment on power requirements is negligible compared to losses in the drive currents, this relaxation provides a means to investigate Shannon capacity in terms of a “scale-invariant" SNR parameter γ.
For a known structure and therefore Green's function, (<ref>) can be solved via the water-filling solution <cit.>.
The highly nonlinear dependence of the singular values σ_i on ϵ( r), however, makes this problem non-convex. Numerical solutions of (<ref>) via gradient-based topology optimization, which relaxes the discrete optimization problem over ϵ( r) to a continuous counterpart <cit.>, can lead to useful designs, but do not provide bounds on the Shannon capacity over all possible structures. To obtain a bound on the Shannon capacity that is independent of geometry, one may relax (<ref>) by ignoring the precise relation between singular values and the Green's function and instead encode structural information by imposing the same energy conservation constraints of Eq. (<ref>) on the calculation of singular values. Thus, one may write
max_{J, σ^2}  ∑_{i=1}^{n_c} log_2( 1 + γ J_i σ_i^2 )
s.t. J_i ≥ 0, σ_i^2 ≥ 0, ∀ i ≤ n_c
∑_i J_i ≤ 1
∑_i σ_i^2 ≤ M_𝙵
σ_max^2 ≤ M_1
where M_𝙵, M_1 are bounds on the square of the Frobenius norm ‖𝔾_t,RS‖_𝙵^2 ≡ ∑_i σ_i^2 and of the spectral norm ‖𝔾_t,RS‖_2^2 ≡ σ_max^2 of the channel matrix, respectively.
Each of these bounds can be calculated in a structurally independent way while maintaining crucial information on the underlying physics of Maxwell's equations (See Appendix <ref>).
As long as constraints on σ_i^2 are bounds over all possible structures allowed in Eq. (<ref>), solutions to Eq. (<ref>) represent structure-agnostic limits on Shannon capacity between the S and R regions.
This objective is biconvex and therefore difficult to solve <cit.>.
While numerical methods to find global optima of biconvex functions exist (see for example <cit.>), existing implementations do not easily allow for this objective function (see for example the cGOP package). Development and application of techniques to solve for the global optimum of (<ref>) is a promising avenue of future work.
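As a simple numerical illustration of the biconvex structure (ours, not a method used in the original work), the sketch below performs alternating coordinate ascent on (J, σ²): each half-step is itself a one-dimensional water-filling problem. For brevity the spectral-norm cap M_1 is omitted. Note that this only produces a stationary point whose value depends on the initialization; it is a feasible value of the relaxed problem, not a certified global optimum, and hence not by itself a bound.

import numpy as np

def waterfill(gains, budget):
    # max sum_i log2(1 + gains[i]*p[i])  s.t.  sum(p) <= budget, p >= 0
    p = np.zeros_like(gains, dtype=float)
    active = np.where(gains > 0)[0]
    if active.size == 0 or budget <= 0:
        return p
    inv = np.sort(1.0 / gains[active])
    for k in range(inv.size, 0, -1):
        mu = (budget + inv[:k].sum()) / k
        if mu >= inv[k - 1]:
            break
    p[active] = np.maximum(0.0, mu - 1.0 / gains[active])
    return p

def alternating_ascent(gamma, n_c, M_F, iters=200, seed=0):
    # Coordinate ascent on sum_i log2(1 + gamma*J_i*sigma2_i); each subproblem
    # (over J for fixed sigma^2, and vice versa) is solved by water-filling.
    rng = np.random.default_rng(seed)
    sigma2 = rng.dirichlet(np.ones(n_c)) * M_F    # random feasible channel capacities
    J = np.zeros(n_c)
    for _ in range(iters):
        J = waterfill(gamma * sigma2, 1.0)        # optimal currents for fixed sigma^2
        sigma2 = waterfill(gamma * J, M_F)        # optimal capacities for fixed J
    return float(np.sum(np.log2(1.0 + gamma * J * sigma2)))

print(alternating_ascent(gamma=1.0, n_c=16, M_F=10.0))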
Surprisingly, however, incorporating the Frobenius norm bound M_𝙵 but excluding the maximum singular value bound M_1 makes (<ref>) amenable to closed-form solutions. As shown in Appendix <ref>, the optimal solution to this problem may be written as follows:
C(M_𝙵) = f_{n=n_opt} = n_opt log_2( 1 + γ'/n_opt^2 )
n_opt = ⌊ r√(γ') ⌋ or ⌈ r√(γ') ⌉,   if r√(γ') ≤ n_c
n_opt = n_c,   if r√(γ') ≥ n_c
where n_opt ≥ 1, r = argmax_{x∈[0,1]} [ x log_2(1 + 1/x^2) ] ≈ 0.505, and we have introduced the total signal-to-noise ratio γ' = γ M_𝙵; note that the floor and ceiling functions are a consequence of the number of populated channels being an integer allocation.
This solution corresponds to evenly allocating capacity and power among a subset of all channels.
Notably, r√(γ') ≤ 1 implies n_opt = 1, and r√(γ') ≥ n_c implies n_opt = n_c.
In the high-noise regime (i.e., SNR dominated by increasing channel capacities), the Shannon capacity is maximized by allocating all available channel capacity and current to a single channel.
As noise is lowered, the number of channels increases by integer amounts until the low-noise regime (i.e., SNR dominated by low noise, regardless of channel capacities) is reached, in which case every channel has capacity M_𝙵/n_c and power P/n_c.
In both cases, current allocations naturally satisfy the water-filling solution given their corresponding channel capacity allocation.
As shown in Appendix <ref>, asymptotic solutions in the low- and high-SNR regimes take the form
C(M_𝙵) → γ'/log 2   (γ' ≪ 1),
C(M_𝙵) → n_c log_2(γ'/n_c^2)   (γ' ≫ 1),
in agreement with the analysis above.
Intuitively, this solution is unrealistic in the low-noise regime:
the compactness of implies that singular values must decay, making even allocation of singular values among all channels impractical.
Regardless, this method provides a simple way to calculate limits on Shannon capacity given only a sum-rule on the Frobenius norm M_𝙵 of the channel matrix.
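For reference, a short Python sketch (ours) that evaluates this closed-form solution is given below; it computes r numerically, selects n_opt from the floor/ceiling candidates capped at n_c, and returns C(M_𝙵) in bits.

import numpy as np
from scipy.optimize import minimize_scalar

def capacity_bound_MF(gamma, M_F, n_c):
    # Closed-form C(M_F): power and channel capacity split evenly over n_opt channels.
    gamma_p = gamma * M_F                               # total SNR  gamma' = gamma*M_F
    # r = argmax_{x in (0,1]} x*log2(1 + 1/x^2), approximately 0.505
    r = minimize_scalar(lambda x: -x * np.log2(1.0 + 1.0 / x**2),
                        bounds=(1e-6, 1.0), method="bounded").x
    f = lambda n: n * np.log2(1.0 + gamma_p / n**2)
    t = r * np.sqrt(gamma_p)
    if t >= n_c:
        n_opt = n_c
    else:
        candidates = {max(1, int(np.floor(t))), min(n_c, max(1, int(np.ceil(t))))}
        n_opt = max(candidates, key=f)
    return f(n_opt), n_opt

# Example: n_c = 16 channels, M_F = 10, sweeping the SNR parameter gamma
for g in (1e-2, 1.0, 1e2, 1e4):
    C, n = capacity_bound_MF(g, 10.0, 16)
    print(f"gamma = {g:g}:  C = {C:.3f} bits,  n_opt = {n}")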
In the high-noise and low-noise regimes, it is straightforward to add the maximum singular value bound M_1 to this analysis; this solution is denoted C(M_𝙵, M_1).
If r√(γ')≤ 1 (the regime where n_opt = 1) a simple substitution of M_𝙵→ M_1 yields a bound.
In general, if M_𝙵/n_opt≤ M_1, the maximum singular value constraint is not active in the solution C(M_𝙵, M_1) and so it coincides with C(M_𝙵). The amenability of this rigorous formulation to analysis via the familiar language of channel capacity and power allocation makes this conceptual ground for further studies of the impact of structuring and SNR on optimal channel design.
§.§ Relation to Prior Work
The challenge of (<ref>) from an optimization perspective is the structural optimization over ϵ( r) which is both high-dimensional and non-convex. The main novelty of our results is the ability to account for the deliberate engineering of the propagation environment through such structural optimization, which is increasingly relevant given the rise of integrated photonics platforms. Mathematically, prior work on bounding structural optimization in photonics focused predominantly on objectives that are quadratic functions of the fields <cit.>; here we bound the Shannon capacity log objective using a relaxation with a probability distribution based interpretation that ties directly into the probabilistic proof of Shannon's noisy channel coding theorem <cit.>.
If the structure ϵ( r) and therefore the channel matrix _t,RS is fixed, the Shannon capacity is given by a convex optimization problem that can be solved via the water-filling solution. Prior work on Shannon capacity in electromagnetism have generally made such a simplifying assumption, either taking the propagation environment to be fully vacuum <cit.> or considering a fixed antenna structure in free space <cit.>.
Specifically, Refs. <cit.> investigated a planar antenna sheet surrounded by an idealized spherical shell receiver that captures all waves propagating through free space.
Ref. <cit.> calculated the Shannon capacity in vacuum between concentric sphere-shell sender and receiver regions: due to the domain monotonicity of the singular values of _0 <cit.>, this bounds the capacity between smaller sender-receiver regions completely enclosed within the larger sphere-shell.
Related works <cit.> have established sum-rule bounds on these vacuum singular values, establishing a qualitative connection between information transfer and the notion of maximizing field intensity at the receiver.
There are also differences in the exact constraint used to restrict the input current magnitude: Ref. <cit.> implicitly used a spectral norm constraint on the input currents, which is similar to the insertion-loss dominated case of <ref>. We explicitly impose a constraint on the total power required to maintain the input currents, including not just insertion loss but also the power going into propagating fields, similar to Ref. <cit.>. Refs. <cit.> also considered further restrictions on the antenna efficiency, defined as the ratio of radiated power to total power; Ref. <cit.> included constraints on the ratio of stored vs. radiated power in the antenna as a proxy for the operating bandwidth.
Our bound formulation (<ref>) given in the previous section is flexible enough to accommodate such additional constraints; given the conceptual richness that structural optimization adds to the problem, we have opted to only include the drive power constraint (<ref>) in this manuscript for simplicity.
§ ILLUSTRATIVE EXAMPLES
In the prior section, we derived two methods to compute upper bounds on the Shannon capacity between sender and receiver regions containing any possible photonic structure in either region or their surroundings.
As described in Section <ref>, the total drive power constraint Eq. <ref> can be considered in two regimes.
In one regime, handled by the direct convex relaxation, the extracted power (determined by Maxwell's equations) and the insertion power (as abstracted by a resistivity associated with driving the initial currents) are comparable:
‖𝔾_t,SS‖_2/(2ω) ∼ α.
In the other, treated by both the direct convex relaxation and the simpler biconvex problem, the cost of driving the initial currents dominates the power consumption (i.e., insertion-dominated): ‖𝔾_t,SS‖_2/(2ω) ≪ α.
In this section, we evaluate bounds in both of these contexts for several illustrative 2D configurations, showcasing the possibility of enhancing information transfer via photonic design and also revealing scaling characteristics with respect to material and design parameters.
We compute bounds both for out-of-plane (scalar electric field) polarization (TM) and in-plane (vector electric field) polarization (TE).
In the particular case of insertion-dominated power, we also compare the bounds with structures obtained via topology optimization, using an in-house automatic differentiable Maxwell solver written in JAX <cit.> to compute gradients of the Shannon capacity with respect to structural degrees of freedom.
Direct Convex Relaxation: We consider a scenario, shown schematically in Fig. <ref>A, involving sender (S) and receiver (R) square regions of size λ×λ with centers aligned along the x-axis and separated by a surface–surface distance of 1.2λ. A rectangular mediator region M of length L and width λ (along the y dimension) is placed between S and R. Bounds are computed by numerically solving (<ref>) with energy conservation constraints imposed globally for a specified design region, i.e., _k = _D.
To investigate the impact of structuring different regions on the Shannon capacity, Fig. <ref>B shows bounds for TM polarized sources, assuming a fixed impedance α=0.01 Zλ and SNR P/N=10^7 ϵ_0 c λ, as a function of L and for different combinations of designable S, M, and R regions. As L increases, the mediator eventually overlaps with and completely encloses the S and R regions.
The results reveal a striking feature: in contrast to designing the sender, structuring the receiver has by far the greatest impact on the Shannon capacity.
Intuitively, designing R allows large concentrations of the field e_R. Such density-of-states enhancements are further corroborated by the dependence of the bounds on the material susceptibility χ seen in Fig. <ref>C. In particular, bounds grow rapidly with |χ| for weak materials before saturating, and exhibit highly subdued scaling with respect to the material loss ℑ(χ), consistent with resonant enhancements also seen in prior works on photonic limits <cit.>: decreasing ℑ(χ) by two orders of magnitude yields an increase in C of less than a factor of two. The exact nature of this scaling behavior is the subject of future work.
The apparent ineffectiveness of structuring the sender to increase C stems from the implicit freedom associated with designing the input drive, which allows for arbitrary input currents j_i within the S region, capped in magnitude only by the finite drive power constraint. In particular, the field e_R produced by a free current source j_i in a structured S region containing a polarization current j_s is in fact equivalent to one produced by a corresponding “dressed" input current j_t = j_i + j_s in vacuum.
The imposition of a finite drive power requirement with an insertion power contribution does, however, create a distinction between these two scenarios. In the α→ 0 limit where there is no power cost to modifying the input current j_i, there is no advantage in structuring since non-zero j_s incurs further material losses compared to inserting j_t directly in vacuum. Conversely, in the insertion dominated regime α≫_t,SS_2/2ω increasing j_i has a steep cost, making it more advantageous to achieve larger field enhancements by increasing j_t via structuring. Clearly, the relative importance of insertion- versus extraction-dominated power as determined by α plays a central role in determining how C should be optimized.
Figure <ref>D shows bounds on C as a function of 1/α for both TM and TE sources, allowing for structuring only in R but fixing the distance between the S and R regions to be 1.2λ (see inset). In the extraction dominated regime (α→ 0), the Shannon capacity exhibits a logarithmic dependence on α attributable to “dark currents” which do not radiate into the far field as input signals.
Mathematically, these dark currents lie within the nullspace of ℑ(𝔾_t,SS), so the extracted power contribution (1/2ω) j_i^† ℑ(𝔾_t,SS) j_i to the drive power constraint (<ref>) is 0. While dark currents do not radiate nor consume power, they nonetheless produce evanescent fields that are picked up by a receiver at a finite distance, allowing for information transfer.
The dark current amplitudes are limited by α, so for fixed P/N their contribution to the covariance is inversely proportional to α, resulting in a logarithmic dependence of C on 1/α via the log_2 objective. This leads to a logarithmic divergence in Shannon capacity as α→ 0, even for finite power. As α increases, C decreases due to power restrictions on input currents, eventually reaching the insertion dominated regime (α→∞) where the extracted power is dwarfed by the insertion impedance term αj_i^2.
Neglecting the extracted power, the resulting power constraint on the input currents, ‖j_i‖^2 ≤ P/α, leads to a problem that can be bounded via both the direct convex relaxation and the biconvex relaxation of (<ref>), as illustrated below.
Insertion-Dominated Power:
In this section we study the insertion-dominated regime (α ≫ ‖𝔾_t,SS‖_2/2ω), where one can also apply the biconvex relaxation given in Eq. (<ref>).
As described in Section <ref>, this simplification permits a definition of channels as the singular vectors of 𝔾_t,RS, with the relevant physics encoded via sum-rules on the associated singular values. This leads to a simplified, biconvex optimization problem (<ref>) with two analytical solutions: C(M_𝙵) maximizes the Shannon capacity subject to a bound M_𝙵 on the maximum sum of all channel capacities (Frobenius norm), and C(M_𝙵, M_1) additionally incorporates a bound on the spectral norm M_1, i.e., the maximum channel capacity of the Green's function. The resulting bounds depend only on the single universal total signal-to-noise ratio parameter γ' = (P/(αω^2 N)) M_𝙵.
We consider a simplified scenario shown schematically in Fig. <ref>, consisting of discrete S and R regions comprising 4× 4 vacuum pixel arrays separated by a designable contiguous M mediator of material χ. Hence, there are 16 orthogonal singular vector channels available for communication.
The figure shows bounds C(M_𝙵) and C(M_𝙵, M_1) alongside comparable inverse designs.
Notably, the tightest bounds are found to be within or near an order of magnitude of achievable performance for all studied noise regimes, with particularly realistic performance for high SNR.
The additional tightness in C(M_𝙵, M_1) compared to C(M_𝙵) for high-noise regimes motivates extending the domain of this solution via further analysis.
The corresponding channel occupation n_opt and power allocation P_i = 1/n_opt for C(M_𝙵) (shown above the main plot) demonstrate the nonlinear dependence of these two values on noise.
Even allocations f_n for n ∈ [1,16] are also shown, demonstrating how the solution C(M_𝙵) is a pointwise maximum of these uniform allocations.
As discussed earlier, higher SNR pushes allocation of J_i and channel capacity to a single channel leading to logarithmic dependence of Shannon capacity on γ', while low SNR pushes even allocations of J_i and channel capacity across all channels, leading instead to linear scaling. This is consistent with the results presented in Fig. <ref>, where large and small 1/α result in logarithmic and linear scaling of Shannon capacity, respectively. The biconvex formulation, however, suggests that while it may not be physically possible to optimize for any desired channel matrix, the SNR determines a target number of high-capacity channels.
Accordingly, high SNR structures display higher symmetry than the seemingly random structure at low SNR owing to maximizing information transfer among one or many channels, respectively.
Finally, the figure also shows bounds C_convex pertaining to the convex relaxation of Eq. (<ref>) and obtained by enforcing large α and P so as to reach the insertion-dominated power regime. The latter is found to be looser than C(M_𝙵, M_1) in the low SNR regime and tighter in the high SNR regime, showcasing the versatility of the convex formulation and, despite its complexity, its general agreement with the conceptually simpler biconvex relaxation.
§ CONCLUDING REMARKS
In this article, we formalized an appropriate figure of merit for maximizing electromagnetic communication subject to arbitrary photonic structuring, and connected it to existing work restricted to vacuum propagation and/or fixed antenna geometries.
The proposed framework rigorously connects the mathematics of electromagnetic wave propagation in photonic media to key information-theoretic quantities such as the mutual information and the Shannon capacity.
We derived two possible relaxations of this problem (one valid in general settings and the other in regimes where insertion losses dominate) that encode wave physics from Maxwell's equations along with drive-power considerations, to arrive at structure-agnostic upper bounds on Shannon capacity.
These bounds not only yield quantitative performance targets, but also critical scaling information with respect to device size, SNR, and material strength. In particular, our numerical evaluation of illustrative examples has highlighted the importance of optimizing the receiver, with more than an order of magnitude potential improvement of the Shannon capacity. The bounds have weak scaling with material loss, a fact with both theoretical and practical significance and likely related to similar sub-linear (logarithmic) scaling with loss seen in prior works involving distributed sources such as in near-field radiative heat transfer <cit.>.
When radiated power costs are dominant, the Shannon capacity displays logarithmic growth as α→ 0 driven by nonradiative dark currents.
As α is increased and insertion impedance restricts input currents, we observe capacity scaling with SNR transitioning from linear for low SNR to logarithmic for high SNR, alongside a gradual increase in the number of optimal sub-channels used. Furthermore, topology-optimized structures are shown to generally achieve performance within an order of magnitude of our bounds across a wide range of SNR.
Looking ahead, we believe there is significant promise for further research into photonics design for information transfer.
The results in this manuscript were computed in 2D and serve mainly as a proof of concept; we anticipate that our formulation may be adapted to study the information capacity of practical devices of increasing interest such as on-chip optical communication and image processing using integrated photonics.
Doing so would also require improving computational techniques for evaluating the bounds at larger scale and in 3D, along with the development of new inverse design schemes specifically for optimizing information transfer.
Our current results are also derived at a single frequency; further work remains to be done on generalizing the bounds for finite bandwidths, perhaps in the spirit of spectral sum rules <cit.> or delay-bandwidth products <cit.>. Another promising line of study is to incorporate macroscopic quantum electrodynamics <cit.> into the formulation for investigating information transfer in quantum communication systems.
Funding: we acknowledge the support by a Princeton SEAS Innovation Grant and by the Cornell Center for Materials Research (MRSEC).
S.M. acknowledges financial support from the Canada First Research Excellence Fund via the Institut de Valorisation des Données (IVADO) collaboration.
The simulations presented in this article were performed on computational resources managed and supported by Princeton Research Computing, a consortium of groups including the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center and Visualization Laboratory at Princeton University.
§ PHYSICAL BOUNDS ON GREEN'S FUNCTION CHANNEL CAPACITIES
We will now show how the Lagrange dual relaxation as described in Refs. <cit.> can be utilized to obtain limits on the Frobenius norm and largest singular value of the total Green's function under arbitrary structuring.
Ultimately, we seek to place sum-rule bounds on the squares of the singular values of _t,RS.
These limits are combined with solutions to Eq. (<ref>) to find limits on the Shannon capacity.
Unlike the prior algebraic relaxations exploiting passivity (e.g., <cit.>), these incorporate significantly richer physics through the enforcement of local energy conservation constraints and thus yield tighter limits.
Note that the trace of 𝔾_t,RS^†𝔾_t,RS can be evaluated as ∑_j v_j^†𝔾_t,RS^†𝔾_t,RS v_j in the basis of the singular vectors of 𝔾_0,RS = ∑_j s_j u_j v_j^†.
Defining the vacuum “source" fields in the receiver region S_j ≡ 𝔾_0 v_j = s_j u_j with amplitude s_j^2 and the polarization field in the photonic structure p_j ≡ (χ(r)^{-1} − 𝔾_0)^{-1} S_j ≡ 𝕋 S_j <cit.>, we find
v_j^†𝔾_t,RS^†𝔾_t,RS v_j = v_j^†𝔾_0,RS^†𝔾_0,RS v_j + p_j^†𝔾_0,RD^†𝔾_0,RD p_j + 2ℜ( v_j^†𝔾_0,RS^†𝔾_0,RD p_j )
where _0,RD acts on polarization fields in the design region D (which may contain any region) and gives fields in the receiver region.
The first term describes the square of the j-th singular value of the vacuum Green's function s_j^2, while the rest describe the contribution from the photonic structure.
In this picture, p_j, S_j are vectors in the design region, while v_j are vectors in the source region.
Generalized energy conservation scalar identities <cit.> can be written as
S_j^† 𝕀_v p_k − p_j^†(χ^{-†} − 𝔾_0,DD^†) 𝕀_v p_k = 0,  ∀ j,k (see Appendix <ref>), where 𝕀_v is the projection onto any spatial subdomain.
For the purposes of clarity, we take 𝕀_v → 𝕀, noting that constraints will be enforced in every region for the numerical calculation of bounds.
From these components, quadratically constrained quadratic programs (QCQPs) can be written to maximize different objectives related to the singular values of _t,RS, from which bounds on their optimal values can be computed with duality relaxations as described below.
Frobenius norm ‖𝔾_t,RS‖_𝙵^2 = Tr(𝔾_t,RS^†𝔾_t,RS) = ∑_i σ_i^2 ≤ M_𝙵:
max_{p_j}  ∑_j v_j^†𝔾_t,RS^†𝔾_t,RS v_j
s.t.  S_j^† p_k − p_j^†(χ^{-†} − 𝔾_0,DD^†) p_k = 0,
∀ j,k
L_2 norm ‖𝔾_t,RS‖_2^2 = σ_max^2 ≤ M_1:
max_{v, p}  v^†𝔾_t,RS^†𝔾_t,RS v
s.t.  v^†𝔾_0,DD^† p − p^†(χ^{-†} − 𝔾_0,DD^†) p = 0,
      v^†v = 1,
∀ j ≤ k
The first problem is simply maximizing the Frobenius norm of _t,RS subject to energy conservation constraints. In the second problem, we maximize the largest singular value by co-optimizing the source currents v and the polarization currents p and enforcing the singular vector is normalized.
We note that in all calculations, “cross"-constraints between sources j≠ k are not enforced for computational efficiency.
This relaxes the problem: full incorporation of these constraints is expected to tighten limits.
To compute limits on these problems, we note that the Lagrangian ℒ of a QCQP with primal degrees of freedom ψ is given by
ℒ(ψ, λ) = −ψ^†𝔸(λ)ψ + 2ℜ( ψ^†𝔹(λ) S ) + S^†ℂ(λ) S
where 𝔸, 𝔹, and ℂ represent the quadratic, linear, and constant components of the Lagrangian and contain both objective and constraint terms.
The Lagrange dual function 𝒢 and its derivatives can be written, in terms of ψ_opt ≡ 𝔸^{-1}𝔹S and for positive-definite 𝔸, as
𝒢(λ) = S^†( 𝔹^†𝔸^{-1}𝔹 + ℂ ) S,
∂𝒢/∂λ = 2ℜ( ψ_opt^† (∂𝔹/∂λ) S ) − ψ_opt^† (∂𝔸/∂λ) ψ_opt
+ S^† (∂ℂ/∂λ) S.
The positive-definiteness of 𝔸 ensures that the dual problem has feasible points with finite objective values.
We must therefore show that for each primal problem, there exist dual feasible Lagrange multipliers λ such that 𝔸 is positive-definite.
In the case of (1), we simply note that ℑ(𝔾_0,DD) is positive semidefinite <cit.>.
Therefore, for a lossy design material (ℑχ > 0), we can increase the Lagrange multiplier of the imaginary part of this constraint to ensure that 𝔸 is positive definite at some λ. All other multipliers can be initialized to zero.
In the case of (2), the same strategy can be employed to make 𝔸 positive-definite in the p, p sub-block.
To ensure positive-definiteness in the v, v sub-block, the semidefinite quadratic constraints v_i^† v_i = 1 can be employed.
Overall, the existence of these points proves the existence of a bound on their respective primal problems.
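As a small illustration (ours), the dual value 𝒢(λ) above can be evaluated directly once the Lagrangian blocks have been assembled at a given multiplier point; the sketch below assumes A, B, C are Hermitian-consistent NumPy arrays built from the objective and constraint terms, and S is the source (singular-vector) field.

import numpy as np

def qcqp_dual_value(A, B, C, S):
    # Evaluate G(lambda) = S^dag (B^dag A^{-1} B + C) S for positive-definite A(lambda).
    if np.linalg.eigvalsh(A).min() <= 0:
        return np.inf                         # not a dual-feasible multiplier point
    psi_opt = np.linalg.solve(A, B @ S)       # stationary primal field A^{-1} B S
    val = S.conj().T @ (B.conj().T @ psi_opt + C @ S)
    return float(np.real(val))                # dual values are real for Hermitian blocks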
§ PROOF THAT THE CONVEX RELAXATION YIELDS A FINITE BOUND
The convex relaxation approach outlined in Section <ref> gives a bound written as the solution to maximizing an objective function of the form log det(𝕀 + ℍ𝕁ℍ^†) subject to the PSD constraint 𝕁 ≽ 0 and various linear trace constraints on 𝕁.
Upon first sight, it may be unclear whether such an optimization has a finite maximum: one can imagine that the eigenvalues of ℍ𝕁ℍ^† ≽ 0 may increase without bound and lead to a diverging log det, and indeed this is what happens without any linear trace constraints.
The key for a finite maximum of (<ref>) is the existence of a linear trace constraint of the form Tr[A𝕁] ≤ P_A where A ≻ 0 and P_A > 0, i.e., the program
max_𝕁  log det(𝕀 + ℍ𝕁ℍ^†)
s.t.  Tr[A𝕁] ≤ P_A
      𝕁 ≽ 0
has a finite maximum.
To see this, observe that given strictly positive-definite A, it must be the case that Tr[A𝕁] > 0 for all positive semidefinite 𝕁 ≠ 0.
Therefore, in the vector space of Hermitian matrices, the intersection between the cone of semidefinite matrices and the halfspace given by Tr[A𝕁] ≤ P_A is bounded and the log objective cannot diverge.
In problem (<ref>), we can use a simple sum of the drive power constraint (<ref>) and resistive power conservation over the entire design domain [the imaginary part of (<ref>)] to construct such a linear trace inequality, with A given by
diag[ ω R 𝕀_S,  ℑ(1/χ^*) 𝕀_D ] + [ ℑ(𝔾_0,SS)  ℑ(𝔾_0,SD) ;
ℑ(𝔾_0,DS)  ℑ(𝔾_0,DD) ].
The first matrix is a diagonal operator with positive entries and is positive definite. The second operator is positive semidefinite since it is a sub-operator of the positive semidefinite operator
[ ℑ(𝔾_0,AA)  ℑ(𝔾_0,AA) ;
ℑ(𝔾_0,AA)  ℑ(𝔾_0,AA) ]
where the domain A = S ∪ D. To see that (<ref>) is positive semidefinite, we start from the fact that ℑ(𝔾_0,AA) is positive semidefinite due to passivity <cit.>, and therefore it has a Hermitian positive semidefinite matrix square root √(ℑ(𝔾_0,AA)). It suffices now to note that
[x; y]^† [ ℑ(𝔾_0,AA)  ℑ(𝔾_0,AA) ; ℑ(𝔾_0,AA)  ℑ(𝔾_0,AA) ] [x; y]
= ‖ √(ℑ(𝔾_0,AA)) (x + y) ‖_2^2 ≥ 0.
It is also worth pointing out that if the constraint matrix A in (<ref>) is indefinite, then the optimum diverges. To see this, consider any vector h outside the kernel of ℍ, i.e., ℍh ≠ 0. If h^†Ah < 0, then we are done: 𝕁 = βhh^† will lead to a divergence of the objective as β→∞ while Tr[A𝕁] = βh^†Ah → −∞ < P_A. If h^†Ah ≥ 0, simply pick a vector v such that v^†Av < 0; this is always possible given indefinite A. Then by constructing 𝕁 = β_1(hh^† + β_2 vv^†) and choosing β_2 > 0 such that h^†Ah + β_2 v^†Av < 0, the same argument as before works: as β_1→∞ the objective diverges thanks to the hh^† contribution while 𝕁 ≽ 0 remains within the constraint halfspace.
§ CONVEX RELAXATION SOLUTION VIA WATERFILLING DUALITY
This appendix describes the details of solving the direct convex relaxation (<ref>) using an approach we dub the “waterfilling dual”, which is a generalization of the solution technique used in <cit.>. Specifically, we seek the solution of a convex optimization problem over the n × n PSD matrix 𝕁:
C = max_𝕁  f(𝕁) ≡ log det(𝕀 + ℍ𝕁ℍ^†)
s.t.  Tr[𝔸𝕁] ≤ P
      Tr[𝔹_l 𝕁] = 0,  l ∈ [1⋯m]
      𝕁 ≽ 0
where 𝔸 ≻ 0 and we assume that the solution is non-zero, i.e., the 𝔹_l and PSD constraints do not restrict 𝕁 = 0, and the PD constraint is needed for C to be finite. While (<ref>) is not explicitly of this form, Appendix <ref> shows how one can straightforwardly transform (<ref>) into this form by taking linear combinations of constraints to obtain a PD constraint. The waterfilling dual approach to (<ref>) starts from the fact that a simplified version of (<ref>) without any Tr[𝔹_l𝕁] = 0 constraints admits an analytical solution, i.e., waterfilling. Unfortunately, waterfilling does not directly generalize to multiple linear constraints, so we define the waterfilling dual function
𝒟(b) = max_𝕁  log det(𝕀 + ℍ𝕁ℍ^†)
s.t.  Tr[(𝔸 + ∑_{l=1}^m b_l 𝔹_l) 𝕁] ≤ P,
      𝕁 ≽ 0.
The trace constraint in (<ref>) is a linear superposition of the trace constraints in (<ref>), so 𝒟(b) ≥ C for any combination of multipliers b; furthermore, the value of 𝒟(b) can be evaluated using waterfilling. 𝒟 can be understood as a partial dual function of (<ref>) where we have not dualized the 𝕁 ≽ 0 constraint; 𝒟(b) ≥ C is thus a statement of weak duality. We now show explicitly that, as expected for a convex problem, strong duality holds: the solution to the waterfilling dual problem
min_ b 𝒟( b)
is exactly equal to C. To see this, consider the objective gradient at the primal optimal point f(). First order local optimality conditions imply that
f() = (∑_l=1^m b̅_l _l) + a + ∑ s_k S_k
where a, s_k ≥ 0, and S_k are outward normal matrices of the PSD cone 𝒮^n_+ locally at : such a local description of 𝒮^n_+ is possible due to the fact that any non-zero matrix on the boundary of 𝒮^n is an interior point of a face of 𝒮^n_+ <cit.>. Since by assumption the primal problem has a non-zero solution, the PD constraint must be active and therefore a > 0. Now, consider the dual value 𝒟(b̅/a) as the optimum of (<ref>): it is clear that is a feasible point of (<ref>), and furthermore by construction f() satisfies the first order local optimality condition
f() = a ( + ∑_l=1^m b̅_l/a_l ) + ∑ s_k S_k.
Conclude that 𝒟(b̅/a) = f() = C and therefore strong duality holds. Note also that for 𝒟( b) to remain finite, + ∑ b_l _l must be positive semidefinite per Appendix <ref>, so we can always do waterfilling to evaluate finite 𝒟 values. Finally, given the analytic form of the waterfilling solution of (<ref>) one can also derive dual gradients ∂𝒟 / ∂ b using matrix perturbation theory, and thus the dual problem (<ref>) can be solved using gradient-based optimization.
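For illustration, the following is a minimal numerical sketch of the waterfilling dual in the simplest setting where all operators are simultaneously diagonal, so that evaluating 𝒟(b) reduces to a weighted waterfilling problem; the variable names (per-mode gains g, diagonal costs, budget P) and the derivative-free outer minimization are our own choices for this sketch, not the implementation used here (which uses matrix-perturbation gradients).

```python
# Hypothetical sketch of the waterfilling-dual idea in the fully diagonal case;
# names (g, B_diag, D_diags, P) are illustrative, not the paper's notation.
import numpy as np
from scipy.optimize import brentq, minimize

def waterfill(g, a, P):
    """max sum log(1 + g_i p_i)  s.t.  sum a_i p_i <= P, p >= 0  (g, a > 0)."""
    # KKT: p_i = max(0, 1/(nu*a_i) - 1/g_i); choose nu so the budget is tight.
    def spent(nu):
        return np.sum(a * np.maximum(0.0, 1.0 / (nu * a) - 1.0 / g)) - P
    nu = brentq(spent, 1e-12, 1e12)                 # water level
    p = np.maximum(0.0, 1.0 / (nu * a) - 1.0 / g)
    return np.sum(np.log1p(g * p))

def dual_value(b, g, B_diag, D_diags, P):
    """D(b): waterfilling with the combined diagonal cost B + sum_l b_l D_l."""
    a = B_diag + D_diags.T @ b
    if np.any(a <= 0):                              # combined cost must stay PSD
        return np.inf
    return waterfill(g, a, P)

rng = np.random.default_rng(0)
n, m, P = 8, 2, 4.0
g = rng.uniform(0.5, 3.0, n)                        # per-mode gains
B_diag = rng.uniform(0.5, 2.0, n)                   # power-cost diagonal
D_diags = rng.normal(0.0, 0.3, (m, n))              # zero-trace-constraint diagonals

# Outer minimization over the multipliers b (a derivative-free stand-in for
# the gradient-based optimization described in the text).
res = minimize(dual_value, np.zeros(m), args=(g, B_diag, D_diags, P),
               method="Nelder-Mead")
print("dual optimum D(b*):", res.fun, "at b* =", res.x)
```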
§ GENERAL SOLUTIONS TO SHANNON CAPACITY RELAXATION
In this section, we provide a general solution of (<ref>).
The problem can be written
max_x, y f(x, y) = ∑_i=1^n log_2 (1 + γ x_i y_i)
s.t. 1^T x = 1^T y = 1
x, y ≥ 0
where the subscript c and the prime in n_c and γ' have been dropped, respectively. Let x^*, y^* be optimal solutions to this problem, which exist because the objective is continuous over the compact set defined by the constraints.
First, we consider asymptotic solutions in γ.
γ≪ 1:
∑_i log_2 (1+γ x_i y_i) ≈γ/log 2 ∑_i x_i y_i. Given the constraints 1^T x = 1 and 1^T y = 1 (easy to see the optimum must have equality), and that log(1+x) ≤ x, we get the bound
C ≤γ / log 2
using the method of Lagrange multipliers, first on the maximization with respect to x_i and then y_i. This is a bound for all γ but is most accurate when γ is small.
γ≫ 1:
In this case, we find that
∑_i log_2 (1+γ x_i y_i) ≈ ∑_i (log_2 x_i + log_2 y_i + log_2 γ).
This maximization problem can also be analytically solved given the same constraints by the method of Lagrange multipliers to find
C → n_c log_2(1/n_c) + n_c log_2(1/n_c) + n_c log_2 γ = n_c log_2(γ/n_c^2).
This corresponds to the equal allocation x_i = 1/n_c and y_i = 1/n_c for all channels.
This value approaches a bound on C as γ→∞.
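A quick numerical illustration of the two regimes (n and γ below are arbitrary demo values): concentrating all mass on a single channel approaches the small-γ bound, while the equal allocation approaches the large-γ limit.

```python
# Quick numerical illustration of the two asymptotic regimes above
# (n and gamma are arbitrary demo values).
import numpy as np

def obj(x, y, gamma):
    return np.sum(np.log2(1.0 + gamma * x * y))

n = 8
single = np.zeros(n); single[0] = 1.0            # all mass on one channel
equal = np.full(n, 1.0 / n)                      # x_i = y_i = 1/n

gamma = 1e-3                                     # small-gamma regime
print(obj(single, single, gamma), "vs bound", gamma / np.log(2))

gamma = 1e5                                      # large-gamma regime
print(obj(equal, equal, gamma), "vs limit", n * np.log2(gamma / n**2))
```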
Now, we show how this problem can be solved exactly, analytically.
In the following analysis, we take log_2 →log for mathematical convenience, understanding that solutions are scaled by a constant factor.
Let S_x^* = { i ∈ [n]: x_i^* > 0} and S_y^* = {i∈ [n] : y_i^* > 0} be the support sets of x^* and y^* respectively. Then S_x^* = S_y^*≡ S.
Pick two indices i≠ j. If x_i = 0, y_i = c, then taking y_i → 0 and y_j → y_j + c strictly increases the objective.
Intuition: allocating mass to some y_i if x_i=0 contributes 0 to the objective, whereas allocating that same mass to another y_j contributes positively. Thus the support sets of x^* and y^* are equal.
For any pair of indices i, j ∈ S, x_i^* y_i^* x_j^* y_j^* = 1/γ^2 (Condition 1) or x_i^* = x_j^* (Condition 2).
Fix y^* and consider the following optimization problem
max_x f^γ,y^*( x) = ∑_i =1^n log( 1 + γ x_i y_i^* )
s.t. 1^T x = 1,
x ≥ 0.
We know that x^* is a KKT point in the above optimization problem.
The KKT conditions are, for some λ > 0 and ν_1, …, ν_n ≥ 0,
λ 1 - ∑_i=1^n ν_i 1_{x_i^* = 0} = ∇ f^γ,y^*(x^*),
where 1_{x_i^* = 0} is an indicator function for x_i^* = 0.
Now x_i^* = 0 if and only if y_i^* = 0 by Lemma <ref>.
If y_i^* = 0 then it is easy to see that ∇ f_i^γ,y^*(x^*) = 0, and therefore ν_i = λ. Otherwise, 1_x_i^* = 0 is 0 everywhere.
Therefore, for each pair of i,j∈ S,
γ y_i^* / (1 + γ x^*_i y_i^*) = γ y_j^* / (1 + γ x^*_j y_j^*).
Solving for x^*_j - x^*_i, we obtain
x^*_j - x^*_i = (y_j^* - y_i^*) / (γ y_j^* y_i^*).
Making the same argument fixing x^*, we find
y_j^* - y_i^* = (x_j^* - x_i^*) / (γ x_j^* x_i^*).
Combining these, we find that
x_j^* - x_i^* = (x_j^* - x_i^*) / (γ^2 x_i^* y_i^* x_j^* y_j^*),
from which it follows that either x_i^* y_i^* x_j^* y_j^* = 1/γ^2 or x_i^* = x_j^*.
For every i, x_i^* = y_i^*.
Enumerate the unique nonzero values of x_i^* for i = 1, …, n by X_1, …, X_K. Suppose K ≥ 2 (if K=1 the result is clear). For k ∈ {1, …, K}, let S_k = {i ∈ {1, …, n} : x_i^* = X_k}.
By (<ref>) if x_i^* = x_j^* then also y_i^* = y_j^*. Denote the unique value of y_i^* in the set S_k by Y_k.
Now choose i ∈ S_k for some k ≥ 2 and j ∈ S_1. From (<ref>) combined with the fact that x_i^* y_i^* x_j^* y_j^* = 1/γ^2, we observe
γ y_i^* = γ y_j^* (1 + 1/(γ x_j^* y_j^*)) / (1 + γ x_j^* y_j^*)
and symmetrically in x,
γ x_i^* = γ x_j^* (1 + 1/(γ x_j^* y_j^*)) / (1 + γ x_j^* y_j^*).
Now divide the above two equations to see that for every k,
X_k/Y_k = X_1/Y_1.
Since 1=∑_i=1^n x_i^* = ∑_i=1^n y_i^*, this implies that x_i^* = y_i^* for each i such that x_i^* > 0.
Now we know that for each i ≠ j, X_i^2 X_j^2 = 1/γ^2. This is impossible if K ≥ 3 (xy = c, yz = c, xz = c together imply y = z), so we know K=1 or K=2.
First, we consider K=1.
Let g(x) = x log (1 + 1/x^2). There is a unique local maximizer of g over the real numbers, which is also the global maximizer, and is between 0 and 1. Let this real number be r.
g'(x) →∞ as x → 0^+ and g'(1) = log(2) - 1 ≤ 0. For x ≥ 1, g'(x) = log(1 + 1/x^2) - 2/(x^2 + 1) ≤ 1/x^2 - 2/(x^2 + 1) = (1 - x^2) / (x^2(x^2 + 1)) ≤ 0. Also,
g''(x) = 2x/(x^2 + 1) - 2/x + 4x/(x^2 + 1)^2 = 2x(x^2 - 1)/(x^2(x^2 + 1)^2)
so g”(x) ≤ 0 for x ∈ [0,1] and therefore g is concave. So there can be only one local maximizer in this interval.
Let h(k) = k log (1 + γ/(k^2)). The unique maximizer of h over the integers is in the interval [⌊ r √(γ)⌋, ⌈ r √(γ)⌉ ].
h(k) = √(γ) g(k/√(γ)), which is maximized over ℝ at k = r√(γ). Since the function is concave on the relevant interval, the solution is one of ⌊ r √(γ)⌋, ⌈ r √(γ)⌉.
Since h(k) is concave on [0,n], if r√(γ) ≥ n then the optimal k is n. Therefore the K=1 solution to (<ref>) is
n_opt log(1 + γ/n_opt^2),
n_opt =
⌊ r√(γ) ⌋ or ⌈ r√(γ) ⌉   if r√(γ) ≤ n
n   if r√(γ) ≥ n
where n_opt ≥ 1, r = argmax_x∈ [0, 1] [ x log(1 + 1/x^2) ] ≈ 0.505, and the floor and ceiling functions are a consequence of the number of populated channels being an integer allocation.
Now we consider K=2 solutions.
Suppose that |S_1| = n_1 and |S_2| = n_2 such that n_1 + n_2 ≤ n.
Observe that X_1 = λ/n_1 and X_2 = (1-λ)/n_2 for some λ∈ [0,1].
Consider the function,
f(λ) = n_1 log( 1 + λ^2 γ/n_1^2) + n_2 log( 1 + γ(1 - λ)^2/n_2^2).
Any maximizer in the K=2 case must be of this form.
We can explicitly find the maximizing λ for this function in terms of n_1 and n_2.
The values of λ which can be a maximum of f in [0,1] are λ = 0, λ = 1,
λ_0 = n_1/(n_2 + n_1), or λ_± = (1 ± √(1 - 4 n_1 n_2/γ))/2.
These are the values of λ which have zero derivative, and those at the extremes of λ∈ [0, 1].
These are the possible candidates for being a maximum of our function when K=2. The first and second cases correspond to K=1. In the third case, X_1 = 1/(n_1 + n_2) = X_2, so in fact K = 1.
In the fourth case, we can assume without loss of generality that n_1 ≥ n_2, meaning λ_+ is the optimal K=2 point (as it must be between local minima of λ_0 and λ_-).
In practice, we numerically compare the K=1 with all K=2 solutions to get the global optima of this problem.
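That numerical comparison can be sketched as follows, using the K=1 closed form above and enumerating the λ_+ candidates over (n_1, n_2) (the other candidates reduce to K=1, as noted); function names and the demo values of n and γ are ours.

```python
# Illustrative comparison of the K=1 and K=2 candidate solutions described
# above (natural log throughout, as in the text); names/values are ours.
import numpy as np
from scipy.optimize import minimize_scalar

def best_k1(n, gamma):
    """Equal allocation over k <= n channels, with k near r*sqrt(gamma)."""
    r = minimize_scalar(lambda x: -x * np.log1p(1.0 / x**2),
                        bounds=(1e-6, 1.0), method="bounded").x   # ~0.505
    cands = {1, n, int(np.floor(r * np.sqrt(gamma))), int(np.ceil(r * np.sqrt(gamma)))}
    cands = [k for k in cands if 1 <= k <= n]
    vals = [k * np.log1p(gamma / k**2) for k in cands]
    return max(vals), cands[int(np.argmax(vals))]

def best_k2(n, gamma):
    """Enumerate the lambda_+ candidates for all n1 >= n2 with n1 + n2 <= n."""
    best = -np.inf, None
    for n1 in range(1, n):
        for n2 in range(1, min(n1, n - n1) + 1):
            disc = 1.0 - 4.0 * n1 * n2 / gamma
            if disc < 0:
                continue
            lam = 0.5 * (1.0 + np.sqrt(disc))
            val = (n1 * np.log1p(gamma * lam**2 / n1**2)
                   + n2 * np.log1p(gamma * (1.0 - lam)**2 / n2**2))
            if val > best[0]:
                best = val, (n1, n2, lam)
    return best

n, gamma = 16, 50.0
v1, k = best_k1(n, gamma)
v2, cfg = best_k2(n, gamma)
print(f"K=1: value {v1:.4f} with {k} channels; K=2: value {v2:.4f} ({cfg})")
```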
In theory, we believe it is possible to prove that for all K=2 solutions, there exists a K=1 solution with a higher objective value.
We now prove this for most values of N.
Towards a contradiction, we assume that λ_+ ≡λ, n_1, n_2 in (<ref>) represent a global maximum of the function (<ref>).
We will now show that λ_+ representing a global maximum leads to a contradiction. That is, we can either change λ, n_1, or n_2 to a different value to get a better objective value.
Case 1: 1 ≥max{ (1/√(γ)) n_1 /λ, (1/√(γ)) n_2 / (1 - λ) }
By the concavity of h (on the interval [0, 1]), we find that for λ such that 1 ≥ n_1/(√(γ)λ) and 1 ≥ n_2/(√(γ)(1 - λ)),
f(λ) = λ h(n_1/λ) + (1 - λ) h(n_2/(1 - λ))
≤ h(n_1 + n_2)
meaning we can get a better optimal value by summing n_1 and n_2 and considering a point in the K=1 case, giving a contradiction.
Case 2: 1 ≥ 1/√(γ) n_1 /λ, but 1 ≤ (1/√(γ)) n_2 / (1 - λ), and n_2 ≥ 2. First, we prove the following lemma:
For x ≥ 1, g(x/2) ≥ g(3x / 4) ≥ g(x) and g is strictly decreasing on [3x / 4, x].
We already showed that for x ≥ r g is decreasing, and r ≤ 3/4 so the second inequality is trivial.
For the first inequality, for x ≥ 2 r again it is clearly trivial. We just need to show that it is true for x ∈ [1, 2 r].
Observe that g(1/2) = (1/2) log(5) ≥ 0.8 and for x ≤ r, g is increasing. Therefore for any x ∈ [1/2, r], g(x) ≥ 0.8. Also for any x ≥ 3/4, again using the decreasing property of g, g(x) ≤ g(3/4) ≤ 0.77.
Now, if n_2/(√(γ)(1 - λ)) ≥ 1 then (n_2 - 1)/(√(γ)(1 - λ)) ≥ 1/2 so, using Lemma <ref>, λ h(n_1 / λ) + (1 - λ) h (n_2 / (1 - λ)) ≤ λ h(n_1 / λ) + (1 - λ) h ((n_2 - 1) / (1 - λ)).
Therefore, n_2 → n_2 - 1 increases the objective value, violating the assumption of optimality.
Case 3: 1 ≥ n_1 / (√(γ)λ), but 1 ≤ (n_2 / √(γ)) / (1 - λ), and n_2 = 1.
Note that in this case we must have n_1/√(γ) = ⌊ r √(γ)⌋ / √(γ) because otherwise n_1 → n_1 + 1 would yield an objective improvement.
First we must observe that n_2/(√(γ)(1 - λ)) ≥ 2. This can be done by numerically solving the following bounded three-dimensional program and observing that the optimal objective value is less than 0,
maximize_x ≤ 1, y ∈ [1, 2], λ ≥ 0.5   λ g(x) + (1 - λ) g(y) - g(λ x + (1 - λ) y).
Now using this fact, plus the fact that g is concave on [0, 1],
λ g(n_1/(√(γ)λ)) + (1 - λ) g(1/(√(γ)(1 - λ)))
≤ λ g(n_1/(√(γ)λ)) + (1 - λ) g(2)
= λ[ g(n_1/√(γ)) + g(n_1/(√(γ)λ)) - g(n_1/√(γ)) ] + (1 - λ) g(2)
≤ λ[ g(n_1/√(γ)) + g'(n_1/√(γ)) (n_1/(√(γ)λ) - n_1/√(γ)) ] + (1 - λ) g(2)
= λ g(n_1/√(γ)) + (1 - λ) (n_1/√(γ)) g'(n_1/√(γ)) + (1 - λ) g(2).
So an optimal point of the K=2 type is only possible if
max_m g(m/√(γ))
≤ λ g(n_1/√(γ)) + (1 - λ) (n_1/√(γ)) g'(n_1/√(γ)) + (1 - λ) g(2)
or,
g'(n_1/√(γ)) ≥ ( (max_m g(m/√(γ)) - λ g(n_1/√(γ)))/(1 - λ) - g(2) ) √(γ)/n_1
≥ (g(n_1/√(γ)) - g(2)) √(γ)/n_1.
We can now look at x for which,
g'(x) ≥ (g(x) - g(2)) 1/x.
These are x ≤ 0.235 and x ≥ 4.245 (solved via a numerical solver). This means that to get an optimal point of this form we need n_1/√(γ) = ⌊ r √(γ)⌋/√(γ) ≤ 0.235 which, since r > 0.5, is impossible.
In conclusion, if both n_1/(√(γ)λ) and n_2/(√(γ)(1 - λ)) are ≤ 1, by case 1 there exists a value with K=1 with a better objective value. If either n_1/(√(γ)λ) or n_2/(√(γ)(1 - λ)) is greater than 1 (WLOG the second case), with n_2 ≥ 2, by case 2 this allocation is not optimal.
Finally, if both n_1/(√(γ)λ) and n_2/(√(γ)(1 - λ)) are greater than 1, then numerically optimal K=2 points are not possible.
K=2 points are nevertheless numerically checked to not be optimal in the examples presented in the main text.
§ ENERGY-CONSERVING QCQP CONSTRAINTS
It is possible to derive the energy conservation constraints presented in the main text via operator constraints (which, like Maxwell's equations are dependent on the spatial profile of permittivity) relaxed to structure-agnostic scalar constraints <cit.>.
Although these constraints encode less information than Maxwell's equations in full generality, they allow the relaxed problem to be formulated as a QCQP from which techniques in the main text (i.e., Lagrange duality) can be employed to calculate structure-agnostic bounds on photonic performance.
In this section, we show how these constraints can be directly derived from energy conservation as expressed by Poynting's theorem for time-harmonic complex fields <cit.>:
∫_∂ V dσ· (E × H^*) = iω∫_V (H^*·μ·H - E·ϵ^*·E^*) dV
- ∫_V E·J^* dV.
For simplicity, we will assume a non-magnetic material μ=μ_0=1 and scalar isotropic permittivity and susceptibility ϵ = 1 + χ, though the derivation is valid for anisotropic ϵ as well <cit.>.
Consider a scattering theory picture where a free current source J_v generates the fields E_v, H_v in vacuum and E_t, H_t in the presence of a structure with material distribution ϵ(r) = 1 + χ_s(r) where _s(r) is an indicator function. There is an induced polarization current J_s in the material which produces scattered fields E_s and H_s that combine with the vacuum fields to give the total field: E_t = E_v+E_s, H_t = H_v + H_s. The complex Poynting theorem thus (<ref>) applies to three sets of currents, fields, and environments: (J_v, E_v, H_v) in vacuum, (J_s, E_s, H_s) in vacuum, and (J_v, E_t, H_t) over the structure, giving
∫_∂ V_k dσ·(E_v × H^*_v) = iω∫_V_k H_v^* · H_v dV
- iω∫_V_k E_v · E_v^* dV - ∫_V_k E_v · J_v^* dV .
∫_∂ V_k dσ·(E_s × H^*_s) = iω∫_V_k H_s^* · H_s dV
- iω∫_V_k E_s · E_s^* dV - ∫_V_k E_s · J_s^* dV .
∫_∂ V_k dσ·(E_t × H^*_t)
= iω∫_V_k H_t^* · H_t dV
- iω∫_V_k (1 + χ^*_s) E_t · E_t^* dV - ∫_V_k E_t · J_v^* dV .
Subtracting (<ref>) and (<ref>) from (<ref>) gives
{ iω∫_V_k H_v^* · H_s dV - ∫_∂ V_k dσ·(E_s × H_v^*)
- iω∫_V_k E_s · E_v^* dV }
+ { iω∫_V_k H_s^* · H_v dV - ∫_∂ V_k dσ·(E_v × H_s^*)
- iω∫_V_k E_v · E_s^* dV }
= ∫_V_k E_s · J_v^* dV - ∫_V_k E_s · J_s^* dV + iω∫_V χ^* E_t · E_t^* dV .
Now, using vector calculus identities along with the Maxwell wave equations ∇×∇× E_v - ω^2 E_v = iω J_v and ∇×∇× E_s - ω^2 E_s = iω J_s, the two curly brackets in (<ref>) can be shown to be equal to ∫_V_k E_s · J_v^* dV and ∫_V_k E_v · J_s^* dV respectively. Finally, the induced current J_s can be swapped out by the polarization p via J_s = -iωp, and the scattered field E_s = Gp, to give
∫_V_k E_v^* · p dV = ∫_V χ^-1* p^* · p dV - ∫_V_k p^* · (G^† p) dV.
This can be written in a compact operator notation
E_v^† 𝕀_V_k p = p^† (χ^-† - G^†) 𝕀_V_k p,
giving a form of the energy conservation constraints in the main text, where E_v → S.
From this derivation it is clear that the constraint (<ref>) encodes conservation of power during the electromagnetic scattering process for every region V_k.
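As a sanity check, the identity can be reproduced numerically for a discretized toy problem: if p satisfies (χ^-1 - G)p = E_v, then the constraint holds for any projection onto a subregion. The sketch below uses a random complex-symmetric matrix as a stand-in Green's function and, for simplicity, lets the material fill the whole domain; none of the numerical values come from this work.

```python
# Toy numerical check of the per-region energy-conservation identity:
# if (chi^{-1} - G) p = E_v, then E_v^dag I_k p = p^dag (chi^{-dag} - G^dag) I_k p
# for any diagonal projection I_k. G, chi, E_v below are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
N = 60                                             # number of voxels (arbitrary)

A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
G = 0.5 * (A + A.T)                                # Green's functions are complex symmetric
chi = 3.0 + 0.01j                                  # scalar susceptibility (lossy material)

E_v = rng.normal(size=N) + 1j * rng.normal(size=N) # incident (vacuum) field
p = np.linalg.solve(np.eye(N) / chi - G, E_v)      # induced polarization

for _ in range(3):                                 # a few random regions V_k
    mask = rng.random(N) < 0.3
    I_k = np.diag(mask.astype(float))
    lhs = np.vdot(E_v, I_k @ p)                    # E_v^dag I_k p
    rhs = np.vdot(p, (np.conj(1.0 / chi) * I_k - G.conj().T @ I_k) @ p)
    print(abs(lhs - rhs))                          # numerically zero (round-off level)
```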
In the case of many sources (as in this paper), each source defines a different scattering problem, with individual source-polarization pairs (S_j, p_j) satisfying constraints of the form (<ref>). There are also additional “cross-constraints” that capture the fact that the same structured media generates the p_j induced in each case:
S_j^† p_k - p_j^†(χ^-† - G^†) p_k = 0, ∀ j,k.
For a more detailed discussion of these constraints, we refer the reader to <cit.>.
For computational simplicity, only j=k constraints are enforced in the bounds calculated in this paper, although constraints are enforced for all computational voxels V_k; future incorporation of these constraints is expected to tighten bounds.
|
http://arxiv.org/abs/2409.02502v1 | 20240904075940 | Dispelling Four Challenges in Inertial Motion Tracking with One Recurrent Inertial Graph-based Estimator (RING) | ["Simon Bachhuber", "Ive Weygers", "Thomas Seel"] | cs.RO | ["cs.RO"] |
S. Bachhuber^1, I. Weygers^1, T. Seel^2
^1Department Artificial Intelligence in Biomedical Engineering, FAU Erlangen-Nürnberg, 91052 Erlangen, Germany (e-mail: [email protected])
^2Institute of Mechatronic Systems, Leibniz Universität Hannover, 30167 Hannover, Germany (e-mail: [email protected])
§ ABSTRACT
In this paper, we extend the Recurrent Inertial Graph-based Estimator (RING), a novel neural-network-based solution for Inertial Motion Tracking (IMT), to generalize across a large range of sampling rates, and we demonstrate that it can overcome four real-world challenges: inhomogeneous magnetic fields, sensor-to-segment misalignment, sparse sensor setups, and nonrigid sensor attachment. RING can estimate the rotational state of a three-segment kinematic chain with double hinge joints from inertial data, and achieves an experimental mean-absolute-(tracking)-error of 8.10± 1.19 degrees if all four challenges are present simultaneously.
The network is trained on simulated data yet evaluated on experimental data, highlighting its remarkable ability to zero-shot generalize from simulation to experiment.
We conduct an ablation study to analyze the impact of each of the four challenges on RING's performance, we showcase its robustness to varying sampling rates, and we demonstrate that RING is capable of real-time operation.
This research not only advances IMT technology by making it more accessible and versatile but also enhances its potential for new application domains including non-expert use of sparse IMT with nonrigid sensor attachments in unconstrained environments.
Recurrent Neural Networks, Inertial Measurement Units, Orientation Estimation, Sparse Sensing, Magnetometer-free, Sensor-to-Segment Alignment
§ INTRODUCTION
Numerous recent developments in biomedical engineering applications require precise estimation of the motion of articulated bodies in space. Some prominent examples include unobtrusive human motion tracking outside the lab <cit.>, and realizing intelligent symbiosis between humans and robots that enter immersive environments <cit.>. Inertial Measurement Units (IMUs) are used in all these systems because of their unique ability to track movements of articulating rigid bodies of Kinematic Chains (KCs), in a cheaper and more reliable way than State-Of-The-Art (SOTA) multi-camera systems that require continuous line of sight.
All IMU-based motion tracking use cases heavily rely on Inertial Motion Tracking (IMT) algorithms that fuse different measurement signals to estimate motion parameters. This, however, is inherently limited by the following four key challenges <cit.>:
(1) Inhomogeneous magnetic fields indoors and near ferromagnetic materials or electric devices;
(2) Sensor-to-segment alignment that involves determining joint positions and axis directions in local sensor coordinates;
(3) Solving sparse problems where some segments of the KC are not equipped with a sensor;
(4) Addressing real-world disturbances due to nonrigid sensor attachment and caused by large acceleration signals from impacts and soft tissue artifacts.
In recent years, many highly specialized methods have been proposed to address these challenges. We will provide a brief overview of the latest and most notable developments accompanied by recent comprehensive methodological overviews. First, a multitude of different kinematic constraints are proposed to replace missing magnetometer heading information, as reviewed in <cit.>. Furthermore, recent general-purpose <cit.> magnetometer-free attitude estimators by <cit.> achieved remarkable accuracy improvements in comparison with, e.g., the widely used filters from <cit.> and <cit.>. Second, several algorithms have been developed to achieve automatic sensor-to-segment alignment, as outlined by <cit.> for specific joints with full sensor setups <cit.>.
Third, a recent trend in sparse sensor setups is visible with methods that either use a limited number of sensors but include magnetometer measurements <cit.> or are magnetometer-free <cit.>. Finally, the literature on IMT methods to overcome real-world disturbances is limited and focuses on late interception by outlier rejection techniques (<cit.>) or further advances in connection constraints <cit.>.
Real-world IMT applications typically present multifaceted challenges, requiring data-driven state observers like the Recurrent Neural Network-based Observer (RNNO) (<cit.>) that can effectively address the increasing complexity. To overcome a redundant implementation task in retraining RNNOs for every combination in a large grid of IMT challenges, we proposed the Recurrent Inertial Graph-based Estimator (RING) (<cit.>) as a pluripotent approach that solves IMT Problems (IMTPs) of tree-structured systems.
Despite RING's ability to provide a solution to a variety of IMT challenges, its real-world applicability for a combination of all the aforementioned IMT challenges has not been investigated, and their individual impacts are unknown. Furthermore, while RING is aimed to be applicable in a plug-and-play fashion, it still requires a specific fixed sampling rate, which vastly limits its applicability in practice. Moreover, the real-time capability in inference has not been explored. In this work, we enhance and validate RING's usability with the following contributions:
* Extending RING's usability by enabling applicability to data from a broad range of sampling rates.
* Solve for the first time the four IMT challenges at once.
* Show zero-shot experimental generalizability in an extensive ablation study to gain insights on the performance of RING on individual IMT challenges.
* Analyze the real-time capability of RING.
§ PROBLEM FORMULATION
Consider a KC with three segments that are connected by hinge joints with arbitrary and unknown joint axes directions.
Only the outer bodies are equipped with nonrigidly attached IMUs.
A KC is a rigid-body system and it consists of multiple rigid objects (segments) that are rigidly attached to coordinate systems (bodies).
In general, the topology of such a rigid-body system can be represented by a Connectivity Graph (CG) <cit.> where nodes represent bodies and edges represent degrees of freedom in the system.
Here, for each segment there is one body with one segment attached to it, such that there is a one-to-one correspondence between segments and bodies.
After the N bodies have been numbered, the CG can be encoded via a parent array λ of length N, where λ[i] is the body number of the parent of body i.
For a three-segment KC, this graph representation and the one parent array utilized in this work is shown in Figure <ref>.
In this work, the goal is to estimate the complete rotational state of the KC up to a global heading offset <cit.>. We approach this through a filtering problem formulation, where an estimate of the complete rotational state x(t) is obtained at every time instant t from the current and all previous IMU measurements y(t' ≤ t) that are combined into one measurement signal y(t) ∈ℝ^12 defined as
y(t) = (ω_1(t)^⊺, ρ_1(t)^⊺, ω_3(t)^⊺, ρ_3(t)^⊺)^⊺ ∀ t
where ω_i(t), ρ_i(t) denote gyroscope and accelerometer measurements at time t of the IMU that is nonrigidly attached to body 𝒮_i ∈{1, 3}.
The rotational state x(t) ∈ℍ^3 of the KC is straightforwardly defined by
x(t) = (q_10(t)^⊺, q_21(t)^⊺, q_32(t)^⊺)^⊺ ∀ t
where q_ij(t) denotes the orientation from body 𝒮_i to body 𝒮_j at time t and where body 0 denotes the earth frame.
Note that q_10 can only be estimated up to a heading offset from 6D measurements.
Real-world applicability requires solving all of the following challenges of the IMTP that is said to
* be magnetometer-free (or 6D in contrast to 9D) if the IMUs measure only three-dimensional angular rates and specific forces, and not provide magnetometer readings.
* require sensor-to-segment alignment when hinge joint axes directions are unknown.
* be sparse if not every segment that constitutes the KC has an IMU attached. Here, the middle segment does not have an IMU attached. An IMTP that has an IMU attached to each body is said to have a full IMU setup.
* suffer from motion artifacts if the IMUs are not rigidly attached to the respective bodies, such that there can occur transnational and rotational motions between the segment and IMU. An IMTP without motion artifacts assumes that there cannot exist any relative motion between segment and IMU.
From this, it follows that the IMTP considered here is magnetometer-free, requires sensor-to-segment alignment, sparse, and suffers from motion artifacts.
§ METHODS
In this work, we extend the Neural Network-based (NN-based) multiple-IMU sensor fusion algorithm from <cit.>.
We address all four key IMT challenges outlined in Section <ref>, while enabling sampling rate robustness and showcase that RING is real-time capable. This is achieved by both adapting the training procedure (Section: <ref>) and the NN-based multiple-IMU sensor fusion algorithm (Section: <ref>).
§.§ Simulated Training Data at Various Sampling Rates
RING <cit.> is trained on large amounts of simulated input-output data at various sampling rates. The procedure that generates the data for the training of RING is called the Random Chain Motion Generator (RCMG) <cit.>.
It generates extensively augmented random motions of KCs with:
* different number of segments,
* randomized segment lengths,
* randomized IMU placement,
* randomized joint axes directions, and
* rigidly or nonrigidly attached IMUs (by simulating spring-damper-systems with randomized damping and stiffness parameters) <cit.>.
From these random KC motions we compute IMU and orientation measurements, but in this work at various sampling rates. These form the input-output pairs for training RING.
RCMG can be summarized as a function that, from pseudo-random number generation alone, returns the training pair
* X ∈ ℝ^T × N × 10, where the first six feature channels of X[:, i] are the 6D IMU data for body i (if it is not dropped out), the next three channels are the joint axis direction of the hinge joint between body i and its parent (if it is not dropped out, and if the parent is not the base), and the last channel is the inverse sampling rate 1/F, and
* Y ∈ ℍ^T × N, where Y[:, i] is the orientation from body i to its parent λ[i], and
where T is the number of timesteps, and N is the number of bodies in the KC (here N=3).
To achieve a wide coverage, training data is generated for sampling rates drawn from
F ∈{40, 60, 80, 100, 120, 140, 160, 180, 200}
and to allow for efficient training data batching, the sequence duration is adjusted based on the sampling rate to achieve a common number of timesteps of T=6000. A training batch is then built up by stacking 512 sequences, and additional details regarding the RCMG can be found in <cit.>.
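The bookkeeping described above can be sketched as follows: each sequence is drawn at one of the listed sampling rates, its duration is chosen so that all sequences share T = 6000 timesteps, and the inverse sampling rate is appended as an extra input feature. The random data below is only a stand-in for the actual RCMG kinematic-chain simulation, which is not reproduced here.

```python
# Schematic sketch of the sampling-rate handling: every sequence has T = 6000
# steps, so its duration depends on the drawn rate, and 1/F is appended as an
# additional input feature. Random data stands in for the RCMG simulation.
import numpy as np

rng = np.random.default_rng(0)
RATES = [40, 60, 80, 100, 120, 140, 160, 180, 200]    # Hz
T, N, BATCH = 6000, 3, 4                              # timesteps, bodies, small demo batch

def draw_sequence():
    F = rng.choice(RATES)
    duration_s = T / F                                # e.g. 150 s at 40 Hz, 30 s at 200 Hz
    imu_and_axes = rng.normal(size=(T, N, 9))         # placeholder for 6D IMU + joint axes
    inv_rate = np.full((T, N, 1), 1.0 / F)            # last feature: inverse sampling rate
    X = np.concatenate([imu_and_axes, inv_rate], axis=-1)
    return X, F, duration_s

batch = [draw_sequence() for _ in range(BATCH)]
X = np.stack([b[0] for b in batch])                   # (BATCH, T, N, 10): stackable because T is shared
for _, F, dur in batch:
    print(f"rate {F:3d} Hz -> duration {dur:6.1f} s, shared T = {T}")
print("batched input shape:", X.shape)
```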
§.§ Neural Network Architecture: RING with Sampling Rate Input
We use a NN trained on data generated using the procedure described in Section <ref>.
The network architecture is based on RING <cit.>, a powerful multiple-IMU sensor fusion algorithm that is composed of a decentralized network of message-passing Recurrent NNs (RNNs). Most notably, RING's parameters are defined on a per-node level and shared across all nodes in the graph.
This design enables RING to be applied to broad range of IMTPs with a single set of parameters and enables its exceptional pluripotency.
In this work, the architecture of RING is extended to additionally accept a sampling rate input, such that the dimensionality of RING's per-timestep and per-node input increases by one.
To summarize, RING can be viewed as the following step function that maps the previous state of RING ξ_t-1 ∈ ℝ^N × 2H and network input X_t ∈ ℝ^N × 10 at time t to the next RING state ξ_t and output Ŷ_t ∈ℍ^N, i.e.,
ξ_t, Ŷ_t = ring(ξ_t-1, X_t, λ) ∀ t
where H is the hidden state dimensionality, ξ_0 = 0, and where the vector λ is the parent array, defined in Section <ref>.
Internally, RING has the parameters of
* the Message-MLP-network f_θ: ℝ^H →ℝ^M, and
* the Stacked-GRUCell-network g_θ: ℝ^2H×ℝ^2M+10→ℝ^2H which consists of the sequence of Gated-Recurrent-Unit(GRU)Cell, LayerNorm, GRUCell <cit.>, and
* the Quaternion-MLP that combines a Layernorm, and a MLP-network h_θ: ℝ^H →ℝ^4, and
where M is the dimensionality of the messages that are passed along the edges of the graph.
Note that the two GRUCells each have a hidden state dimensionality of H, thus the hidden state of RING is of dimensionality of 2H.
Then, equation (<ref>) consists of several consecutive steps, for all N bodies:
* Messages M_t NM are computed using f_θ.
* Messages are passed along the edges of the graph.
* The hidden state is updated using g_θ.
* The unnormalized output Ỹ ∈ ℝ^N × 4 is computed using the Quaternion-MLP.
* The output is normalized to allow interpretation as unit quaternions. The final RING output is one unit quaternion per node Ŷ_t ∈ℍ^N.
RING is trained by comparing Ŷ_t to the ground truth Y_t which is provided by the RCMG, and by minimizing the mean-squared orientation error.
Additional details regarding the RING architecture and its optimization strategy, which are exactly the same as in <cit.>, can be found in that work.
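For illustration, one such per-node step (messages via f_θ, the stacked GRU cells g_θ, and the quaternion MLP, with parameters shared across nodes) can be sketched as follows. This is our own simplified PyTorch rendition of the description above, not the released implementation; the hidden sizes, the use of the second GRU state to form messages, and summing the messages arriving from children are assumptions made for illustration.

```python
# Simplified sketch of one RING step (our reading of the description above).
import torch
import torch.nn as nn

class RingStep(nn.Module):
    def __init__(self, H=200, M=100, F_in=10):
        super().__init__()
        self.H, self.M = H, M
        self.f_msg = nn.Sequential(nn.Linear(H, M), nn.ReLU(), nn.Linear(M, M))
        self.gru1 = nn.GRUCell(2 * M + F_in, H)
        self.norm = nn.LayerNorm(H)
        self.gru2 = nn.GRUCell(H, H)
        self.quat_mlp = nn.Sequential(nn.LayerNorm(H), nn.Linear(H, H), nn.ReLU(), nn.Linear(H, 4))

    def forward(self, xi, X_t, lam):
        # xi: (N, 2H) previous state, X_t: (N, F_in) node inputs, lam: parent array (-1 = base)
        h1, h2 = xi[:, :self.H], xi[:, self.H:]
        msgs = self.f_msg(h2)                               # (N, M), one message per node
        up = torch.zeros_like(msgs)                         # message received from the parent
        down = torch.zeros_like(msgs)                       # sum of messages from the children
        for i, p in enumerate(lam):
            if p >= 0:
                up[i] = msgs[p]
                down[p] = down[p] + msgs[i]
        h1 = self.gru1(torch.cat([up, down, X_t], dim=-1), h1)
        h2 = self.gru2(self.norm(h1), h2)
        quat = self.quat_mlp(h2)
        quat = quat / quat.norm(dim=-1, keepdim=True)       # unit quaternion per node
        return torch.cat([h1, h2], dim=-1), quat

ring = RingStep()
lam = [-1, 0, 1]                                            # three-segment chain
xi = torch.zeros(len(lam), 2 * 200)
for t in range(5):                                          # a few dummy timesteps
    xi, y_hat = ring(xi, torch.randn(len(lam), 10), lam)
print(y_hat.shape)                                          # torch.Size([3, 4])
```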
A software implementation of the RCMG and RING, and the source code for creating the results presented in Section <ref> are hosted on GitHub[<https://github.com/SimiPixel/ring>].
§ RESULTS AND DISCUSSION
In this section, we evaluate the accuracy of RING's orientation estimation, trained in simulation only, when applied to experimental data (see Section <ref>) of the problem specified in Section <ref>.
The performance of RING is compared to (SOTA) methods (see Section <ref>).
It is remarkable, that RING can solve an experimental IMTP that combines all four challenges in IMT (magnetometer-free, unknown sensor-to-segment alignment, sparse sensor setup, and nonrigid sensor attachment) simultaneously, despite being trained on simulated data only.
§.§ Experimental Setup and Data Acquisition
We utilize a five-segment KC to record the experimental data, but only the data from the parts of the KC relevant to the given IMTP are used for evaluation. The five-segment KC includes a single spherical joint followed by three hinge joints, each oriented along the x, y, and z axes, respectively. Each segment of the KC was equipped with two IMUs: one rigidly attached to the segment and another nonrigidly attached using foam padding, as depicted in Figure <ref>.
Two distinct trials were conducted, involving random movements of the five-segment KC. Here, two three-segment KCs are thus extracted, one with joint axes direction x and y, and one with y and z.
During evaluation, the first trial spans a duration of 66 and features a diverse range of motions.
Additionally, the second trial, with a length of 68, includes periods where the entire KC remained stationary.
Overall, this results in four trials in total.
We refer to <cit.> for additional details regarding the experimental setup and preprocessing.
§.§ Evaluation Metrics and Baselines
The ground truth orientations for the experimental trials (see Section <ref>) were recorded using optical motion capture.
Orientation estimation accuracy is quantified using the Mean-Absolute-(tracking)-Error (MAE) in degrees.
Here, the mean calculation reduces the dimensions of the different trials, time, and three orientations (including inclination and two relative orientations).
In the time dimension, initial 5 of each trial were deliberately excluded from the MAE calculations. This decision was made to ensure that the recorded errors accurately reflected the system's performance post-convergence.
To the best of the authors' knowledge, there exists no alternative method that can be applied to the IMTP as described in Section <ref>.
However, two SOTA baseline methods can be identified after simplifying the IMTP so that it does not contain all four challenges simultaneously.
The first baseline is obtained by using conventional IMT methods, that is, using a full 9D IMU setup and tracking each segment independently.
The SOTA method for such single-IMU sensor fusion is VQF <cit.>.
The second baseline is obtained after eliminating the challenge of anatomical calibration.
Under the assumption of known joint axes direction, RNNO can be applied <cit.>.
Note that since two KCs with different directions of the joint axes are used for experimental validation (see Section <ref>), two trained RNNO networks are required.
To compensate for the violation of the rigid-IMU-attachment assumption, both baselines additionally utilize a low-pass filter.
The cutoff frequency was grid searched and we report only the best result for each baseline method.
§.§ Experimental Validation of RING
The trained RING is applied to experimental data from an IMTP that combines the four challenges of nonrigid IMU attachment, misaligned sensors and segments, magnetometer-free measurements, and a sparse sensor setup.
The MAEs in the orientation estimates for RING and the two SOTA baseline methods are reported in Table <ref> and confirm that RING outperforms both alternative methods despite solving the more challenging IMTP.
The first 15 seconds of one example sequence are shown in Figure <ref> and demonstrate RING's prediction performance and quick convergence.
In Table <ref>, we conduct an ablation study to analyze the impact of nonrigid IMU attachment, sensor-to-segment alignment, and sparse IMU setup on RING's orientation estimation accuracy.
In Figure <ref>, the experimental data is resampled to a wide range of sampling rates to assess the robustness of RING w.r.t. the sampling rate. RING achieves a nearly constant orientation estimation accuracy which only, unsurprisingly, degrades slightly for low sampling rates.
§.§ Real-time Applicability of RING
By design, RING can be applied online as it is defined by a step function (see eq. (<ref>)) that, based on the measurements at a certain timestep (see eq. (<ref>)), returns an updated internal state and the rotational state estimate (see eq. <ref>) of the KC.
Therefore, if the step function executes faster than the sampling rate requires, then it is said to be real-time capable.
After compilation, the runtime of the step function of RING is (794 ± 16.7) μs on an M2 MacBook Pro.
Thus, RING is real-time capable up to ≈ 1000 Hz.
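The real-time argument amounts to comparing the measured step runtime with the sampling-period budget 1/F; a sketch of such a timing harness is given below, with a placeholder step function standing in for the compiled RING step.

```python
# Sketch of the timing check behind the real-time claim above: measure the mean
# step runtime and compare it with the sampling-period budget 1/F.
import time
import numpy as np

def ring_step(state, measurement):                 # placeholder with negligible cost
    return state, np.zeros(4)

state, meas = np.zeros(1600), np.zeros(12)
times = []
for _ in range(1000):
    t0 = time.perf_counter()
    state, _ = ring_step(state, meas)
    times.append(time.perf_counter() - t0)
mean_us = 1e6 * np.mean(times)
print(f"step runtime: {mean_us:.1f} +/- {1e6 * np.std(times):.1f} us")
for F in (100, 200, 1000):
    print(f"F = {F:4d} Hz -> budget {1e6 / F:7.1f} us, real-time: {mean_us < 1e6 / F}")
```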
§ CONCLUSION
In this work, we have extended RING, a powerful IMT method, to generalize across a wide range of sampling rates, and we have showcased that it can simultaneously overcome the four key challenges in IMT: inhomogeneous magnetic fields, sensor-to-segment misalignment, sparse sensor setups, and nonrigid sensor attachment.
With an experimental tracking MAE of 8.10± 1.19 degrees if all four challenges are present simultaneously, RING accurately estimates the rotational state of a three-segment KC from IMU measurements.
RING leverages a decentralized network of message-passing RNNs that is trained on simulated data but is capable of zero-shot generalization to real-world data.
Our evaluations reveal RING's superiority over SOTA methods in terms of accuracy and applicability, additionally, we demonstrate RING's robustness across various sampling rates, and its real-time capability.
By enabling plug-and-play usability and extending the applicability of inertial motion capture technology, RING not only advances the field but also opens new avenues for research and practical applications in environments previously deemed challenging.
The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU). The hardware is funded by the German Research Foundation (DFG).
|
http://arxiv.org/abs/2409.02845v1 | 20240904161741 | Multi-Track MusicLDM: Towards Versatile Music Generation with Latent Diffusion Model | ["Tornike Karchkhadze", "Mohammad Rasool Izadi", "Ke Chen", "Gerard Assayag", "Shlomo Dubnov"] | cs.SD | ["cs.SD", "cs.MM", "eess.AS"] |
Multi-Track MusicLDM: Towards Versatile Music Generation with Latent Diffusion Model
Tornike Karchkhadze^1, Mohammad Rasool Izadi^2, Ke Chen^1, Gerard Assayag^3, Shlomo Dubnov^1
^1UC San Diego, CA 92093, USA  ^2Bose Corp., Framingham, MA 01701, USA  ^3Institute for Research and Coordination in Acoustics and Music (IRCAM), Paris, France
{tkarchkhadze, kchen, sdubnov}@ucsd.edu, [email protected], [email protected]
======================================================================================
§ ABSTRACT
Diffusion models have shown promising results in cross-modal generation tasks involving audio and music, such as text-to-sound and text-to-music generation. These text-controlled music generation models typically focus on generating music by capturing global musical attributes like genre and mood. However, music composition is a complex, multilayered task that often involves musical arrangement as an integral part of the process. This process involves composing each instrument to align with existing ones in terms of beat, dynamics, harmony, and melody, requiring greater precision and control over tracks than text prompts usually provide. In this work, we address these challenges by extending the MusicLDM—a latent diffusion model for music—into a multi-track generative model. By learning the joint probability of tracks sharing a context, our model is capable of generating music across several tracks that correspond well to each other, either conditionally or unconditionally. Additionally, our model is capable of arrangement generation, where the model can generate any subset of tracks given the others (e.g., generating a piano track complementing given bass and drum tracks). We compared our model with existing multi-track generative model and demonstrated that our model achieves considerable improvements across objective metrics, for both total and arrangement generation tasks. Sound examples can be found at https://mt-musicldm.github.iohttps://mt-musicldm.github.io
§ INTRODUCTION
In recent years, diffusion models <cit.> have demonstrated their ability to learn complex distributions, rendering them well-suited for data types such as raw audio. These advancements have significantly impacted the domains of speech and general audio generation <cit.>, as well as the generation of music directly within the audio domain <cit.>. User-controlled neural audio synthesis, particularly in music generation, has the potential to revolutionize music industry by providing musicians with tools for quick compositional prototyping, speeding up the creative process. Additionally, these tools contribute to the democratization of the field by allowing amateur musicians to leverage generated pieces to compose without extensive knowledge of instruments, music theory and years of musical education.
Most generative music models operate by conditioning music on high-level ideas expressed as text, such as genre and mood, leading to text-to-music (TTM) task. However, TTM paradigms have conceptual problems. One challenge is that music, as an abstract entity, is generally difficult to describe with words. Another issue is that text is not an effective medium to convey time-dependent musical attributes, which are crucial for musical expression. Furthermore, music is a multilayered art in which many tracks of instruments simultaneously play their unique roles while being in correspondence with each other on lower-level attributes like notes, timbre, dynamics, harmony, and rhythm. The essence of musical composition often boils down to arrangement—structuring the piece, orchestrating interactions, and determining the overall sonic character by distributing texture among different instruments or voices, a complexity that is challenging to convey through text.
To bridge the conceptual gaps in current music generation models, we introduce the Multi-Track MusicLDM, a diffusion-based model that generates coherent music in multiple tracks or stems (terms we use interchangeably), ensuring they correspond and collectively create a unified musical piece. To achieve this, we utilized the MusicLDM <cit.> model, which is an adaptation of AudioLDM <cit.> for music, and transformed it into a multi-track audio diffusion model. Taking a multi-track inspiration from the recent work MSDM <cit.> and operating on the latent space, our model learns a joint probability distribution for tracks that share a contextual structure and generates music mixtures in separate tracks, a process referred to as total generation. Leveraging the text and music conditioning capabilities of the CLAP <cit.> component, our model provides options for additional conditioning. Audio conditioning can be applied by using an existing reference track processed through CLAP's audio branch to influence the generation. This track can guide the overall character and content of the generated music, effectively enabling a transfer that preserves the essence while allowing creative deviations. Similarly, text conditioning can influence the genre, mood and overall character of the generated tracks. Additionally, by employing a well-known method from diffusion model, inpainting, our model is capable of imputation of tracks or generating any subset of tracks given others. We refer to this process as arrangement generation. Arrangement generation enables, for example, the creation of specific musical parts, like piano or guitar, to accompany existing bass and drum tracks. The conditioning mentioned above can be used in combination with arrangement generation, giving our model an additional edge in creative endeavors. By designing desired instrument combinations and using audio and text conditioning, users have the flexibility to generate specific arrangements or full musical pieces, tailoring the model to their compositional needs.
In our experiments, we demonstrate that our model can generate realistic music across various scenarios: total track-by-track music generation, conditional generations, and arrangement generation with any combination of stems. Furthermore, we compared our model with the existing open source multi-track generative model, MSDM, and demonstrated that our model, trained on the same dataset, achieves considerable improvements in the Fréchet Audio Distance (FAD) <cit.> score compared to the baseline.
As part of our commitment to reproducibility and open science, the code and checkpoints of this study will be made available on GitHub upon acceptance of this paper.
§ BACKGROUND
Music Generation. Our system, Multi-Track MusicLDM (MT-MusicLDM), depicted in Figure <ref>, is an extended version of the MusicLDM model capable of learning and generating multiple simultaneous music stems. MusicLDM shares its architecture with AudioLDM, which in turn is based on a Latent Diffusion Model (LDM) <cit.> framework with a cascaded model structure. The system comprises a text-audio encoder, an LDM generator, a Variational Autoencoder (VAE) <cit.>, and a vocoder. The role of text and audio encoding is played by CLAP <cit.>, serving as the system's encoder and mapping audio and text into a shared embedding space for conditioning. This is followed by an LDM that acts as the main generator, operating in the latent space (instead of directly on audio or spectrograms), allowing for training and inference on limited computational resources while retaining generation quality. This latent space is achieved by a VAE that manages the data dimensionality, pre-trained to compress and reconstruct Mel-spectrograms into latent representations. Finally, a HiFi-GAN <cit.> vocoder synthesizes the audio output from the generated Mel-spectrograms.
Arrangement generation. The most similar work to ours is the commercial product JEN-1 Composer <cit.>, a framework that uses a latent diffusion-based music generation system for versatile multi-track music generation. Different from our approach, they use a latent space obtained directly from audio with their own pre-trained autoencoder model for audio reconstruction, following the Jen-1 architecture proposed in <cit.>. Additionally, the dataset used is also different from ours, as they used their own private dataset, which is larger and of higher quality than the publicly available Slakh2100 <cit.> dataset that we used. Unfortunately, as it is a commercial product, they do not share code, datasets, or provide a free API for audio generation, making direct comparison to our work impossible. Other recently published works have explored music-to-music and arrangement-like music generation. STEMGEN <cit.> presented an alternative paradigm for music generation by introducing a model that can listen and respond to musical context. Different from ours, STEMGEN uses a transformer language model-like architecture on hierarchical discrete representations from VQ-VAEs <cit.> to model musical tracks. Some other works approach the arrangement task with a one-to-many or many-to-one paradigm. For example, <cit.> generates bass accompaniment using audio autoencoders. On the other hand, SingSong <cit.>, utilizing an architecture similar to AudioLM <cit.>, proposed a method for generating instrumental music to accompany input vocals and demonstrated promising results. However, these works primarily concentrate on one instrument, offering limited versatility. Another work that serves as our baseline, MSDM <cit.>, introduced a novel diffusion-based, multi-source generative framework trained via denoising score-matching <cit.>. MSDM is capable of synthesizing music, creating arrangements, and separating musical sources while operating in the raw waveform domain. While MSDM is notable for its flexibility, it exhibits limitations in audio quality and musical coherence. Interestingly, although our model surpasses MSDM in most tasks, it struggles with source separation—a limitation we attribute to its operation within a latent space where mixtures and stems do not maintain a linear relationship. This insight is prompting our future research in this direction.
Arrangement generation was extensively studied in the symbolic music domain. Using the audio-to-MIDI generation paradigm, these works typically focus on generating harmonization for a given melody, such as in <cit.>, or extracting pitch information from input vocals and generating chords suitable for the melody, as seen in the commercial product Microsoft Songsmith, inspired by <cit.>. Other significant contributions include the works by <cit.> and <cit.>, which explored generating kick drum and bass accompaniments, respectively, in the MIDI domain. The multi-track MIDI music generation paradigm was also employed in systems like MuseGAN <cit.>, MIDI-Sandwich <cit.>, Multitrack Music Machine <cit.>, and MTMG <cit.>. Additionally, MIDI accompaniment generation based on audio conditioning was suggested in <cit.>.
§ METHOD
We introduce MT-MusicLDM, depicted in Figure <ref>, as an integration of MusicLDM and MSDM. In section 1, we expand the LDM latent space to adopt MusicLDM for multi-stem music generation. This expansion enhances our control and versatility over the music generation process. Next, in section 2, we create musical arrangements for a given subset of tracks. Section 3 explores the use of CLAP encoders to control the style of music with text and musical prompts.
§.§ Multi-Track LDM
Following AudioLDM and MusicLDM work, we employ denoising diffusion probabilistic models (DDPMs) <cit.> for audio/music generation. DDPMs belong to a category of latent generative variable models. Given two mappings between a time-domain sample x and its corresponding latent space representation x_latent, i.e. x ↔ x_latent, the generation problem is to model q(x_latent) instead of q(x). We first describe the mappings to and from the latent-space for multi-track music and then explain the training and inference of the LDM.
Let x_mix be a mixture composed of S stems x_s with a duration of T_mix for s ∈{1, …, S} such that x_mix = ∑_s=1^S x_s. Also, denote the stack of stems as x with dimensions S × T_mix.
As shown in Figure <ref>, the individual stack of stems x is transformed into a Mel-spectrogram x_Mel, using short-time Fourier transform (STFT) and Mel operations, with dimensions S × T × F where T and F show the time and frequency dimensions of Mel-spectrogram, respectively. Subsequently, the VAE encoder translates x_Mel into a latent, compressed representation x_latent with dimensions S × C ×T/r×F/r, where r indicates the VAE's compression ratio, and C denotes the number of channels in the latent space. Following this, the VAE decoder converts the latent, compressed representation, x_latent, to the Mel-spectrogram domain as x̂_Mel, which is then transformed to the time-domain x̂ by the HiFi-GAN vocoder. Therefore, the time-domain to latent space mapping, x → x_latent, is composed of STFT, Mel, and VAE encoder, while the reverse mapping x_latent→ x, employs VAE decoder and HiFi-GAN vocoder.
The generative model for x_latent via the diffusion process is defined as p_θ(z_0) = ∫ p_θ(z_0:N) dz_1:N where z_0 = x_latent is the latent representation and θ corresponds to the parameters of the LDM model. Within the DDPM framework, the LDM generator operates in the latent space to generate a latent representation z_0 ∼ q(z_0), either conditionally or unconditionally, from Gaussian noise z_N ∼𝒩(0, σ_N^2 I). The variable n in z_n, where n ∈ [1, …, N], represents the step number in the diffusion model's forward or reverse process, and N denotes the number of total steps. The forward pass gradually introduces Gaussian noise, ϵ∼𝒩(0, I), to z_0 = x_latent, i.e., z_n = z_0 + σ_n ϵ, ultimately resulting in isotropic Gaussian noise z_N with a distribution 𝒩(0, I) over N steps. Conversely, the reverse process aims to eliminate noise by estimating the injected noise, ϵ, at every step and obtaining z_n-1 from z_n, thereby incrementally reconstructing z_0 = x̂_latent. The DDPM parameterizes the reverse Gaussian distribution p(z_n-1|z_n) with a neural network ϵ_θ(z_n, n):
p(z_n-1|z_n) =𝒩( z_n-1 | μ_θ(z_n, n), σ_n^2)
μ_θ(z_n, n) = ((σ_n^2 - σ_n-1^2) ϵ_θ(z_n, n) + σ_n-1^2 z_n ) / σ_n^2.
Optimizing the evidence lower bound on the log-likelihood q(z_0) simplifies to minimizing the mean squared error between the predicted noise ϵ_θ and the Gaussian noise ϵ, at each step, as follows:
L(θ) = 𝔼_z_0, ϵ, n ‖ ϵ - ϵ_θ(z_n, n, [c_cond]) ‖^2
where [c_cond] denotes the optional use of conditioning, which we will touch upon in more detail later in the section. Therefore, having the time-domain mapping, x → x_latent, one can estimate the LDM θ by minimizing the loss function L(θ), presented in Eq. <ref>. For inference, the generation begins with the Gaussian noise prior z_N, followed by an iterative backward sampling process from p(z_n-1|z_n) for each n ∈{N, …, 1}, as outlined in Eq. <ref>. The final step involves mapping the generated latent representation, x̂_latent = z_0, to the time-domain using x_latent→ x. To form the mixture of tracks, one can simply add all the rows of x̂, i.e. x̂_mix = ∑_s=1^S x̂_s, with each x̂_s being the s-th row of x̂.
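A schematic of a single training step implied by this objective is sketched below; the noise schedule, latent shapes, CLAP embedding size, and the stand-in network are placeholders rather than the actual MT-MusicLDM code, and the conditioning dropout anticipates the classifier-free guidance training described in the next subsection.

```python
# Schematic single training step for the latent diffusion objective above
# (schedule, model, and shapes are illustrative placeholders).
import torch

def training_step(eps_model, z0, c_cond, sigmas, p_uncond=0.1):
    """z0: (B, S, C, T_lat, F_lat) latents; c_cond: (B, D) CLAP embeddings."""
    B = z0.shape[0]
    n = torch.randint(1, len(sigmas), (B,))                  # random diffusion step per sample
    sigma = sigmas[n].view(B, 1, 1, 1, 1)
    eps = torch.randn_like(z0)
    z_n = z0 + sigma * eps                                    # forward process z_n = z_0 + sigma_n * eps
    # classifier-free guidance training: drop the conditioning with prob. p_uncond
    keep = (torch.rand(B, 1) > p_uncond).float()
    eps_hat = eps_model(z_n, n, c_cond * keep)
    return ((eps - eps_hat) ** 2).mean()                      # the MSE objective L(theta)

# toy usage with a stand-in network
class DummyEps(torch.nn.Module):
    def forward(self, z_n, n, c):
        return torch.zeros_like(z_n)

sigmas = torch.linspace(1e-3, 1.0, 1000)                      # placeholder noise schedule
z0 = torch.randn(2, 4, 8, 256, 16)                            # (batch, stems, channels, T/r, F/r)
c = torch.randn(2, 512)                                       # placeholder CLAP embedding size
print(float(training_step(DummyEps(), z0, c, sigmas)))
```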
For high-quality reconstructions using DDPM, a large number of steps, typically N = 1000, are traditionally necessary. However, to streamline the process and reduce computational demands, denoising diffusion implicit models (DDIM) <cit.> offer a compelling alternative. In this study, we utilize the DDIM protocol, which allows for a substantial reduction in the number of required steps, down to approximately N = 200 during inference, while still preserving the generative quality.
As Audio/MusicLDM frameworks for audio and music generation are heavily inspired by methods originally used in image generation, their network architecture is borrowed from this domain. Like in image generation, a large UNet <cit.> architecture is a common choice for diffusion models in the audio and music domain. The UNet architecture consists of two symmetrical halves: an encoder and a decoder, both enhanced with skip connections that bridge corresponding layers. To accommodate z_n with an additional dimension compared to typical image or single-channel audio representations, S × C × T/r × F/r vs. C × T/r × F/r in Audio/MusicLDM, we extend the UNet architecture by using 3D convolutional operations. We effectively enhance the operational dimensionality of UNet by interpreting the channel dimension of z_n as an additional spatial dimension. This adjustment leads to the stems’ dimension now serving as the channel dimension.
§.§ Conditional Generation
To have control over the generation p_θ(z_0), one can introduce some condition c to the diffusion process, resulting in p_θ(z_0|c). The conditional diffusion process is defined similarly to the unconditional process p_θ(z_0:N ) as follows:
p_θ(𝐳_0:N | c) = p(𝐳_N) ∏_n=1^N p_θ(𝐳_n-1 | 𝐳_n, c).
Additionally, to enable more controllable generation, LDM models often employ classifier-free guidance (CFG) <cit.>.
CFG is a technique in diffusion models that enhances control over the adherence to conditioning information during inference.
This is achieved by randomly dropping the conditioning information during training, thereby simultaneously training both conditional and unconditional versions of the LDM model. In the inference time, the strength of the conditioning can be modulated by the CFG weight ϵ̂ = wϵ_u + (1 - w)ϵ_c, where w is the guidance scale weight that balances the model's unconditional ϵ_u and conditional ϵ_c predictions.
In this study, we employ CLAP to convert text prompts and musical tracks into embeddings, which serve as the basis for conditioning the LDM.
For example, users can specify the type of guitar by conditioning on the CLAP embedding of a reference track.
Additionally, the strength of the conditioning can be adjusted using the CFG weight, increasing their creative options.
§.§ Arrangement generation
From a musical perspective, arrangement composition refers to the task of creating plausible musical accompaniments for a particular subset of given tracks. In a broader machine learning literature setting, the task of filling a partially observed the data is commonly referred to as imputation (inpainting in the image domain) and aims to fill out the missing segments of variable. Learning the joint distribution of musical tracks offers us a clear path to explore this task in the latent space.
For a given subset of tracks, x_I = { x_s | s ∈ I }, the arrangement generation task is to find x_I̅ = { x_s | s ∈I̅} as
arg max_x_I̅ p_θ( x_I̅ | x_I),
where I̅ = {1, …, S} - I.
Given the LDM, the search for x_I̅ happens in the latent space, i.e. z_I̅ given z_I.
Note that given I, one can find the latent representation of x_I and x_I̅ as follows:
x_I → m ⊙ z_0,
x_I̅→ (1 - m) ⊙ z_0,
where m is a binary mask in the latent space with the same dimension as z_0 and m_s = 1 if s ∈ I and m_s = 0 otherwise.
The generation of x_I̅ starts with sampling Gaussian noise z_N ∼ 𝒩(0, I), as in total generation, but each denoising step in the reverse process is followed by
z_n-1 ← (1-m) · z_n-1 + m · z'_n-1,
where z'_n-1 ∼ 𝒩(z'_n-1 | z_0, σ^2_n-1) is obtained by adding n-1 noise steps to z_0 through the forward process.
§ EXPERIMENTAL SETUP
§.§ Dataset
Following a similar research path as MSDM and to facilitate direct comparison, we used Slakh2100 <cit.>, a dataset widely recognized as a benchmark in the domain of music source separation. Although source separation was not our primary focus, the choice of Slakh2100 was motivated by our need for clean and high-quality multi-track audio data for learning multi-track generation. Synthesized from MIDI files using premium virtual instruments, the dataset consists of 2100 individual tracks, split into subsets of 1500 for training, 375 for validation, and 225 for testing. While the original Slakh compilation offers up to 31 distinct instrumental classes, our experiment and subsequent analyses were limited to the S=4 most prevalent classes: Bass, Drums, Guitar, and Piano. These classes were selected due to their dominant presence in the dataset, ensuring a robust and consistent basis for our evaluations.
In our experiments, we performed preprocessing steps on the dataset. We downsampled the audio from its original 22.05 kHz to 16 kHz to align with the specifications of our model. We read audio from the original tracks, creating audio segments of 10.24 seconds with an additional random shift for training samples. To convert the audio clips into a suitable feature representation, we utilized a window length of 1024 and a hop size of 160 samples to generate Mel-spectrograms with dimensions F × T = 64 × 1024. For the creation of mixed audio samples, individual tracks were combined to form mixtures. Differing from the original MusicLDM, we abstained from normalizing the separate tracks or their mixtures to prevent any potential peaking in the audio signals.
§.§ Model, Training and Evaluation Specifics
Our parameter configuration closely mirrors that of MusicLDM <cit.>, with only minimal adjustments. We extended the LDM model to accommodate the stem dimension of S=4, transforming it into a 3D LDM model. For our new 3D LDM model, we employed a UNet architecture comprising 2 encoder blocks, a middle transformer block, and 2 decoder blocks. We maintained the settings consistent with previous configurations of MusicLDM, with the sole modification being the adaptation to 3D convolutional layers. Additionally, we switched from the "spatial transformer" used by MusicLDM and AudioLDM to a generic attention block transformer, as the former was tailored for 2D, picture-like data, which did not align with our model's 3D data processing requirements.
To obtain a latent representation of Mel-spectrograms, we employed a mono-track VAE from the MusicLDM model, which boasts a compression ratio r = 4, effectively encoding stacks of Mel-spectrograms of size S × T × F = 4 × 1024 × 64 into a 3D LDM latent vector with dimensions S × C ×T/r×F/r = 4 × 8 × 256 × 16, which respectively represent stems, channels, time, and frequency. The components of MusicLDM—including the CLAP encoder, VAE, and the HiFi-GAN vocoder—were taken from the pre-trained, publicly released checkpoint of MusicLDM[https://github.com/RetroCirce/MusicLDMhttps://github.com/RetroCirce/MusicLDM]. This checkpoint was trained on an extensive collection of music audio data. However, it is worth mentioning that these components were not trained or fine-tuned for processing separate tracks, which imposes certain limitations on the final audio quality of our model. These components remained unchanged during the training phase of the 3D LDM generator, as illustrated in Figure <ref>. We utilized pre-trained weights for MusicLDM components from publicly available checkpoints.
We trained our model, MT-MusicLDM, with a dropout rate of 0.1 applied during conditional generation, effectively resulting in training both unconditional and conditional models. Training was conducted using the Adam optimizer with a learning rate of 3 × 10^-5 for a duration of up to 1000 epochs. The number of denoising steps for the LDM is set at N = 1000 during training and reduced to N = 200 for DDIM sampling during inference.
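A minimal sketch of such a training step with condition dropout is shown below; the noise schedule, the null embedding, and the exact loss are assumptions, since the MusicLDM training code handles these details internally.

```python
import torch
import torch.nn.functional as F

P_UNCOND = 0.1          # probability of dropping the condition (trains the unconditional model)
N_TRAIN_STEPS = 1000    # denoising steps at training time (200 DDIM steps at inference)
betas = torch.linspace(1e-4, 2e-2, N_TRAIN_STEPS)     # assumed linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, latents, clap_embedding):
    """One noise-prediction step; optimized with Adam at lr = 3e-5 in our setup."""
    b = latents.shape[0]
    # Replace the CLAP condition with a null embedding for ~10% of the batch,
    # so a single network learns both conditional and unconditional denoising.
    drop = (torch.rand(b) < P_UNCOND)[:, None]
    cond = torch.where(drop, torch.zeros_like(clap_embedding), clap_embedding)

    t = torch.randint(0, N_TRAIN_STEPS, (b,))
    noise = torch.randn_like(latents)
    ab = alpha_bar[t].view(b, *([1] * (latents.dim() - 1)))
    noisy = ab.sqrt() * latents + (1.0 - ab).sqrt() * noise
    return F.mse_loss(model(noisy, t, cond), noise)
```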
We evaluated our models using the Frechet Audio Distance (FAD) <cit.> metric, a widely recognized benchmark in music quality assessment. This metric was employed across all our experiments, including total generation, arrangement generation, and both audio- and text-conditioned tasks.
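FAD models the reference and generated embedding sets as multivariate Gaussians and takes the Fréchet distance between them; a compact sketch of the underlying computation (not the specific toolkit we relied on) is:

```python
import numpy as np
from scipy import linalg

def frechet_audio_distance(emb_ref: np.ndarray, emb_gen: np.ndarray) -> float:
    """FAD between two sets of audio embeddings (n_samples x dim).

    Both sets are modelled as multivariate Gaussians; the score is the
    Frechet distance between the two fitted distributions.
    """
    mu1, mu2 = emb_ref.mean(axis=0), emb_gen.mean(axis=0)
    s1 = np.cov(emb_ref, rowvar=False)
    s2 = np.cov(emb_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(s1 @ s2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```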
§ EXPERIMENTS AND RESULTS
§.§ Total Generation
We evaluated our MT-MusicLDM model in unconditional mode on the total music generation task using the Slakh test dataset. For this evaluation, we generated audio stems, mixed them to create mixtures, and then calculated the FAD between these generated mixtures and the mixtures from the test set. Given that MSDM was the first and only open source model capable of generating music in individual parts, we selected it as our baseline for comparison. Table <ref> shows the performance of our model, reported as "MT-MusicLDM (Uncond)", compared to MSDM on the same dataset. We observed that our model significantly outperforms MSDM, with a dramatic reduction in FAD scores from 6.55 to 1.36. This substantial improvement highlights our model's capability to generate high-quality and coherent music audio track-by-track.
For broader context, we also incorporated the benchmark MusicLDM scores from <cit.> in the table. We took the highest-performing variant, "MusicLDM w/ BLM Text-Finetune." This model underwent specialized training with beat-synchronous latent mixup and was further enhanced through fine-tuning on text prompts. We denote MusicLDM with an asterisk in the table to indicate that evaluations were performed on distinct datasets: Audiostock <cit.> for MusicLDM and Slakh for our model. Given the significant differences in dataset characteristics these comparisons should be viewed as contextual rather than direct.
§.§ Audio Conditional Generation
In the audio-conditioned experiment, we explored MT-MusicLDM conditioned through the CLAP audio branch, focusing on total audio generation with existing audio used for conditioning. We used audio stems mixed from the Slakh test dataset as inputs to our model's CLAP encoder and generated audio by conditioning our model with a CFG weight of w = 2.0. Subsequently, the generated stems were summed to form new audio mixtures. We then calculated the FAD score between these mixtures and the Slakh test set to evaluate the model's performance in generating coherent audio outputs. In Table <ref>, we report our result as "MT-MusicLDM (Audio Cond)." It is evident that audio conditioning with CLAP noticeably steers generation towards the test set, resulting in a further improvement in the FAD score. This validates our hypothesis and underscores the potential of our model
for audio content adaptation with CLAP conditioning, leveraging a user-selected reference track as a dynamic catalyst for creativity, opening new avenues for personalized and expressive music generation.
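At sampling time, the conditional and unconditional noise predictions learned above are blended with the CFG weight w = 2.0; a schematic of this guidance step, with the null condition represented by a zero embedding as an assumption, is:

```python
import torch

CFG_WEIGHT = 2.0   # w used for both audio- and text-conditioned generation

@torch.no_grad()
def guided_noise_prediction(model, x_t, t, cond):
    """Classifier-free guidance: eps = eps_u + w * (eps_c - eps_u)."""
    eps_uncond = model(x_t, t, torch.zeros_like(cond))
    eps_cond = model(x_t, t, cond)
    return eps_uncond + CFG_WEIGHT * (eps_cond - eps_uncond)
```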
§.§ Text Conditional Generation
To elucidate the impact of text prompts on the generation capabilities of our model, we employed MT-MusicLDM conditioned through CLAP's text encoder branch and used the Audiostock <cit.> dataset as a validation set. We searched for the tag words "energetic" and "soft" within the dataset, creating subsets of the corresponding audio files. We then generated audio files by conditioning our model with the text prompts "soft music" and "energetic music" through the CLAP text encoder, with CFG weight w = 2.0. We calculated FAD scores between the generated and target audios. Table <ref> presents the results of this experiment. Notably, mismatched prompt and target subsets yielded higher FAD scores than matched ones, suggesting that the model successfully follows the text prompts. It is worth mentioning that the model was neither trained nor fine-tuned on text, nor on the Audiostock dataset. These results underscore the potential of the model being effectively conditioned on text prompts to influence the perceptual quality of generated music, aligning with our statement that the model represents a step forward towards a versatile audio model.
§.§ Arrangement Generation
In our arrangement generation experiment, we provided the model with a subset of stems and tasked it with generating the remaining stems unconditionally. We conducted 14 distinct experiments, each focused on generating a specific stem, starting from single stems and expanding to include all possible combinations of them. To assess the model's performance, we calculated the FAD scores for each combination by comparing the generated stems to their corresponding targets from the test dataset.
We chose MSDM as our baseline for the arrangement generation task, as it is the only other open source model known to us capable of generating arrangements in a similar manner. For direct comparisons, we adopted the performance evaluation approach used by MSDM, which in turn utilized a generalized version of the FAD protocol from <cit.>, designed for arrangement generation involving multiple tracks. According to this protocol, generated tracks are mixed with existing originals, and the FAD score is calculated for the resultant mixtures, providing a robust measure of the model’s performance in producing coherent total audio outputs. Additionally, to gain a comprehensive picture, we utilized a publicly available implementation of MSDM
to generate arrangements for all combinations for direct comparison of FAD scores between solely generated stems rather than just mixtures. We pursued this approach because we believe that mixing with given stems can mask a model's performance details, thus not allowing for a detailed analysis of the model's capacity to generate each stem subset.
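Schematically, the two evaluation protocols (mixture-level FAD following MSDM, and FAD between generated stems and their targets) can be summarised as below; embed stands for a clip-level audio embedding model and fad for a Fréchet-distance routine, both placeholders for the actual tooling.

```python
import numpy as np

def arrangement_fad(examples, embed, fad):
    """examples: list of dicts with 'given', 'generated' and 'target' stem waveforms."""
    ref_mix, gen_mix, ref_stem, gen_stem = [], [], [], []
    for ex in examples:
        given = sum(ex["given"])
        ref_mix.append(embed(given + sum(ex["target"])))      # original full mixture
        gen_mix.append(embed(given + sum(ex["generated"])))   # generated stems mixed with given ones
        ref_stem.append(embed(sum(ex["target"])))             # targets of the missing stems only
        gen_stem.append(embed(sum(ex["generated"])))          # generated stems only
    return (fad(np.stack(ref_mix), np.stack(gen_mix)),        # mixture-level protocol
            fad(np.stack(ref_stem), np.stack(gen_stem)))      # stem-level comparison
```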
In Table <ref>, we report the FAD scores for all instrument stems (B: Bass, D: Drums, G: Guitar, P: Piano) and their combinations. The upper two rows of the table present a comparison of FAD scores for mixes following the protocol described above, while the lower two rows detail the FAD scores between generated tracks and their targets. Our model significantly outperforms MSDM in every combination except for guitar stem generation. By analyzing the values in the bottom row and reflecting on our listening experiences during the experiments, we noticed that our model demonstrates notably stronger performance on drums and bass compared to guitar and piano outputs, which are occasionally slightly inferior, as evidenced by the scores (0.76 and 1.07 versus 2.76 and 1.80), and at times exhibit similarities with each other. This observation is further supported by the GP combination, which refers to the guitar and piano pair and registers the highest FAD score among all combination categories. Additionally, we observed that when drums are not provided and the model lacks clear rhythmic cues, it often struggles to maintain rhythmic coherence of the generated tracks with the given ones. This limitation is reflected by slightly higher FAD scores for the combinations where drums are not included in the provided stems. Addressing these limitations and achieving balanced performance across all stems poses an interesting challenge for future research.
§ CONCLUSION AND FUTURE WORK
We proposed the MT-MusicLDM model, a versatile framework designed to empower creators to generate and compose music in a variety of modes. This includes total track-by-track generation, conditioning generation with reference music tracks or textual inputs, and creating arrangements using any combination of given and generated instrument tracks. Our experiments and evaluations demonstrate that our model produces high-quality sounds across these generative tasks, achieving musical coherence and significantly outperforming the baseline. This work opens several promising avenues for future research.
Although MT-MusicLDM has shown significant results, limitations remain. These stem from the fact that our model relies on pre-trained components from MusicLDM, such as the VAE and HiFi-GAN vocoder, which were trained on music mixes and were neither fine-tuned for nor specialized in processing individual stems. Additionally, the use of the Mel-spectrogram domain further limits our model's capacity to yield high-fidelity, state-of-the-art audio compared to commercial counterparts. Another source of limitation is the choice of dataset, as Slakh2100 is very small considering the data-intensive nature of diffusion models and the task of audio generation. Furthermore, the Slakh dataset does not contain tags or textual descriptions for genre or any other contextual information about its musical pieces.
Looking ahead, we aim to enhance the musical and rhythmic coherence of our model and increase its versatility by expanding the list of available instruments, potentially allowing for user-specified combinations. We intend to extend our investigations by moving away from the Mel-spectrogram domain and possibly shifting to higher sample rates. We plan to incorporate more state-of-the-art VAEs and move to larger datasets beyond Slakh, including large text-to-music datasets, as demonstrated in <cit.>. Additionally, we will explore incorporating different methods of conditioning for source separation to achieve truly versatile music generation—a comprehensive music generation framework that supports a wide range of creative expressions, including generation, arrangement, and separation.
§ ACKNOWLEDGMENTS
We thank the Institute for Research and Coordination in Acoustics and Music (IRCAM) and Project REACH: Raising Co-creativity in Cyber-Human Musicianship for their support. This project received support and resources in the form of computational power from the European Research Council (ERC REACH) under the European Union’s Horizon 2020 research and innovation programme (Grant Agreement 883313).
Muon collider probes of Majorana neutrino dipole moments and masses

Michele Frigerio, Natascia Vignaroli
=================================================================================================
Majorana neutrinos may have transition dipole moments, which violate lepton number as well as lepton flavour. We estimate the sensitivity of future colliders to the electron-muon neutrino dipole moment, λ_eμ, by considering same-sign dilepton final states. We find that hadron colliders, even the proposed FCC-hh upgrade, are sensitive only to |λ_eμ|≳ 10^-9μ_B (with μ_B the Bohr magneton), a value two to three orders of magnitude larger than current bounds from astrophysics and low-energy neutrino-scattering experiments. In the case of a future muon collider, we show that the sensitivity varies from |λ_eμ|∼ 5· 10^-9μ_B for energy √(s)≃ 3 TeV, to ∼ 10^-12μ_B for √(s)≃ 50 TeV, matching the current laboratory bounds for √(s)≃ 30 TeV.
The singular advantage of the muon collider signal would be a direct, clean identification of lepton number and flavour violation.
We also show that a muon collider would improve by orders of magnitude the direct bounds on m_eμ and m_μμ, two of the entries of the Majorana neutrino mass matrix. These bounds could be as strong as ∼ 50 keV, still far above the neutrino mass scale.
§ INTRODUCTION
Neutrino dipole moments (NDMs) are a window into new physics beyond the Standard Model (BSM) <cit.>.
Assuming that the neutrino mass eigenstates are Majorana fermions,
only transition dipole moments (TDMs), connecting different flavours, are non-vanishing, and they violate lepton number by two units.
In the flavour basis, we can denote them by λ_αβ with α,β=e,μ,τ and α ≠ β, since the matrix λ is antisymmetric.
Majorana masses induce a one-loop contribution to TDMs, which is suppressed not only by the smallness of neutrino masses, but also by a Glashow-Iliopoulos-Maiani (GIM) mechanism.
However, the magnitude of the TDMs can be significantly larger in new physics models <cit.>. In particular, compelling BSM scenarios predict a ν_μ-ν_e TDM of the order of λ_eμ∼ 10^-12μ_B, where μ_B≡ e/(2m_e) denotes the Bohr magneton (see for example <cit.> and references therein). Such large Majorana-neutrino TDMs can also be natural <cit.>, in the light of effective-field-theory arguments.
In this range, TDMs could be accessible experimentally, as detailed below.
The observation of lepton number violation (LNV), in a process induced by neutrino TDMs, would additionally establish the Majorana nature of neutrinos.
In the case of Dirac neutrino mass eigenstates, diagonal NDMs are possible, and the neutrino-mass contribution to those is not GIM-suppressed, but it is still very small, λ_ν∼ 10^-19μ_B <cit.>. More in general,
Dirac NDMs exceeding about 10^-15μ_B would not be natural, as they would induce unacceptably large neutrino masses <cit.>.
We will neglect the Dirac case in the following.
Extensive searches for NDMs have been undertaken in laboratories, as well as through astrophysical and cosmological observations. Laboratory searches typically focus on measuring low-energy scattering of solar neutrinos <cit.>.
Astrophysical bounds are obtained from stellar energy loss measurements. While these latter are subject to uncertainties in astrophysical models, they have historically been more stringent than laboratory constraints.
However, the latest laboratory bounds, λ_ν < 6.3 × 10^-12μ_B at 90% C.L. from XENONnT <cit.> and λ_ν < 6.2 × 10^-12μ_B at 90% C.L. from LUX-ZEPLIN (LZ) <cit.>, are approaching the astrophysical limits <cit.>. More precisely, these bounds apply to an effective dipole moment <cit.>, given by
λ^2_ν_e≃∑_k |U_ek|^2∑_j |λ_jk|^2, where U_ek is a neutrino mixing-matrix element, and λ_jk are the neutrino TDMs in the basis of neutrino mass eigenstates, λ_jk≡∑_αβ U_α jU_β kλ_αβ.[ For E_ν≳ 1 MeV, the expression for λ_ν_e should be corrected, to account for solar matter effects, which are E_ν-dependent <cit.>.] Therefore, these searches are sensitive only to a combination of the different TDMs.
Bounds on λ_ν are also placed by short-baseline accelerator and reactor experiments like CEνNS. These, however, are weaker by at least one order of magnitude, see <cit.> and references therein.
The current most stringent astrophysical limit, λ_ν≲ 2× 10^-12 μ_B, is obtained from plasmon decays in the red giant branch of globular
clusters <cit.>.
Cosmological bounds of the order of λ_ν≲ 3 × 10^-12μ_B can be also obtained from CMB and BBN measurements of the effective number of neutrinos <cit.>.
Another set of constraints is obtained from measurements of neutrino-to-antineutrino conversion in the solar magnetic field <cit.>. The sensitivity to λ_ν in this case depends crucially on the value of the solar transverse magnetic field, which is poorly known, so that a conservative bound on λ_ν is typically weaker than those above <cit.>. On the other hand, in contrast with all observables considered previously, the observation of an antineutrino flux from the Sun would be a direct evidence for LNV.
In this study we will explore for the first time the sensitivity to NDMs, in particular to the ν_μ-ν_e TDM, of collider experiments. These could provide important complementary tests to those that can come from astrophysics and low-energy scattering experiments. We will focus on same-sign dilepton final states, that could provide a direct evidence for LNV, and a clean measurement of λ_eμ.
We will consider current and future hadron colliders: the upcoming high-luminosity phase of the LHC (HL-LHC) and the proposed FCC-hh upgrade <cit.>, as well as a possible future multi-TeV muon collider experiment. The latter has been considered among the key points of the strategic plans for the development of particle physics both in Europe <cit.> and in the United States <cit.>, and it has attracted a considerable community effort exploring its potential for the discovery of BSM physics (see for example <cit.> for some of the most recent studies).
The LNV signatures which we consider to probe neutrino TDMs at colliders are also sensitive to Majorana neutrino masses, more precisely to the mass matrix entries in the flavour basis, m_αβ, which also violate lepton number by two units.
Neutrinoless double beta decay (0νββ) experiments can reach a very high sensitivity on |m_ee|, of order ∼0.1 eV <cit.>. In particular, 90% C.L. limits of 79-180 meV and 61-165 meV are placed respectively by GERDA <cit.> and KamLAND-Zen <cit.>. Indirect constraints on the other entries of the Majorana mass matrix can be inferred from this bound on m_ee, with some little amount of theoretical prejudice. On the other hand, colliders
can be directly sensitive to m_αβ for αβ ≠ ee,
providing complementary information on the nature of neutrino masses.
The hadron collider sensitivity on |m_μμ| has been recently explored in <cit.>, which estimates sensitivities ranging from ∼ 5.4 GeV at the HL-LHC to ∼ 1.2 GeV at the FCC-hh. Experimental searches <cit.> have been also performed at the 13 TeV LHC with 140 fb^-1, giving the following 95% C.L. bounds on Majorana masses: |m_eμ|<13 GeV, |m_ee|<24 GeV <cit.> and |m_μμ|<16.7 GeV <cit.>.
The recent study in <cit.> also presents projected sensitivities on Majorana masses of a future same-sign muon collider. The study reports sensitivities similar to those of the FCC-hh for a 30 TeV collision energy.
The B- and K-meson factories like the LHCb or NA62 experiments are also sensitive to m_μμ or m_eμ through LNV meson decay processes like B^±(K^±) →π^∓ℓ^±ℓ^± <cit.>. The bounds derived in <cit.> on the basis of the study in <cit.> show, however, that their sensitivities would be lower than the projected sensitivities of the FCC-hh: NA62 can test with its 2017 data set masses of the order of 50 GeV, while LHCb sensitivities with 300 fb^-1 are about five orders of magnitude worse <cit.>.
Ref. <cit.> also reviews the possibility to use muon-to-positron conversion to set a bound on m_eμ, which is however subject to large uncertainties from nuclear matrix elements.
In this work, we will identify for the first time a strategy to probe Majorana masses at a future μ^+μ^- muon collider, and we will present the corresponding projected sensitivities on m_eμ and m_μμ.
The paper is organized as follows.
We introduce the theoretical framework for neutrino TDMs in section <ref>. The possible probes of TDMs at hadron colliders are described in section <ref>, while we present in section <ref>
our search for TDMs in the case of a future muon collider. In section <ref>, we employ the muon collider to test Majorana neutrino masses. We finally discuss our results
in section <ref>.
§ NEUTRINO DIPOLE MOMENTS: THEORETICAL FRAMEWORK
Majorana neutrinos can have
transition dipole moments (TDMs), that is to say, dipole moments off-diagonal in flavour space <cit.>. They can be described by five-dimensional operators,
L ⊃ (1/4) (ν_Lα)^c σ^μν λ_αβ ν_Lβ A_μν + h.c. = (1/4) ν_α σ^μν (μ_αβ + i d_αβ γ_5) ν_β A_μν ,
μ_αβ ≡ i Im λ_αβ ,    d_αβ ≡ i Re λ_αβ ,
where ν_L are left-handed spinors, ν≡ν_L + (ν_L)^c are Majorana spinors,
σ^μν≡ (i/2)[γ^μ,γ^ν],
A_μν is the photon field strength,
and α,β are flavour indexes.
The TDM matrix λ has mass-dimension minus one, and it is antisymmetric in flavour space, λ_αβ=-λ_βα.
Like the Majorana neutrinos masses m_αβ, defined later by Eq. (<ref>), the neutrino TDMs λ_αβ violate the lepton number by two units.
The magnetic dipole moments (MDMs) μ_αβ and the electric dipole moments (EDMs) d_αβ are defined to be imaginary.
The prefactor 1/4 guarantees the standard normalisation for the ν̅νγ vertex <cit.>.
The electromagnetic dipole operators in Eq. (<ref>) should be the low-energy realisation of SU(2)_w× U(1)_Y invariant operators, generated by some LNV new physics at scale Λ_NP.
The lowest-dimensional such operators contributing to neutrino TDMs are two dimension-7 operators <cit.>,
(O_B)_αβ = g^'( ℓ_Lα^ c ϵ H ) σ^μν(H^T ϵℓ_Lβ) B_μν ,
(O_W)_αβ = i g ε_abc( ℓ_Lα^ c ϵσ^a σ^μνℓ_Lβ) ( H^T ϵσ^b H ) W^c_μν ,
where H and ℓ_L are the Higgs and lepton SU(2)_w doublets,
the σ^a are the Pauli matrices, ϵ=-i σ^2, W_μν^a and B_μν are the SU(2)_w and U(1)_Y gauge field strengths.
In unitary gauge, H=[0 (v+h)/√(2)]^T. After
spontaneous symmetry breaking (SSB), setting to zero the Higgs boson field h, and going to the basis of physical gauge bosons, one obtains the following effective interactions:
(O_B)_αβ|_v = -g^' v^22( ν^ c_Lα σ^μνν_Lβ) (c_w A_μν-s_w Z_μν) ,
(O_W)_αβ|_v = -g v^2 ( ν^ c_Lα σ^μνν_Lβ) (s_w A_μν+c_w Z_μν+2ig W^-_μ W^+_ν)
- g v^2√(2) ( ν^ c_Lα σ^μν e_Lβ + e^ c_Lα σ^μνν_Lβ) [W^+_μν
+2ig W^+_μ(s_w A_ν +c_w Z_ν)
] .
Assuming L⊃ C^B_αβ (O_B)_αβ + C^W_αβ (O_W)_αβ + h.c., the (tree-level) matching onto the neutrino TDM reads[
In order to compare with <cit.>, note their μ_αβ
corresponds to λ_αβ/2, and they use the convention v≃ 174 GeV while here we take v≃ 246 GeV.]
λ_αβ= -2ev^2(C^B_αβ+2C^W_αβ) .
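As an illustration of this matching, the following numeric sketch converts a target |λ_eμ| in Bohr magnetons into the corresponding Wilson coefficient, assuming that only O_W is generated; the naive identification C^W ~ 1/Λ_NP^3 is only meant to indicate the order of magnitude of the new-physics scale.

```python
import numpy as np

# Natural units, GeV
m_e = 0.511e-3                       # electron mass
e = np.sqrt(4 * np.pi / 137.036)     # electromagnetic coupling
v = 246.0                            # Higgs vev
mu_B = e / (2 * m_e)                 # Bohr magneton, in GeV^-1

def wilson_CW(lam_over_muB: float) -> float:
    """|C^W_emu| in GeV^-3 reproducing a given |lambda_emu| / mu_B (with C^B = 0)."""
    return lam_over_muB * mu_B / (4.0 * e * v**2)

for lam in (1e-9, 1e-12):
    cw = wilson_CW(lam)
    scale = cw ** (-1.0 / 3.0)       # naive Lambda_NP if C^W ~ 1 / Lambda_NP^3
    print(f"|lambda| = {lam:g} mu_B -> C^W ~ {cw:.1e} GeV^-3, Lambda_NP ~ {scale / 1e3:.0f} TeV")
```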
The operator O_W will be of particular relevance for our analysis at hadron colliders.
For this part of the analysis, we will work under the simplifying assumption that new physics above the electroweak scale generates only O_W.
For the muon collider analysis, we will study the effect of both O_B and O_W.
One should keep in mind that new physics may violate lepton number without inducing a neutrino TDM, at least at tree level.
In particular, if new physics happens to generate C^B=-2C^W, the combination in Eq. (<ref>) vanishes, still one could observe same-sign different-flavour dileptons at colliders.
More in general, other dimension-7 operators violate lepton number by two units, without contributing to neutrino TDMs. As an example, let us consider the operator
(Õ_W)_αβ = g ( ℓ_Lα^ c ϵ σ^μν ℓ_Lβ ) ( H^T ϵ σ^a H ) W^a_μν .
It involves exactly the same fields as O_W, but with a different contraction of SU(2)_w indexes, such that Õ_W is actually symmetric in flavour,
(Õ_W)_αβ = (Õ_W)_βα.[
We remark that the two operators, O_W and Õ_W, both involve two lepton doublets, two Higgs doublets and one SU(2)_w field strength, and they are clearly independent. In the classification of dimension-7 operators presented in Refs. <cit.>, there is only one operator which involves this same set of fields. This discrepancy might be due to the choice of basis for the complete set of dimension-7 operators.]
Once the Higgs doublet is replaced by its vev, one obtains
(Õ_W)_αβ|_v =
g v^2√(2)(-ν^ c_Lα σ^μν e_Lβ + e^ c_Lα σ^μνν_Lβ) [W^+_μν
+2ig W^+_μ(s_w A_ν +c_w Z_ν)
] .
Such effective interactions can produce same-sign dileptons at colliders, with either equal or different flavours.
This or other Δ L =2 operators may induce a neutrino TDM via operator mixing, that is, they may contribute to the Wilson coefficient of Eq. (<ref>) at loop level.
An exception is the case of lepton-flavour conserving new physics, which may induce e.g.
(Õ_W)_αα, but it does not generate neutrino TDMs even at loop level.
Since we are interested in setting the strongest possible constraint on the neutrino TDMs, in the following we will focus on O_W or O_B only.[
A recent survey of LNV dimension-7 operators has been presented in <cit.>. In this study, bounds from low-energy experiments and from current and future hadron colliders are derived on the different operators. In the case of collider bounds, however, the analysis does not consider different lepton flavors in the final state, therefore an analysis of the O_B and O_W operators here considered is missing. ]
§ TESTING NEUTRINO DIPOLE MOMENTS
AT HADRON COLLIDERS
We focus on neutrino dipole transitions from the electron flavour to the muon flavour.
At hadron colliders,
it is possible to probe TDMs generated by the O_W operator, through the effect of the W-lepton interactions contained in Eq. (<ref>),
L⊃ 2√(2) g C^W_eμ v^2 (ν_Lμ^ cσ^μν e_L
- ν^ c_Leσ^μνμ_L) ∂_μ W^+_ν +h.c. ,
where we added the two identical contributions from αβ=eμ,μ e.
The signal to search for is the Δ L=2 process with two same-sign different-flavour leptons, accompanied by two jets, whose dominant Feynman diagram is shown in Fig. <ref>.
We include the effective W interaction in Eq. (<ref>) in MadGraph 5 <cit.> by using Feynrules <cit.>, and calculate the cross section, at LO in QCD, at a hadron collider with √(s)=13.6 TeV, corresponding to the High-Luminosity LHC collision energy and the current LHC Run-3, and with √(s)=100 TeV, the nominal value for the future FCC-hh collider <cit.>. We apply acceptance cuts of 20 GeV on the p_T of the two leptons and the jets and we require the leptons to lie in the central region, with a rapidity |η|<2.5, and the jets in the detector region, |η|<5. The cross section values we obtain, as a function of the absolute value of λ_eμ (in units of μ_B), are shown in Fig. <ref>, where we used the relation in Eq. (<ref>)
to relate C^W_eμ to λ_eμ. We consider both positive and negative charges for the final-state leptons: because of the proton structure, the cross section for the positive-charge configuration is roughly two times larger than the one for negative lepton charges.
The ATLAS collaboration has recently performed a search for heavy Majorana neutrinos in e^± e^± and e^±μ^± final states via WW scattering in pp collisions at √(s) = 13 TeV <cit.>. Since this search focuses on the same final state we are considering, we can recast it to obtain projected sensitivities of hadron colliders to the ν_e-ν_μ TDM.
The dominant background component is given by SM processes producing two same-sign leptons, which are mainly given by WZ and W^± W^± production in association with two jets. Other, subdominant, sources of background are represented by SM processes with two opposite-sign leptons where the charge of one of the leptons is
misidentified, and by non-prompt lepton production events, where leptons are generated from decays of long-lived particles and mainly arise from semileptonic decays of heavy-flavour hadrons.
The ATLAS signal selection strategy in the e^±μ^± channel requires two jets with invariant mass m_jj>500 GeV and rapidity separation |Δ y_jj|>2. The transverse momentum cuts on the leading-p_T and second-leading-p_T jets, p_T j(1)>30 GeV, p_T j(2)>25 GeV, are applied. It is also applied a condition on the azimuthal separation of the two leptons, |Δϕ_eμ|>2.0.
We find that the efficiency of this selection for our neutrino TDM signal is 0.40 at the HL-LHC and about 0.70 at the FCC-hh.
For the background, ATLAS estimates a total SM background of 1.51 ± 0.08 fb.
We use this estimate for the HL-LHC.
Based on this value, we can also infer the magnitude of the SM background at the FCC-hh, by scaling the cross sections of the dominant background processes to the higher collision energy.
We evaluate that the background would be of the order of 13 fb at the FCC-hh. This is a rather conservative estimate, since the background to our signal could be reduced with a tailored analysis at future colliders. For example, one could exploit the higher energy of the jj system at the FCC-hh, by applying a stronger cut on m_jj, or the high statistics collected at the HL-LHC, with the possible application of Boosted-Decision-Tree strategies.
Estimating the statistical significance (i.e. the number of σ's) by the ratio S/√(S+B), with S (B) denoting the signal (background) number of events, we find the following 2σ sensitivities:
HL-LHC |λ_eμ| < 4.0 · 10^-7 μ_B {300 fb^-1} , |λ_eμ| < 2.2 · 10^-7 μ_B {3 ab^-1} ,
FCC-hh |λ_eμ| < 6.8 · 10^-8 μ_B {3 ab^-1} , |λ_eμ| < 3.8 · 10^-8 μ_B {30 ab^-1}.
In the most optimistic case in which the background could be reduced to a negligible level,
the sensitivities would be
HL-LHC |λ_eμ| < 1.2 · 10^-7 μ_B {300 fb^-1} , |λ_eμ| < 3.8 · 10^-8 μ_B {3 ab^-1},
FCC-hh |λ_eμ| < 6.8 · 10^-9 μ_B {3 ab^-1} , |λ_eμ| < 2.0 · 10^-9 μ_B {30 ab^-1}.
We can see that, even for the FCC-hh in the most optimistic scenario of a background reduced to a negligible level, the hadron collider sensitivity would remain about three orders of magnitude weaker than the latest astrophysical and laboratory bounds.
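The 2σ reaches above follow from scaling the signal cross section as |λ_eμ|² and solving S/√(S+B) = 2 for |λ_eμ|; a small numeric sketch of this procedure is given below, where the reference signal cross section is a placeholder rather than the simulated value.

```python
import numpy as np
from scipy.optimize import brentq

def two_sigma_reach(sigma_ref_fb, lam_ref, bkg_fb, lumi_fb, eff, nsigma=2.0):
    """Smallest |lambda| (in mu_B) giving S/sqrt(S+B) = nsigma.

    sigma_ref_fb: signal cross section (fb) at the reference |lambda| = lam_ref
    bkg_fb:       background cross section after selection (fb)
    lumi_fb:      integrated luminosity (fb^-1)
    eff:          signal selection efficiency
    """
    B = bkg_fb * lumi_fb

    def excess(lam):
        S = sigma_ref_fb * (lam / lam_ref) ** 2 * lumi_fb * eff
        return S / np.sqrt(S + B) - nsigma

    return brentq(excess, 1e-15, 1e-3)

# Illustrative HL-LHC-like inputs; sigma_ref_fb is a placeholder value.
print(two_sigma_reach(sigma_ref_fb=1.0, lam_ref=1e-7,
                      bkg_fb=1.51, lumi_fb=3000.0, eff=0.40))
```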
§ TESTING NEUTRINO DIPOLE MOMENTS AT A MUON COLLIDER
In order to probe the neutrino TDMs at a muon collider,
we identify as an efficient channel
the Δ L=2 process in Fig. <ref>, with two same-sign, different-flavour leptons, accompanied by two same-sign W's, which is induced directly by the TDM operator in Eq. (<ref>). We focus again on the electron-muon TDM. In principle, a signature ℓ^±τ^± can be used to probe electron-tau and muon-tau TDMs. However, we expect lower sensitivities to these latter TDMs, due to the more difficult identification of taus.
Note that we are considering processes with the exchange of two virtual, off-shell neutrinos.
The high collision energy of a muon collider, together with the derivative present in the effective photon-neutrino vertex of the TDM operator, allow for a
significant cross section even for these processes.
The more convenient final state to search for, because of the lower (reducible) background and the larger signal branching ratios, is the one with the two W's decaying hadronically. This also permits a clear identification of the signal, with just two same-sign leptons in the final state.
Also in this case, we include the neutrino TDM interaction in Eq. (<ref>) in MadGraph 5 <cit.> by using Feynrules <cit.>, and calculate the cross section at a muon collider for several possible nominal collision energies,
√(s) = 3, 10, 20, 30, 50 TeV .
Signal and background events are simulated with MadGraph 5. Events are then passed to Pythia8 <cit.> for showering. We also apply a smearing to the jet 4-momenta, following the Delphes <cit.> default card, in order to minimally take into account detector effects.
We apply acceptance cuts of 20 GeV on the p_T of the two leptons and the jets in the final state and we require the leptons and the jets to lie in the detector region, with a rapidity |η|<5. Shrinking the acceptance of the final muon to the central region, |η|<2.5, would indeed decrease by almost a factor of 2 the signal cross section.[The final muon is in fact emitted at a relatively large rapidity, as characteristic of a t-channel process of the type of the signal in Fig. <ref> (left diagram). This is evident also from the muon rapidity distribution shown in Fig. <ref>. ] We require the jets and the leptons to be separated by an angular distance Δ R_jℓ>0.4, with Δ R=√(Δη^2+Δϕ^2). We also consider, conservatively, an identification efficiency of 85% for both muons and electrons in the final state <cit.>.
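For reference, the acceptance step just described can be sketched as a simple event filter (four-momenta are assumed to be given as pt, eta, phi dictionaries; the flat 85% lepton identification efficiency is applied as a random acceptance):

```python
import numpy as np

PT_MIN, ETA_MAX, DR_MIN = 20.0, 5.0, 0.4
LEPTON_EFF = 0.85
rng = np.random.default_rng(0)

def delta_r(a, b):
    dphi = abs(a["phi"] - b["phi"])
    dphi = 2 * np.pi - dphi if dphi > np.pi else dphi
    return np.hypot(a["eta"] - b["eta"], dphi)

def passes_acceptance(leptons, jets):
    """Acceptance cuts: pT > 20 GeV, |eta| < 5, Delta R(j, l) > 0.4, lepton ID efficiency."""
    for p in leptons + jets:
        if p["pt"] < PT_MIN or abs(p["eta"]) > ETA_MAX:
            return False
    for l in leptons:
        if any(delta_r(l, j) < DR_MIN for j in jets):
            return False
    return all(rng.random() < LEPTON_EFF for _ in leptons)
```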
Each of the final state W is produced with a large boost
so that its hadronic decay products can be efficiently collected into a single fat-jet. We reconstruct the W-originated fat-jets with FastJet <cit.> by using an anti-kt algorithm with cone size R=0.5. We find W_h reconstruction efficiency of 93% already at √(s)=3 TeV and approaching the 100% efficiency at higher collision energies.
The cross section values we obtain, as a function of the absolute value of λ_eμ (expressed in units of μ_B), are shown in Fig. <ref>. Notice that a 3 TeV muon collider would already surpass the FCC-hh yield. The dominant contribution to the signal cross section is given by the t-channel process on the left of Fig. <ref>, which makes up approximately 90% of the total signal cross section.
This is mainly due to the general enhancement, by a factor of order √(s)/E_V with E_V≪√(s) the energy of the exchanged EW boson, of the cross section for processes with t-channel boson exchange compared to s-channel ones (see for example <cit.> and references therein).
The signal we are considering, with a single pair of same-sign and different-flavour leptons accompanied by two hadronically decayed W's, W_h, is very distinctive, since there are no processes in the SM which can reproduce the same final state. Therefore, the background would consist of reducible components and will be at a very low level at a muon collider. It would mainly arise from W_h W_h ℓ^±ℓ^∓ events, dominated by the muonic component W_h W_h μ^±μ^∓,
where both the flavour and the charge of one of the final leptons is misidentified. Misidentification rates at a muon collider have been estimated to be less than 0.5% <cit.> for the lepton flavour and about 0.1% for the lepton charge <cit.>.[ The lepton charge misidentification rate is estimated on the basis of hadron collider detector performances.] An additional background source is generated by four-W production processes, W^±_h W^±_h W^∓ W^∓, where two W's decay hadronically and two like-sign W's decay leptonically, leading to a final state with two W_h plus a e^±μ^± lepton pair, accompanied by large missing energy (E_miss).
This background component can be reduced by applying a requirement on the maximally allowed missing energy. In particular, in our analysis we will exclude events with E_miss>5%·√(s).[This cut is chosen conservatively based on the achievable detector performances of a muon collider <cit.>.] We verify that a potential source of background coming from four-boson production processes, W_h W_h VV, with V=γ^⋆,Z, where two of the final four leptons are missed, because either they fall outside the detector region or do not satisfy the minimum p_T requirement, is negligible. We evaluate a total background at the level of 𝒪(1) ab.
We report in Table <ref> the background cross section values for the different collision energies.
The signal we are focusing on is also characterized by a peculiar kinematics, which can be exploited to further reduce the background and identify the signal.
We show in Fig. <ref> the transverse momentum distributions and in Fig. <ref> the rapidity distributions for the final state particles: the electron, the muon, and the two W_h's, distinguished based on their p_T: W_h(1) is the boson with the highest p_T in the event, W_h(2) the one with the lowest.[Conservatively, we do not apply restrictions on the background based on the reconstruction of the two W_h's. For the background, in this analysis the two W_h's are identified based on Monte Carlo truth. In principle, the requirement of only two fat-jets in the final state, reconstructed for example
by an anti-kt algorithm with cone-size R=0.5, would lead to a very efficient reconstruction of the W_h's in the signal.
For the background, however, the two W_h's have a boost which is, although still high, generally lower than in the signal. Therefore,
the same reconstruction procedure would be less efficient, possibly leading to a significant background reduction.
A precise evaluation of this effect would need a refined analysis of the inclusive ℓℓ+jets background, including potential misidentification of W_h's from ℓℓ VV or ℓℓ VVV events, which goes beyond the scope of the present study. ]
The signal is characterized by a final state electron and W_h's emitted at high-p_T, much higher than those from the background, and mostly in the central region, while background final particles tend to be emitted with a relatively higher rapidity, in particular the component W_hW_hℓℓ, which is dominated by EW boson scattering processes characterized by forward (high rapidity) emission of low-energy final state muons (one of which is misidentified).
Similarly, for the signal, the final muon, as typical of the t-channel topology of the dominant signal process, is characterized by a relatively lower p_T and a higher rapidity.
In order to refine our analysis, we apply the following set of minimal cuts on the p_T of the electron and of the leading-p_T W_h:
p_T e > 2.5%·√(s) , p_T W_h(1) > 5%·√(s) .
The signal efficiency to this selection is of 98% at √(s)=3 TeV and approaches 100% for higher collision energies, while the background, in particular the W_hW_h ℓℓ component from lepton misidentification, is significantly reduced. Background cross section values after this selection are reported (values in parenthesis) in Table <ref>. The selection we apply is rather simple and conservative, we therefore expect a significant improvement in the analysis strategies for a future experiment, which could for example exploit all the different kinematic distributions and advanced boosted-decision-tree (BDT) techniques.
Taking into account the maximal achievable integrated luminosity at a future muon collider <cit.>,
ℒ = 10 ( √(s)/10 TeV)^2 ab^-1 ,
which is expected to be obtained in a 5 years data-taking time <cit.>, we can estimate the 2σ sensitivities to |λ_eμ|, as given in Table <ref>.
In the square brackets we also report the sensitivity values that could be obtained in a more optimistic, but still realistic, scenario, where the background is reduced to a negligible level and slightly better lepton identification efficiencies are considered, 90% instead of 85%.
In our analysis we do not consider the impact of systematics, which will be precisely assessable only once the experiment is actually operational. We emphasise, however, that the search channel we propose, enjoying a sizable signal-to-background ratio, is generally only mildly affected by systematic uncertainties.
We can see that the current best laboratory bound, |λ_eμ|≲ 6 · 10^-12μ_B, can be matched by a muon collider with √(s)≃ 30 TeV. Sensitivities can improve by about 40% in the more optimistic scenario.
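For reference, the benchmark luminosity assumed at each collision energy follows directly from the scaling law above; a trivial sketch:

```python
def lumi_ab(sqrt_s_tev: float) -> float:
    """Benchmark integrated luminosity, L = 10 * (sqrt(s) / 10 TeV)^2 ab^-1."""
    return 10.0 * (sqrt_s_tev / 10.0) ** 2

for s in (3, 10, 20, 30, 50):
    print(f"sqrt(s) = {s:2d} TeV -> L = {lumi_ab(s):6.1f} ab^-1")
```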
§.§ Probing the effective operators O_B and O_W
In the previous section we considered a ν_e-ν_μ TDM in isolation. Here we will take into account the embedding of the TDM into the operators O_B and O_W, defined in Eqs. (<ref>) and (<ref>).
Indeed, once SU(2)_w× U(1)_Y invariance is imposed, other effective interactions emerge from the O_B and O_W operators, as displayed in Eqs. (<ref>) and (<ref>), which lead to additional contributions to the process μ^+μ^-→ e^±μ^± W_h^∓ W_h^∓. For the O_B operator, beside the contribution to the neutrino TDM effective vertex with a photon, there is an analogous vertex with the Z boson: the Feynman diagrams are analogous to those in Fig. <ref> with the photon replaced by the Z. For the O_W operator, beside the Z and γ interactions with two neutrinos, there are effective vertices with a W, a neutrino and a charged lepton (with or without an additional neutral gauge boson), as shown in the second line of Eq. (<ref>). These lead to contributions to the LNV process we are analysing, with a few representative Feynman diagrams shown in Fig. <ref>.[We obtain a total of 144 diagrams for the process μ^+μ^-→ e^±μ^± W_h ^∓ W_h^∓ induced by (O_W)_eμ. Specifically, 64 s-channel diagrams (which are subdominant), 72 t-channel processes from effective single-W, Wγ, and WZ interactions (which include the diagrams in Fig. <ref>), and 8 diagrams from single-photon and single-Z interactions (analogous to the diagrams in Fig. <ref>).]
Considered individually, the two types of contribution, those from a Wγ or WZ vertex (first diagram in Fig. <ref>) and those from a single-W vertex (second and third diagrams in Fig. <ref>), would give a strong increase to the cross section. However, we observe a destructive interference between the two types of contribution (note the former contains one less t-channel propagator, while the latter contains an extra momentum in the effective vertex). As a consequence, the cross section for O_W is overall of the same size as that for the O_B operator, with the dominant contribution actually coming from single-photon or Z interactions. Indeed, before electroweak symmetry breaking one expects a comparable rate for Δ L = 2 processes induced by O_B and O_W, and this result has to persist once the details of electroweak symmetry breaking
are taken into account.
Cross section values at a muon collider for the process μ^+ μ^- → e^±μ^± W_h^∓ W_h^∓, generated by the (O_B)_eμ and the (O_W)_eμ operators, are presented in Fig. <ref>. Cross sections are shown as a function of the muon collider collision energy, and for a reference value of the Wilson coefficients C^B_eμ and C^W_eμ, normalised to a dimensionless quantity which matches their contribution to |λ_eμ|/μ_B, according to Eq. (<ref>).
Cross sections for the O_B case are roughly a factor of two higher than those from the purely TDM operator in Fig. <ref>, while cross sections for the O_W case are slightly lower.
Kinematic distributions for the signal are very similar to those shown in Figs. <ref> and <ref> for the case of neutrino TDM only, due to the analogous dominant topology (kinematic distributions for the background are obviously identical). Therefore, we can repeat the signal selection strategy applied for the purely TDM case. We obtain the 2σ sensitivities presented in Table <ref>. We conclude that a muon collider has a slightly higher sensitivity to the C^B_eμ Wilson coefficient compared to C^W_eμ.
§ TESTING NEUTRINO MASSES AT A MUON COLLIDER
The LNV signatures here considered to probe neutrino TDMs at colliders are also sensitive to Majorana neutrino masses,
L ⊃ -(1/2) (ν_Lα)^c m_αβ ν_Lβ + h.c. ,
with m a complex symmetric 3× 3 matrix in flavour space.
Such Majorana mass term can be induced by the only dimension-5 invariant operator of the SM effective field theory, the so-called Weinberg operator <cit.>,
ℒ_5 = C^5_αβ (ℓ_Lα^ c ϵ H) (H^T ϵ ℓ_Lβ) + h.c. ,    m_αβ ≡ C^5_αβ v^2 .
As the neutrino TDMs, Majorana masses violate lepton number by two units. The only matrix element subject to a tight, direct experimental constraint is m_ee, with an upper bound from neutrinoless 2β decay searches of the order of 0.1 eV <cit.>. In the minimal scenario with three active Majorana
neutrinos only, oscillation experiments imply that all matrix elements m_αβ should respect a similar upper bound. Nonetheless, it is interesting to conceive measurements which could be directly sensitive to m_αβ for αβ ≠ ee, to avoid theoretical prejudice and test non-minimal scenarios.[
For example, in the presence of light sterile neutrinos, one can define effective mass parameters m_αβ which are enhanced by the sterile neutrino contribution, see e.g. Ref. <cit.>.]
The same LNV signature pp →ℓ^±_αℓ^±_β jj, considered in section <ref> to probe TDMs at hadron colliders, has been recently exploited as a channel to probe Majorana neutrino masses
<cit.>.
The analysis in <cit.> indicate projected sensitivities on |m_μμ| of ∼ 5.4 GeV at the HL-LHC with 3 ab^-1, and of ∼ 1.2 GeV at the FCC-hh with 30 ab^-1. The recent ATLAS searches <cit.>, performed at the 13 TeV LHC with 140 fb^-1 on the basis of the study in <cit.>, give the following 95% C.L. bounds on Majorana masses: |m_eμ|<13 GeV, |m_ee|<24 GeV <cit.> and |m_μμ|< 16.7 GeV <cit.>.
Based on these experimental results and on the projected sensitivities of <cit.>, we can roughly estimate the following exclusion sensitivities:
HL-LHC {3 ab^-1}: |m_μμ|∼ 5.4 GeV |m_ee|∼ 7.8 GeV |m_eμ|∼ 4.2 GeV ,
FCC-hh {30 ab^-1}: |m_μμ|∼ 1.2 GeV |m_ee|∼ 1.7 GeV |m_eμ|∼ 0.93 GeV .
A recent study in <cit.> also presents projected sensitivities on Majorana masses of a future same-sign muon collider, analysing vector boson scattering processes. The study reports sensitivities similar to those of the FCC-hh, assuming a 30 TeV collision energy.
In this section we will highlight a strategy to probe Majorana masses at a future μ^+μ^- muon collider, and we will present the corresponding projected sensitivities.
As the initial state consists of muons, the processes induced by a single neutrino-mass insertion are those involving either m_μ e,
m_μμ, or m_μτ. For simplicity, as usual, we will focus on final states without τ leptons.
The LNV processes μ^+ μ^- →ℓ^±_αℓ^±_β W^∓_h W^∓_h, which we used to probe neutrino TDMs in the case ℓ_αℓ_β≡ eμ, can indeed test Majorana masses as well. Some representative leading diagrams for the Majorana-mass contribution to the process μ^+μ^- → e^±μ^± W^∓_hW^∓_h are presented in Fig. <ref>.
We adopt the
UFO MadGraph implementation, SMWeinberg [Available at https://Feynrules.irmp.ucl.ac.be/wiki/SMWeinberg. We refer the reader to this web page and to Ref. <cit.> for details on the model implementation and usage. ], of the study in <cit.> for signal simulation.
As in the previous analysis for the TDMs, we consider as acceptance criteria two same-sign leptons in the final state, with p_T>20 GeV and rapidity |η|<5, and two hadronically decayed W's. Final state jets are also required to have a p_T>20 GeV and a rapidity |η|<5. The jets and the leptons must be separated by an angular distance Δ R_jℓ>0.4. We consider an identification efficiency of 85% for both muons and electrons in the final state and we exclude events with large missing energy, E_miss>5%·√(s).
Fig. <ref> shows the cross section values for the Majorana-mass signal at a muon collider, as a function of the mass |m_eμ|,
for all other masses set to zero.
Cross section values two times smaller (due to the symmetry factor for final state identical particles) are obtained for the process μ^+μ^- →μ^±μ^± W^∓_hW^∓_h, when m_μμ
is the only non-zero mass.
The kinematic distributions for the process μ^+μ^- → e^±μ^± W^∓_h W^∓_h, induced by the Majorana-mass operator, which we show in Figs. <ref> and <ref>, are similar to those for the neutrino TDM case, shown in Figs. <ref> and <ref>, with the only exception of the distributions of the final muon, which, for the Majorana-mass case, tends to be emitted more central, with p_T and η shapes analogous to those of the final electron. We can therefore repeat the same signal selection strategy applied in the previous section for the neutrino TDM case.
We apply a similar selection also for the process μ^+μ^- →μ^±μ^± W^∓_hW^∓_h. In this case, we require two same-sign muons in the final state, instead of an electron and a muon and, analogously to the signal selection cuts in Eq. (<ref>), we apply
p_Tμ (1) > 2.5%·√(s) , p_T W_h(1) > 5%·√(s) ,
where μ (1) is the leading-p_T muon in the final state.
The background for the μμ W_hW_h final state is higher than that for eμ W_hW_h, due to a larger contribution from the W_hW_hμ^+μ^- component, which is reduced by a lepton-charge misidentification factor but not by an additional lepton-flavour misidentification rate. We indicate the corresponding background cross sections in Table <ref>, after the acceptance criteria and, in parentheses, after the cuts in Eq. (<ref>).
We can thus estimate the 2σ sensitivities to the modulus of the effective Majorana masses m_eμ and m_μμ, which are shown in Table <ref>. As with the TDM analysis, we also report the sensitivities achievable in a more optimistic scenario, where the background is reduced to a negligible level, and slightly better lepton identification efficiencies are assumed.
The sensitivities of a muon collider to the Majorana neutrino masses would be significantly higher than the projected ones for the FCC-hh, in Eq. (<ref>), by about a factor of four for |m_μμ| and almost one order of magnitude for |m_eμ|, already at a 3 TeV collision energy, and by more than three orders of magnitude for a 30 TeV muon collider. A similar gain in sensitivity is found with respect to a same-sign muon collider <cit.>.
Finally, let us provide a qualitative comparison between the neutrino TDM signal and the neutrino mass signal. Comparing the Feynman diagrams of Figs. <ref> and <ref>, one observes that, in a typical leading diagram,
a λ_αβ
vertex with one derivative is traded for a m_αβ vertex with one additional neutrino propagator and one additional electroweak coupling. Therefore, one expects roughly equal sensitivities under the replacement
λ_αβ / μ_B ↔ N_m/λ · e m_αβ / (μ_B p^2) = N_m/λ · 2 m_e m_αβ / p^2 ,
with p the typical neutrino momentum in the process, where we used μ_B ≡ e/(2m_e),
and we introduced a coefficient N_m/λ to account for the different multiplicity of leading diagrams in the mass case versus the dipole case.
According to this recipe, one can compare the sensitivities in Table <ref> or <ref> with those in Table <ref>. Taking into account that our simulations indicate that N_m/λ∼ 10, we find that such sensitivities roughly match for a reasonable value of the neutrino momentum, p∼ 0.05 √(s).
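To make the correspondence concrete, the replacement above can be evaluated numerically; the example below takes N_m/λ ≃ 10 and p ≃ 0.05 √(s) as indicated by our simulations, with the mass value chosen purely for illustration.

```python
m_e = 0.511e-3   # electron mass in GeV

def lambda_equivalent(m_gev: float, sqrt_s_gev: float, n_ratio: float = 10.0) -> float:
    """|lambda|/mu_B giving roughly the same LNV signal as a Majorana mass m (GeV)."""
    p = 0.05 * sqrt_s_gev                  # typical neutrino momentum in the process
    return n_ratio * 2.0 * m_e * m_gev / p**2

# e.g. a 100 keV mass insertion at a 30 TeV muon collider
print(lambda_equivalent(100e-6, 30_000.0))
```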
§ SUMMARY AND DISCUSSION
We presented for the first time projected collider sensitivities to the electron-muon neutrino TDM.
We found that realistic hadron colliders cannot probe values λ_eμ≲ 10^-9μ_B. In contrast,
we showed that the LNV, Δ L=2 process μ^+ μ^- → e^±μ^± W^∓_h W^∓_h, at a future muon collider, could provide a much more powerful probe for the neutrino TDM.
The results of our study, in terms of sensitivities to
λ_eμ as a function of the muon collider energy, are summarised in Table <ref> (in Table <ref> we give the sensitivities to the Wilson coefficients
of the SM effective operators, which can generate such neutrino TDM).
The sensitivity strongly depends on the center-of-mass energy of the muon collider: one can cover the window between ∼ 10^-9μ_B
and ∼ 10^-12μ_B, as the energy grows from √(s)≃ 3 TeV to 50 TeV.
In particular, we showed that a muon collider with a collision energy √(s)≃ 30 TeV would reach the same level of sensitivity of the latest
low-energy laboratory experiments.
Therefore, a muon collider would provide crucial complementary information on the neutrino properties, especially in the event of a near future observation at low-energy experiments.
In order to assess the complementarity with other constraints on neutrino TDMs, we remark that,
currently, the strongest laboratory bounds, λ_ν≲ 6· 10^-12μ_B, come from low-energy solar neutrinos scattering elastically on nuclei. They apply to an effective neutrino dipole moment, therefore
they are not directly sensitive to a specific TDM λ_αβ, as the flavours of the initial and final neutrino are not detected, in contrast with direct searches at colliders.
More in general, in the event of a future detection of new physics in low-energy scattering experiments, it would be difficult to identify the kind of underlying physics beyond the SM involved. For example, it would be challenging to distinguish a possible neutrino TDM signal from a dark matter scattering on nuclei. Finally, LNV cannot be established in neutrino scattering experiments. In contrast, LNV can be tested by neutrino-to-antineutrino conversion in the solar magnetic field,
however the bound is subject to the large uncertainty on the value of such magnetic field. Finally, stringent bounds on the NDMs, slightly stronger than those from low-energy neutrino scattering,
are obtained from astrophysical and cosmological observations. These, however, might be subject to larger systematic uncertainties.
We also studied the same process, μ^+μ^-→ℓ^±_αℓ^±_β W^∓_h W^∓_h, when the source of LNV is not a NDM, but rather a Majorana neutrino mass, m_αβ. We estimated projected sensitivities on |m_μμ| and |m_eμ|, shown in Table <ref>. At 2σ, the sensitivity to m_eμ ranges from
∼ 100 MeV at a 3 TeV muon collider to ∼ 30 keV at √(s)≃ 50 TeV, with slightly weaker sensitivities in the case of m_μμ. This would represent an improvement of up to three orders of magnitude, with respect to the sensitivities of the FCC-hh, which have been estimated recently in <cit.>. To the best of our knowledge, these would be by far the strongest direct bounds on these neutrino-mass-matrix elements.
What if a large same-sign dilepton signal is observed at a muon collider, incompatible with current bounds on Majorana neutrino TDMs and masses?
Indeed, |λ_eμ|≫ 10^-11μ_B would be very difficult to reconcile with the various complementary constraints. Similarly, |m_αβ|≫ 1 eV
would be incompatible with the minimal Majorana-neutrino-mass scenario,
given the upper bound |m_ee|≲ 0.1 eV from 0ν2β-decay searches.
In these cases, a positive signal at the muon collider would point to additional, LNV new physics.
This may correspond either to new light states, e.g. sterile neutrinos with masses below the collider energy, or to additional LNV higher-dimensional operators, as discussed at the end of section <ref>.
Alternatively, in the case of a positive signal in a low-energy experiment, compatible with a NDM λ_ν≳ 10^-12μ_B, we have shown that a muon collider would have the capability to confirm or disprove the interpretation of such signal in terms of a Majorana neutrino TDM,
by a direct observation of lepton number and flavour violation.
§ ACKNOWLEDGEMENT
We thank Marco Ardu, Sacha Davidson, Sudip Jana, Danny Marfatia and Daniele Montanino for interesting conversations on neutrino dipole moments. The authors would like to express special thanks to the Mainz Institute for Theoretical Physics (MITP) of the Cluster of Excellence PRISMA+ (Project ID 39083149), for its hospitality and support during the MITP Capri Program 2022 on `Neutrinos, Flavour and Beyond'.
MF has received support from the European Union Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreements No 860881-HIDDeN and No 101086085–ASYMMETRY.
The work of N.V. is supported by ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing, funded by European Union – NextGenerationEU, reference code CN_00000013.
Giunti:2014ixa
C. Giunti and A. Studenikin,
Rev. Mod. Phys. 87, 531 (2015)
doi:10.1103/RevModPhys.87.531
[arXiv:1403.6344 [hep-ph]].
Voloshin:1987qy
M. B. Voloshin,
Sov. J. Nucl. Phys. 48, 512 (1988)
ITEP-87-215.
Barr:1990um
S. M. Barr, E. M. Freire and A. Zee,
Phys. Rev. Lett. 65, 2626-2629 (1990)
doi:10.1103/PhysRevLett.65.2626
Babu:1990hu
K. S. Babu and R. N. Mohapatra,
Phys. Rev. D 43, 2278-2282 (1991)
doi:10.1103/PhysRevD.43.2278
Lindner:2017uvt
M. Lindner, B. Radovčić and J. Welter,
JHEP 07, 139 (2017)
doi:10.1007/JHEP07(2017)139
[arXiv:1706.02555 [hep-ph]].
Zhang:2024ijy
Z. Y. Zhang, J. L. Yang, H. B. Zhang and T. F. Feng,
[arXiv:2406.18323 [hep-ph]].
Bell:2006wi
N. F. Bell, M. Gorchtein, M. J. Ramsey-Musolf, P. Vogel and P. Wang,
Phys. Lett. B 642, 377-383 (2006)
doi:10.1016/j.physletb.2006.09.055
[arXiv:hep-ph/0606248 [hep-ph]].
Davidson:2005cs
S. Davidson, M. Gorbahn and A. Santamaria,
Phys. Lett. B 626, 151-160 (2005)
doi:10.1016/j.physletb.2005.08.086
[arXiv:hep-ph/0506085 [hep-ph]].
Fujikawa:1980yx
K. Fujikawa and R. Shrock,
Phys. Rev. Lett. 45, 963 (1980)
doi:10.1103/PhysRevLett.45.963
Bell:2005kz
N. F. Bell, V. Cirigliano, M. J. Ramsey-Musolf, P. Vogel and M. B. Wise,
Phys. Rev. Lett. 95, 151802 (2005)
doi:10.1103/PhysRevLett.95.151802
[arXiv:hep-ph/0504134 [hep-ph]].
Canas:2015yoa
B. C. Canas, O. G. Miranda, A. Parada, M. Tortola and J. W. F. Valle,
Phys. Lett. B 753, 191-198 (2016)
doi:10.1016/j.physletb.2015.12.011
[arXiv:1510.01684 [hep-ph]].
Montanino:2008hu
D. Montanino, M. Picariello and J. Pulido,
Phys. Rev. D 77, 093011 (2008)
doi:10.1103/PhysRevD.77.093011
[arXiv:0801.2643 [hep-ph]].
Borexino:2017fbd
M. Agostini et al. [Borexino],
Phys. Rev. D 96, no.9, 091103 (2017)
doi:10.1103/PhysRevD.96.091103
[arXiv:1707.09355 [hep-ex]].
Huang:2018nxj
G. Y. Huang and S. Zhou,
JCAP 02, 024 (2019)
doi:10.1088/1475-7516/2019/02/024
[arXiv:1810.03877 [hep-ph]].
Coloma:2022umy
P. Coloma, P. Coloma, M. C. Gonzalez-Garcia, M. C. Gonzalez-Garcia, M. Maltoni, M. Maltoni, J. P. Pinheiro, J. P. Pinheiro, S. Urrea and S. Urrea,
JHEP 07, 138 (2022)
[erratum: JHEP 11, 138 (2022)]
doi:10.1007/JHEP07(2022)138
[arXiv:2204.03011 [hep-ph]].
Schwemberger:2022fjl
T. Schwemberger and T. T. Yu,
Phys. Rev. D 106, no.1, 015002 (2022)
doi:10.1103/PhysRevD.106.015002
[arXiv:2202.01254 [hep-ph]].
Miranda:2021kre
O. G. Miranda, D. K. Papoulias, O. Sanders, M. Tórtola and J. W. F. Valle,
JHEP 12, 191 (2021)
doi:10.1007/JHEP12(2021)191
[arXiv:2109.09545 [hep-ph]].
XENON:2022ltv
E. Aprile et al. [XENON],
Phys. Rev. Lett. 129, no.16, 161805 (2022)
doi:10.1103/PhysRevLett.129.161805
[arXiv:2207.11330 [hep-ex]].
LZ:2022lsv
J. Aalbers et al. [LZ],
Phys. Rev. Lett. 131, no.4, 041002 (2023)
doi:10.1103/PhysRevLett.131.041002
[arXiv:2207.03764 [hep-ex]].
A:2022acy
S. K. A., A. Majumdar, D. K. Papoulias, H. Prajapati and R. Srivastava,
Phys. Lett. B 839, 137742 (2023)
doi:10.1016/j.physletb.2023.137742
[arXiv:2208.06415 [hep-ph]].
Akhmedov:2022txm
E. Akhmedov and P. Martínez-Miravé,
JHEP 10, 144 (2022)
doi:10.1007/JHEP10(2022)144
[arXiv:2207.04516 [hep-ph]].
Capozzi:2020cbu
F. Capozzi and G. Raffelt,
Phys. Rev. D 102, no.8, 083007 (2020)
doi:10.1103/PhysRevD.102.083007
[arXiv:2007.03694 [astro-ph.SR]].
Li:2022dkc
S. P. Li and X. J. Xu,
JHEP 02, 085 (2023)
doi:10.1007/JHEP02(2023)085
[arXiv:2211.04669 [hep-ph]].
KamLAND:2021gvi
S. Abe et al. [KamLAND],
Astrophys. J. 925, no.1, 14 (2022)
doi:10.3847/1538-4357/ac32c1
[arXiv:2108.08527 [astro-ph.HE]].
FCC:2018vvp
A. Abada et al. [FCC],
Eur. Phys. J. ST 228, no.4, 755-1107 (2019)
doi:10.1140/epjst/e2019-900087-0
Accettura:2023ked
C. Accettura, D. Adams, R. Agarwal, C. Ahdida, C. Aimè, N. Amapane, D. Amorim, P. Andreetto, F. Anulli and R. Appleby, et al.
Eur. Phys. J. C 83, no.9, 864 (2023)
[erratum: Eur. Phys. J. C 84, no.1, 36 (2024)]
doi:10.1140/epjc/s10052-023-11889-x
[arXiv:2303.08533 [physics.acc-ph]].
MuCol-nature
Nature 625, 423-424 (2024)
doi: https://doi.org/10.1038/d41586-024-00105-9
Narain:2022qud
M. Narain, L. Reina, A. Tricoli, M. Begel, A. Belloni, T. Bose, A. Boveia, S. Dawson, C. Doglioni and A. Freitas, et al.
[arXiv:2211.11084 [hep-ex]].
Capdevilla:2024bwt
R. Capdevilla, F. Meloni and J. Zurita,
[arXiv:2405.08858 [hep-ph]].
Barducci:2024kig
D. Barducci and A. Dondarini,
[arXiv:2404.09609 [hep-ph]].
Jana:2023ogd
S. Jana and S. Klett,
[arXiv:2308.07375 [hep-ph]].
Liu:2023jta
D. Liu, L. T. Wang and K. P. Xie,
JHEP 04, 084 (2024)
doi:10.1007/JHEP04(2024)084
[arXiv:2312.09117 [hep-ph]].
Vignaroli:2023rxr
N. Vignaroli,
JHEP 10, 121 (2023)
doi:10.1007/JHEP10(2023)121
[arXiv:2304.12362 [hep-ph]].
Bottaro:2021srh
S. Bottaro, A. Strumia and N. Vignaroli,
JHEP 06, 143 (2021)
doi:10.1007/JHEP06(2021)143
[arXiv:2103.12766 [hep-ph]].
Asadi:2023csb
P. Asadi, A. Radick and T. T. Yu,
[arXiv:2312.03826 [hep-ph]].
Spor:2022hhn
S. Spor,
Nucl. Phys. B 991, 116198 (2023)
doi:10.1016/j.nuclphysb.2023.116198
[arXiv:2207.11585 [hep-ph]].
Senol:2022snc
A. Senol, S. Spor, E. Gurkanli, V. Cetinkaya, H. Denizli and M. Köksal,
Eur. Phys. J. Plus 137, no.12, 1354 (2022)
doi:10.1140/epjp/s13360-022-03569-8
[arXiv:2205.02912 [hep-ph]].
Chen:2022msz
S. Chen, A. Glioti, R. Rattazzi, L. Ricci and A. Wulzer,
JHEP 05, 180 (2022)
doi:10.1007/JHEP05(2022)180
[arXiv:2202.10509 [hep-ph]].
Capdevilla:2021kcf
R. Capdevilla, D. Curtin, Y. Kahn and G. Krnjaic,
JHEP 04, 129 (2022)
doi:10.1007/JHEP04(2022)129
[arXiv:2112.08377 [hep-ph]].
Chiesa:2021qpr
M. Chiesa, B. Mele and F. Piccinini,
Eur. Phys. J. C 84, no.5, 543 (2024)
doi:10.1140/epjc/s10052-024-12882-8
[arXiv:2109.10109 [hep-ph]].
Ruiz:2021tdt
R. Ruiz, A. Costantini, F. Maltoni and O. Mattelaer,
JHEP 06, 114 (2022)
doi:10.1007/JHEP06(2022)114
[arXiv:2111.02442 [hep-ph]].
Franceschini:2021aqd
R. Franceschini and M. Greco,
Symmetry 13, no.5, 851 (2021)
doi:10.3390/sym13050851
[arXiv:2104.05770 [hep-ph]].
Asadi:2021gah
P. Asadi, R. Capdevilla, C. Cesarotti and S. Homiller,
JHEP 10, 182 (2021)
doi:10.1007/JHEP10(2021)182
[arXiv:2104.05720 [hep-ph]].
Dolinski:2019nrj
M. J. Dolinski, A. W. P. Poon and W. Rodejohann,
Ann. Rev. Nucl. Part. Sci. 69, 219-251 (2019)
doi:10.1146/annurev-nucl-101918-023407
[arXiv:1902.04097 [nucl-ex]].
GERDA:2020xhi
M. Agostini et al. [GERDA],
Phys. Rev. Lett. 125, no.25, 252502 (2020)
doi:10.1103/PhysRevLett.125.252502
[arXiv:2009.06079 [nucl-ex]].
KamLAND-Zen:2016pfg
A. Gando et al. [KamLAND-Zen],
Phys. Rev. Lett. 117, no.8, 082503 (2016)
doi:10.1103/PhysRevLett.117.082503
[arXiv:1605.02889 [hep-ex]].
Fuks:2020zbm
B. Fuks, J. Neundorf, K. Peters, R. Ruiz and M. Saimpert,
Phys. Rev. D 103, no.11, 115014 (2021)
doi:10.1103/PhysRevD.103.115014
[arXiv:2012.09882 [hep-ph]].
ATLAS:2024rzi
G. Aad et al. [ATLAS],
[arXiv:2403.15016 [hep-ex]].
ATLAS:2023tkz
G. Aad et al. [ATLAS],
Eur. Phys. J. C 83, no.9, 824 (2023)
doi:10.1140/epjc/s10052-023-11915-y
[arXiv:2305.14931 [hep-ex]].
CMS:2022hvh
A. Tumasyan et al. [CMS],
Phys. Rev. Lett. 131, no.1, 011803 (2023)
doi:10.1103/PhysRevLett.131.011803
[arXiv:2206.08956 [hep-ex]].
Li:2023lkl
T. Li, C. Y. Yao and M. Yuan,
JHEP 09, 131 (2023)
doi:10.1007/JHEP09(2023)131
[arXiv:2306.17368 [hep-ph]].
LHCb:2018roe
R. Aaij et al. [LHCb],
[arXiv:1808.08865 [hep-ex]].
NA62:2019eax
E. Cortina Gil et al. [NA62],
Phys. Lett. B 797, 134794 (2019)
doi:10.1016/j.physletb.2019.07.041
[arXiv:1905.07770 [hep-ex]].
Atre:2005eb
A. Atre, V. Barger and T. Han,
Phys. Rev. D 71, 113014 (2005)
doi:10.1103/PhysRevD.71.113014
[arXiv:hep-ph/0502163 [hep-ph]].
Schechter:1981hw
J. Schechter and J. W. F. Valle,
Phys. Rev. D 24, 1883-1889 (1981)
[erratum: Phys. Rev. D 25, 283 (1982)]
doi:10.1103/PhysRevD.25.283
Nieves:1981zt
J. F. Nieves,
Phys. Rev. D 26, 3152 (1982)
doi:10.1103/PhysRevD.26.3152
Shrock:1982sc
R. E. Shrock,
Nucl. Phys. B 206, 359-379 (1982)
doi:10.1016/0550-3213(82)90273-5
Kayser:1982br
B. Kayser,
Phys. Rev. D 26, 1662 (1982)
doi:10.1103/PhysRevD.26.1662
Grimus:1997aa
W. Grimus and P. Stockinger,
Phys. Rev. D 57, 1762-1768 (1998)
doi:10.1103/PhysRevD.57.1762
[arXiv:hep-ph/9708279 [hep-ph]].
AristizabalSierra:2021fuc
D. Aristizabal Sierra, O. G. Miranda, D. K. Papoulias and G. S. Garcia,
Phys. Rev. D 105, no.3, 035027 (2022)
doi:10.1103/PhysRevD.105.035027
[arXiv:2112.12817 [hep-ph]].
deGouvea:2022znk
A. de Gouvêa, G. Jusino Sánchez, P. A. N. Machado and Z. Tabrizi,
[arXiv:2209.03373 [hep-ph]].
Lehman:2014jma
L. Lehman,
Phys. Rev. D 90 (2014) no.12, 125023
doi:10.1103/PhysRevD.90.125023
[arXiv:1410.4193 [hep-ph]].
Liao:2016hru
Y. Liao and X. D. Ma,
JHEP 11 (2016), 043
doi:10.1007/JHEP11(2016)043
[arXiv:1607.07309 [hep-ph]].
Fridell:2023rtr
K. Fridell, L. Gráf, J. Harz and C. Hati,
JHEP 05, 154 (2024)
doi:10.1007/JHEP05(2024)154
[arXiv:2306.08709 [hep-ph]].
Alwall:2014hca
J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli and M. Zaro,
JHEP 07, 079 (2014)
doi:10.1007/JHEP07(2014)079
[arXiv:1405.0301 [hep-ph]].
Alloul:2013bka
A. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks,
Comput. Phys. Commun. 185, 2250-2300 (2014)
doi:10.1016/j.cpc.2014.04.012
[arXiv:1310.1921 [hep-ph]].
Bierlich:2022pfr
C. Bierlich, S. Chakraborty, N. Desai, L. Gellersen, I. Helenius, P. Ilten, L. Lönnblad, S. Mrenna, S. Prestel and C. T. Preuss, et al.
SciPost Phys. Codeb. 2022, 8 (2022)
doi:10.21468/SciPostPhysCodeb.8
[arXiv:2203.11601 [hep-ph]].
deFavereau:2013fsa
J. de Favereau et al. [DELPHES 3],
JHEP 02, 057 (2014)
doi:10.1007/JHEP02(2014)057
[arXiv:1307.6346 [hep-ex]].
MuonCollider:2022ded
N. Bartosik et al. [Muon Collider],
[arXiv:2203.07964 [hep-ex]].
Cacciari:2011ma
M. Cacciari, G. P. Salam and G. Soyez,
Eur. Phys. J. C 72, 1896 (2012)
doi:10.1140/epjc/s10052-012-1896-2
[arXiv:1111.6097 [hep-ph]].
Costantini:2020stv
A. Costantini, F. De Lillo, F. Maltoni, L. Mantani, O. Mattelaer, R. Ruiz and X. Zhao,
JHEP 09, 080 (2020)
doi:10.1007/JHEP09(2020)080
[arXiv:2005.10289 [hep-ph]].
Belfkir:2023lot
M. Belfkir, T. A. Chowdhury and S. Nasri,
Phys. Lett. B 852, 138605 (2024)
doi:10.1016/j.physletb.2024.138605
[arXiv:2307.16111 [hep-ph]].
Yu:2017mpx
D. Yu, M. Ruan, V. Boudry and H. Videau,
Eur. Phys. J. C 77, no.9, 591 (2017)
doi:10.1140/epjc/s10052-017-5146-5
[arXiv:1701.07542 [physics.ins-det]].
ATLAS:2019jvq
M. Aaboud et al. [ATLAS],
Eur. Phys. J. C 79, no.8, 639 (2019)
doi:10.1140/epjc/s10052-019-7140-6
[arXiv:1902.04655 [physics.ins-det]].
Liu:2022kid
W. Liu, S. Kulkarni and F. F. Deppisch,
Phys. Rev. D 105, no.9, 095043 (2022)
doi:10.1103/PhysRevD.105.095043
[arXiv:2202.07310 [hep-ph]].
MuonCollider:2022glg
S. Jindariani et al. [Muon Collider],
[arXiv:2203.07224 [physics.ins-det]].
Han:2020uak
T. Han, Z. Liu, L. T. Wang and X. Wang,
Phys. Rev. D 103, no.7, 075004 (2021)
doi:10.1103/PhysRevD.103.075004
[arXiv:2009.11287 [hep-ph]].
Weinberg:1979sa
S. Weinberg,
Phys. Rev. Lett. 43, 1566-1570 (1979)
doi:10.1103/PhysRevLett.43.1566
Abada:2017jjx
A. Abada, V. De Romeri, M. Lucente, A. M. Teixeira and T. Toma,
JHEP 02, 169 (2018)
doi:10.1007/JHEP02(2018)169
[arXiv:1712.03984 [hep-ph]].
|
http://arxiv.org/abs/2409.03002v1 | 20240904180018 | Disruption of exo-asteroids around white dwarfs and the release of dust particles in debris rings in co-orbital motion | [
"Kyriaki I. Antoniadou",
"Dimitri Veras"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.SR",
"nlin.CD"
] |
Department of Mathematics “Tullio Levi-Civita”, University of Padua, 35121, Padua, Italy
[email protected]
Department of Physics, Aristotle University of Thessaloniki, 54124, Thessaloniki, Greece
Centre for Exoplanets and Habitability, University of Warwick, Coventry CV4 7AL, United Kingdom
Centre for Space Domain Awareness, University of Warwick, Coventry CV4 7AL, United Kingdom
Department of Physics, University of Warwick, Coventry CV4 7AL, United Kingdom
Co-orbital dynamics in rings
K. I. Antoniadou and D. Veras
Close to the Roche radius of a white dwarf (WD), an asteroid on a circular orbit sheds material that then adopts a very similar orbit. Observations of the resulting debris show a periodic behaviour and changes in flux on short timescales, implying ongoing dynamical activity. Additional encounters from other minor planets may then yield co-orbital rings of debris at different inclinations. The structure, dynamics, and lifetime of these debris discs remains highly uncertain, but is important for understanding WD planetary systems.
We aim to identify and quantify the locations of co-orbitals in WD–asteroid–dust particle three-body systems by exploring the influence of 1:1 resonant periodic orbits. We begin this exploration with co-planar and inclined orbits in the circular restricted three-body problem (CRTBP) and model the dynamical evolution of these exosystems over observable timescales. The mass ratio parameter for this class of systems (≈ 2 × 10^-11) is one of the lowest ever explored in this dynamical configuration.
We computed the periodic orbits, deduced their linear stability, and suitably seeded the dynamical stability (DS) maps. We carried out a limited suite of N-body simulations to provide direct comparisons with the DS maps.
We derive novel results for this extreme mass ratio in the CRTBP, including new unstable 3D families. We illustrate through the maps and N-body simulations where dust can exist in a stable configuration over observable timescales across a wide expanse of parameter space in the absence of strong external forces.
Over a timescale of 10 years, the maximum orbital period deviations of stable debris due to the co-orbital perturbations of the asteroid is about a few seconds. Unstable debris in a close encounter with the asteroid typically deviates from the co-orbital configuration by more than about 20 km and is on a near-circular orbit with an eccentricity lower than ≈0.01.
Disruption of exo-asteroids around white dwarfs and the release of dust particles in debris rings in co-orbital motion
Kyriaki I. Antoniadou<ref>,<ref> Dimitri Veras<ref>,<ref>,<ref>
Received XXXX; Accepted YYYY
======================================================================================================================
§ INTRODUCTION
Transiting rocky debris is a signpost of the most dynamically active white dwarf (WD) planetary systems <cit.>. Unlike exoplanet transits around main-sequence stars, which feature a characteristic single solid-body dip in the light curve, minor planets that break up around WDs contain asymmetrical, sharp, shallow, sometimes periodic and sometimes ephemeral transit features on a nightly basis <cit.>.
Although the transit observations are too complex to be explained by a simple disruption model, analytical progress has been made. For WD 1145+017, the first WD discovered with transiting debris <cit.>, signatures corresponding to small periodic dips in the transit curves are assumed to be broken-off fragments and have been leveraged to estimate both the mass (≈ 10^20 kg) and orbital eccentricity (≈ 0) of the progenitor asteroid <cit.>. Furthermore, the frequency and shape of the transit features themselves have been linked to density, composition, and layering of the progenitor asteroid <cit.>.
More generally, the shedding of mass that follows the progenitor orbit has now been investigated extensively <cit.>, as has the process of circularising the debris <cit.>. In contrast, the analytical structure of this phase space has remained relatively unexplored, despite its importance <cit.>. Uncovering the stable and unstable co-orbital regions of these systems can aid in future modelling efforts, and it may help to explain the observations.
Dynamically speaking, WD planetary systems provide extreme examples of the circular restricted three-body problem (CRTBP). The masses of WDs are about half the mass of the Sun, and an orbiting minor planet represents the secondary, leading to extremely low mass ratios. Furthermore, the Roche radius of a WD is located at about 1R_⊙. Hence, the secondary orbits are on the scale of many hours, and spatial distances on the scale of 10^-3 au become important (similar e.g. to Saturn's rings).
The uncertainty about the lifetimes of these debris discs and rings adds to the complexity of these systems <cit.>. If both discs are sufficiently long lived and the flux of asteroids into the WD Roche radius is sufficiently high, then we expect the debris to be replenished anisotropically. Simulations have demonstrated that asteroids (and potentially debris) would enter the Roche radius at a wide variety of inclinations <cit.>. Furthermore, observations do not yet provide a clear picture of the three-dimensional shapes of these debris discs <cit.>. These arguments emphasise the importance of investigating the inclined CRTBP (or 3D-CRTBP) in addition to the co-planar CRTBP (or 2D-CRTBP).
The paper is organised as follows. In Sect. <ref>, a brief literature review of co-orbital motion is provided. In Sect. <ref>, the main aspects of the method are presented, that is, the definition of periodic orbits and mean motion resonances (MMRs), and the linear stability and long-term evolution of systems hosted in specific regions in phase space. In Sect. <ref>, we compute the 2D and 3D families of 1:1 resonant periodic orbits in the CRTBP. In Sect. <ref>, we provide an extended view of the phase space around the WD–asteroid–dust particle system via dynamical stability (DS) maps, which reveal all types of orbits in co-orbital dynamics. In Sects. <ref> and <ref>, we describe the setup and results of the N-body simulations. We discuss our results in Sect. <ref>, and we finally conclude in Sect. <ref>.
§ BRIEF HISTORY AND ASPECTS OF CO-ORBITAL MOTION
<cit.> and <cit.> were the first to clearly identify families of what today is referred to as quasi-satellite (QS) orbits in the three-body problem (TBP), based on some preliminary results on retrograde periodic orbits treated by <cit.>, which were then called retrograde satellite orbits. In general, this motion takes place outside of the Hill sphere surrounding the minor primary body of the restricted TBP (RTBP), and these trajectories are much closer to it than to the major primary body <cit.>. These orbits are not related with the tadpole (TP) or horseshoe (HS) orbits and can exist far from the Lagrangian points L_3, L_4, and L_5 <cit.>.
For clarity, we refer to co-orbital motion as two bodies that share the same orbit, and QS orbits are a special class of TBP orbits that belong to the family of stable symmetric periodic orbits in 1:1 MMR. In the planar circular RTBP (2D-CRTBP), this family of stable periodic orbits was called family f by <cit.>, and <cit.>. The orbits of family f are generated by family E_11^+ <cit.>, when the problem mass parameter μ>0, and they terminate at a collision orbit with the major primary body <cit.>. In Hénon's notation, the generating orbit of family E_11^+ is a third-species orbit (the orbits of the minor primary body and the mass-less body coincide when μ→ 0). The QS orbits of family f are stable for μ<0.0477 <cit.>.
In the 2D-CRTBP, there exist Lyapounov families with unstable symmetric periodic orbits and stable asymmetric periodic orbits. The former family, called family b by <cit.>, consists of HS orbits that are generated by family E_11^- in Hénon's notation when μ>0, and it includes L_3 (the orbits of the minor primary body and the mass-less body are diametrically opposed when μ→ 0). The latter family, called family E_11^a by Hénon, includes the short-period[The period of short-period orbits tends to 2π, whereas the period of the long-period orbits tends to infinity when μ→ 0.] orbits emanating from L_4 and intersects family E_11^- when the linear stability changes (see Sect. <ref> herein). This family has a mirror image that emanates from L_5. <cit.> called these families of periodic orbits in the rotating frame of the 2D-CRTBP ℒ_3 and short-period ℒ_4^s (or ℒ_5^s). We do not discuss the family of long-period unstable asymmetric periodic orbits, which terminates in the family of short-period asymmetric ones <cit.>, and the families starting from L_1 (prograde satellite motion in family c) and L_2 (family a) <cit.>.
We instead explore four distinct types of orbits. In our case study, namely WD–asteroid–dust particle, we have the categories listed below.
* When the motion of the dust particle takes place close to the asteroid and the gravitational perturbation of the WD is assumed to be negligible, the reference system is 'asteroid-centred' and the orbits are almost those of a Keplerian retrograde satellite. The dust particle forms a close binary with the asteroid, and their centre of mass revolves around the WD (Fig. <ref>a).
* When the motion of the dust particle is quite distant from the asteroid and the gravitational attraction of the WD dominates the orbits, we may use a 'WD-centred' system and compute Keplerian planetary-type orbits (orbits of second kind according to Hénon). We may have three possible configurations:
(a) The stable configuration of QS_h symmetric orbits is obtained when the bodies are anti-aligned (apsidal difference equal to 180^∘; Fig. <ref>b).
(b) The unstable configuration of HS symmetric orbits arises when the bodies are aligned (apsidal difference equal to 0^∘) and move in opposite phases (Fig. <ref>c).
(c) The stable configuration of TP asymmetric orbits is obtained when the bodies are neither aligned nor anti-aligned (Fig. <ref>d).
The QS_h orbits and the retrograde satellite orbits are separated by the binary QS_b orbits, which belong to a domain in which none of the primaries influences the mass-less body <cit.>.
Osculating orbital elements can be used to describe these orbits, that is, a_i (semi-major axis), e_i (eccentricity), i_i (inclination), ω_i (argument of pericentre), M_i (mean anomaly), and Ω_i (longitude of the ascending node). We also use the notation ϖ_i=ω_i+Ω_i for the longitude of pericentre, Δϖ for the apsidal difference, ΔΩ for the nodal difference, and λ_i=ϖ_i+M_i for the mean longitude. Subscript i= a denotes the asteroid, and i= d signifies the dust particle.
In the CRTBP, these orbits render a system consisting of a dust particle and an asteroid moving around the WD with the same orbital period in elliptic, e_ d>0, and in circular, e_ a=0, orbits, respectively.
In the Solar System, QS orbits have been computed numerically and analytically for the Mars-Phobos system <cit.>, while the first results were computed for the Phobos spacecraft mission <cit.>. Stable QS orbits around giant planets in our solar neighbourhood were identified by analytical, semi-analytical, or numerical methods <cit.>. Distant retrograde orbits are particularly informative regarding the space mission design to the Moon, Europa, and other satellites and asteroids <cit.>. Asteroid 2003 YN_107 was the first quasi-satellite of Earth that was discovered <cit.>, followed by studies of others in 1:1 resonance with Earth, such as asteroid 2002 AA_29 <cit.>, 2004 GU_9 and 2006 FV_35 <cit.>, 2013 LX_28 <cit.>, and 2014 OL_339 and 2016 HO_3 <cit.>. Other studies involve QS orbits around Venus <cit.>, Jupiter <cit.>, and the main asteroid belt co-orbitals <cit.>.
<cit.> studied the survival of dust particles in the Didymos-Dimorphos system and their connection with stable QS orbits, while <cit.> explored the QS dust trapping and dynamical evolution in Earth's regime, and <cit.> modelled the evolution of ring particles in the HS regime in the Janus-Epimetheus system. The Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect on a particular class of Earth and Mars co-orbitals was studied for instance by <cit.>.
The detectability of extrasolar trojans and bodies on QS-like orbits is in general very difficult because they can be discarded as false positives <cit.>. Nevertheless, this has not discouraged theoretical studies <cit.>.
For reasons of completeness, we refer to the computation of families of QS orbits in the planar general TBP (2D-GTBP). When perturbation was added to the problem and a continuation with respect to the mass of the mass-less body was performed <cit.>, the QS orbits of family f generated stable symmetric periodic orbits in the 1:1 MMR in the so-called family S in the 2D-GTBP computed for various mass ratios by <cit.> and <cit.>. Family S consists of planetary-type (computed in the heliocentric system) and satellite-type orbits (computed in the planetocentric system). This group of families S was later called g(f_1,E_a) by <cit.>. Additionally, numerical, analytical, and semi-analytical treatments of the phase space were conducted for example by <cit.> because planetary configurations in which co-orbital motion could be exhibited are very intriguing <cit.>.
§ METHOD
§.§ Notion of periodic orbits related to MMRs and resonant angles
We aim to determine the location of QS, HS, and TP orbits in the 1:1 MMR and explore the influence of the 1:1 resonant symmetric and asymmetric co-planar or inclined periodic orbits on the dynamical behaviour of dust particles sharing the same orbit with an exo-asteroid within a debris disc or ring around a WD. In general, the MMR acts as a phase-protection mechanism safeguarding any planetary system even when the orbits are highly eccentric <cit.>.
The periodic orbits indicate the exact location of the MMR in phase space and are fundamental for understanding the resonant dynamics of each problem. They coincide with the fixed or periodic points on a Poincaré surface of section and with the stationary equilibrium points of an appropriately averaged Hamiltonian, as long as the latter is sufficiently accurate <cit.>. The periodic orbits are not isolated in phase space. They are continued mono-parametrically (see Sect. <ref>) and form characteristic curves or families <cit.>. These families of periodic orbits may be either generated by bifurcation points, or are isolated.
We defined the resonant angle θ=λ_ a-λ_ d <cit.> in order to distinguish the various families of periodic orbits in the 2D- and the 3D-RTBPs that dominate and shape each region in phase space. The value of the resonant angle remains constant when computed on the exact periodic orbit. In the following, we discuss the behaviour of the angles (libration or rotation) that is linked with the linear stability of the periodic orbits and the vicinity of the latter in phase space.
§.§ Linear stability and long-term planetary stability
We computed the linear stability <cit.> of the periodic orbits by calculating the eigenvalues of the monodromy matrix of the linearised equations of motion, whose number depends on the degrees of freedom of each problem. If, and only if, all the eigenvalues lie on the unit circle, the periodic orbit is classified as linearly stable. Linearly stable periodic orbits, either planar or spatial, constitute the backbone of stability domains in phase space, where invariant tori exist and the motion is regular and bounded. Therefore, any planetary system therein will survive for long-time spans. In these regions, the resonant angles and the apsidal difference librate about 0^∘ or 180^∘ when the periodic orbit is symmetric, that is, when it is invariant under the fundamental symmetry Σ: (t,x,y) → (-t,x,-y) <cit.>, or about other values when it is asymmetric and Σ maps it to its mirror image <cit.>.
When the periodic orbits are linearly unstable, chaos manifests itself and leads to irregular evolution. This may not change the configuration of the system significantly when the chaos is weak. When the chaos is strong, planetary disruption may occur and lead to close encounters, collisions, or escapes, and hence, to planetary destabilisation. In the vicinities of these chaotic seas, the resonant angles and the apsidal difference rotate.
We also took an intrinsic property of the planar periodic orbits into account, namely the vertical stability, by computing the vertical stability index <cit.>. A system of celestial bodies moving on different inclined planes can evolve stably when their mutual inclinations are low and when they are located in the neighbourhood of vertically stable 2D periodic orbits. The orbits that are vertically critical (vco) act as bifurcation points that generate 3D periodic orbits. Celestial bodies with a high mutual inclination can only survive when they are located in the neighbourhood of linearly stable 3D resonant periodic orbits <cit.>.
Therefore, the methodical computation of families of periodic orbits and the deduction of their linear stability are precise and rigorous diagnostic tools to guide the exploration of the phase space. In this study, we consider both symmetric and asymmetric 1:1 resonant periodic orbits. Planar stable asymmetric orbits were for example found in the 2D-GTBP by <cit.>. Families of 2D asymmetric periodic orbits were also found to be vertically stable in the 2D-GTBP by <cit.>, and hence, these co-orbital bodies will survive small deviations from co-planar motion.
§.§ System set-up
We first considered a WD, an asteroid, and a dust particle as point masses, m_ WD, m_ a, and m_ d=0, respectively, and defined the problem parameter as μ=m_ a/m_ WD+m_ a. In this configuration, the motion of the dust particle does not affect the motion of the WD and the asteroid while it moves under their gravitational attraction. In 1:1 resonance, the semi-major axes of the asteroid and the dust particle are almost equal (a_ a≈ a_ d).
In the inertial frames of reference, for instance, OXY or GXYZ, the periodic orbits correspond to almost Keplerian ellipses, which are described by the osculating elements and rotate around the major primary body. In Fig. <ref>, the Keplerian ellipses presented in the 2D inertial frame (left column) are computed for one period, that is, T=2π, and hence, their rotation around the WD is not apparent.
We introduced suitable rotating frames of reference to reduce the degrees of freedom of the systems and define the periodic orbits. In the 2D-RTBPs, the rotating frame of reference, for instance, Oxy, is centred at the centre of mass of the primaries, that is, the WD (major primary body) and the asteroid (minor primary body), while the motion of the asteroid is restricted on the Ox-axis <cit.>. The dust particle describes a periodic motion on the Oxy-plane (right column of Fig. <ref>). The exact resonance is defined as the 1:1 resonant periodic motion in this frame.
In the 3D-RTBPs, the dust particle was allowed to evolve on inclined orbits with respect to the orbital plane of the WD and the asteroid (see e.g. the left column in Fig. <ref>). The origin of the 3D rotating frame, Gxyz, called G, coincides with the centre of mass of the primaries, the Gz-axis is perpendicular to the Gxy-plane, and its Gx-axis is directed from the WD to the asteroid <cit.>.
An orbit, Q(t), is considered as periodic when it satisfies the condition Q(0) = Q(T), where Q(0) is the set of initial conditions (positions and velocities defined in the rotating frame) at t=0, and T is the orbital period. We assumed a solution Q(t)=(x_ d(t),x_ a(t),y_ d(t),ẋ_ a(t),ẋ_ d(t), ẏ_ d(t)) in the 2D-RTBP. This solution is periodic with a period T when it satisfies the periodic conditions
ẋ_ a(T)=ẋ_ a(0)=0,   x_ a(T)=x_ a(0),
x_ d(T)=x_ d(0),   y_ d(T)=y_ d(0),
ẋ_ d(T)=ẋ_ d(0),   ẏ_ d(T)=ẏ_ d(0).
Given the Poincaré surface of section at y_ d=0 and the fact that x_ a=1-m_ a/m_ WD+m_ a is constant and defined by the normalisation we used in the CRTBP, a symmetric periodic orbit that perpendicularly crosses the Ox-axis (ẋ_ d(0)=0) is defined as a point on the plane of initial conditions {(x_ d(0),ẏ_ d(0))}. The asymmetric periodic orbit can for example be defined in the space {(x_ d(0), y_ d(0), ẋ_ d(0), ẏ_ d(0))} when ẋ_ a(0)= 0.
For the 3D periodic orbits shown in the right column of Fig. <ref>, the xz-symmetric periodic orbits (top panel) can be represented by a point in the 3D space of the initial conditions
{(x_ d(0),z_ d(0),ẏ_ d(0))} and the x-symmetric ones (bottom panel) by {(x_ d(0),ẏ_ d(0),ż_ d(0))}.
The families are formed during the mono-parametric continuation when a parameter, for example, z_ d, is changed for the xz-symmetry, and the rest are differentially corrected in order to satisfy the respective periodicity condition. Continuation with respect to the mass of a body is also possible, for example by varying m_ a while keeping z_ d fixed, as long as the respective periodicity condition still holds. The limitations of this continuation method can be found in <cit.>. The periodicity conditions that the periodic orbits must fulfil were described in greater detail for instance in <cit.> for the 2D-RTBPs and in <cit.> for the 3D-RTBPs.
To numerically computate the periodic orbits we considered, we located without loss of generality the asteroid at a_ a=1, and its period was therefore T_0=2π. We also normalised the masses and the gravitational constant, G, so that the sum of the former was equal to unity, namely m_ WD+m_ a+m_ d=1 or m_ WD=1-m_ a and G=1.
In physical units, given the WD 1145+017 system, we adopted a WD mass m_ WD = 0.6m_⊙. Then, we assumed an asteroid with a mass equal to approximately 1:10^ th the mass of Ceres, for instance, 2.5 × 10^19 kg, located at a_ a=0.0054 au around the WD <cit.>. In this configuration, we derived a μ equal to 2.1186× 10^-11. To our knowledge, this value is one of the lowest ever studied for the 2D and 3D CRTBP. A comparably low value of μ (10^-9) for the numerical computation of QS and HS orbits in the CRTBP was adopted by <cit.> for the specific case of retrograde orbits in the 2D-CRTBP and by <cit.> for the Saturn-Janus-Epimetheus system, respectively. <cit.> explored the domains around QS orbits for μ≥ 10^-7. Finally, numerical computations yielding the extent of regular domains around HS and TP orbits in 1:1 MMR as a function of μ were performed by <cit.>.
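As a quick arithmetic check of this value, the mass ratio can be evaluated directly from the quoted masses. The short Python sketch below does so; the solar mass value is an assumption of the sketch, and the result agrees with the quoted μ to within the rounding of the adopted masses.

M_sun = 1.989e30               # kg (assumed value of the solar mass)
m_wd = 0.6 * M_sun             # white dwarf mass adopted in the text
m_a = 2.5e19                   # asteroid mass in kg adopted in the text
mu = m_a / (m_wd + m_a)
print(f"mu = {mu:.3e}")        # ~2.1e-11, the order of magnitude quoted above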
§.§ Visualisation of the phase space
In addition to the linear stability of the periodic orbits, which provides a direct insight into the long-term stability of any system in their vicinity, we also computed DS maps with the use of a chaotic indicator to visualise the extent of each domain. We adopted a version of the fast Lyapunov indicator (FLI), called the detrended fast Lyapunov indicator (DFLI), which was established as reliable and accurate by <cit.>. We constructed 200 × 100 grid planes and chose a maximum integration time for the computation of the DFLI equal to t_ max=2.5 Myr, that is, almost 4.8 billion orbits of the dust particle, which was deemed adequate for these systems. We halted the integrations either when DFLI(t)>30 or when t_ max was reached. Orbits with DFLI<2 were classified as stable. Extensive numerical computations have shown that a system can be classified as chaotic when DFLI>15, as the DFLI continues to increase steeply thereafter <cit.>.
§ FAMILIES OF PERIODIC ORBITS IN THE CRTBP
In this model for the 1:1 MMR, the asteroid moves on a circular orbit (e_ a=0) with a_ a=1, and the dust particle moves on an eccentric orbit (e_ d>0) with a_ d≈ 1. In the following sections, we present the families of 2D and 3D periodic orbits in the CRTBP.
§.§ 2D-CRTBP
In Fig. <ref> we present the families of periodic orbits in the 2D-CRTBP on the (x_ d,e_ d) plane, namely the QS and HS orbits. In Fig. <ref>, we present the families I (stable branch) and II (unstable branch) on the (a_ d,e_ d) plane and provide a comparison with μ=0.001 to highlight the fact that we approach the unperturbed case (μ→ 0). Symbol I corresponds to family f which consists of stable (blue coloured) periodic orbits, where the asteroid and the dust particle are in anti-alignment (Δϖ=180^∘ and θ=0^∘), while symbol II denotes family b of unstable (red coloured) periodic orbits, where the bodies are in alignment (Δϖ=0^∘ and θ=180^∘). We note that the two branches shown for each family are equivalent and differ only in the positive or negative direction (namely ẏ_ d>0 or ẏ_ d<0) of the orbit for which the Poincaré surface of section at y_ d=0 was chosen. More particularly, we obtain the configurations shown in Table <ref>, which are equivalent in pairs, that is, I with I' and II with II', at t=0 and at t=T/2. We provide them all here because the continuation process in the 3D-CRTBP, 2D- and 3D-ERTBP (which will be shown in a future study) was computationally easier when we started from the same bifurcation point but with a different cross section, since cusps appear in the rotating frame of reference.
In Fig. <ref> we show the family of asymmetric periodic orbits, which consists of TP orbits, and starts from L_4 at θ=60^∘ and ends at a symmetric periodic orbit of HS type at θ=180^∘, where the linear stability changes. We also present the mirror family, which starts from L_5 at θ=300^∘. These families are both horizontally and vertically stable (solid blue lines).
§.§ 3D-CRTBP
We considered the families of QS, HS, and TP orbits presented in the 2D-CRTBP and computed their vertical stability. Two types of vcos generate 3D periodic orbits with two different symmetries <cit.>, namely with respect to the xz-plane or the x-axis of the rotating frame Gxyz. We call the generated families F and G, respectively, and use a superscript to denote the planar family to which the vco (bifurcation point) belongs. We add a hat to represent the respective vcos. For example, vco F^I belongs to the 2D family I and generates the 3D family F^I of xz-symmetric (magenta vco) periodic orbits. In Table <ref>, we provide the eccentricity values for each vco. The family of asymmetric periodic orbits (TP orbits) is vertically stable.
The 3D families of unstable orbits, which stem from the unstable family II' (or II) of HS orbits in the 2D-CRTBP, have not been presented before for any value of μ. The same holds for the vcos that generate them. For the family F^I of stable QS orbits, we find no significant change in the location of the vco as μ→ 0, that is, as we approach the unperturbed case (m_ a=m_ d=0) (see the vcos (magenta dots) in panel a of Fig. <ref> and compare e.g. the F^I (or F^I') with the B_cs points in Table 1 in <cit.>). Nonetheless, we note that as μ→ 0, the QS domain becomes reachable by low-eccentricity orbits and the trajectories approach the minor primary, that is, the asteroid in our case, while for μ=0.001, this domain exists for e_ d>0.18 (see the top panel of Fig. <ref>).
In Fig. <ref> we present the three 3D families of periodic orbits in the 3D-CRTBP on the (e_ d,i_ d) plane. The family F^I consists of stable xz-symmetric periodic orbits (QS orbits) up to i_d, max=30.7^∘. Both F^II' and G^II' have only unstable periodic orbits (HS orbits) with an xz- and x-symmetry, respectively. In Table <ref> we present the configuration of each family. In Fig. <ref>, each family is shown in more detail in (x_ d,ẏ_ d, z_ d) space or (x_ d,ẏ_ d, ż_ d) space, depending on the symmetry of the periodic orbits together with the projections of the planar families and the vcos in the 2D-CRTBP (at z_ d=0 or ż_ d=0).
§ STABLE AND CHAOTIC QS, HS, AND TP ORBIT REGIMES IN THE 3D CRTBP
We identified the regular domains in phase space on the DS maps by monitoring the libration of the resonant angle θ. When it librated about 0^∘, 180^∘ or other values, we labelled the region QS, HS, or TP_4 (or TP_5), respectively. In the pale regions, the resonant angle θ rotates. All types of orbits in co-orbital dynamics are revealed in the DS maps with the boundaries of different domains having been delineated <cit.>. The breakdown of the phase space was organised as follows.
First, we chose three 3D periodic orbits (one from each family, namely F^I, F^II' and G^II') with mutual inclinations, Δ i, equal to 10^∘, 30^∘, and 50^∘ (Figs. <ref>-<ref>). Then, we varied some of the initial conditions to create the grids on the (e_ d,i_ d), (ω_ d,e_ d), and (M_ d,e_ d) plane, while keeping the rest fixed. The values of the angles are given in Table <ref>, and the orbital elements of the selected periodic orbits are also provided in each figure.
In Fig. <ref>, we chose three different 3D periodic orbits in which the dust particle had an inclination equal to 10^∘. In the top row, the periodic orbit that guided the search is stable and belongs to the F^I family with a_ d=1+1.887× 10^-11, e_ d=0.718, and ΔΩ =270^∘. The areas around the stable 3D orbit are populated with regular orbits, and chaoticity becomes apparent for e_ d>0.7 in the top left panel, where the planar family, I, becomes vertically unstable (dashed line in Fig. <ref>). In the middle and bottom rows, the periodic orbits that guided the search are unstable and belong to the F^II' and G^II' families, respectively. The xz-symmetric family has initial conditions a_ d=1-3.4903× 10^-11, e_ d=0.722, and ΔΩ =90^∘, and the x-symmetric family has a_ d=1-2.7865× 10^-11, e_ d=0.923, and ΔΩ =0^∘. The regions around both of them are dominated by irregular orbits. However, the remaining regular domains QS and TP_4 (or TP_5) become apparent farther away from this periodic orbit with changing angles.
For Fig. <ref>, we performed the same search, but chose three different 3D periodic orbits in which the dust particle had an inclination equal to 30^∘. We only present grids on the (ω_ d,e_ d) and (M_ d,e_ d) planes, as the variation in the (e_ d,i_ d) plane was qualitatively similar to that presented in the left column of Fig. <ref>. In the left column of Fig. <ref>, the stable 3D periodic orbit has a_ d=1+2.366× 10^-11, e_ d=0.881, and ΔΩ =270^∘. This 3D periodic orbit is shown in panel a of Fig. <ref>. In the middle column, the 3D unstable orbit has a_ d=1-3.291× 10^-11, e_ d=0.761, and ΔΩ =90^∘, and in the right column, the initial conditions are a_ d=1-2.449× 10^-11, e_ d=0.904, and ΔΩ =0^∘.
For Fig. <ref>, we performed the same search, but chose two unstable 3D periodic orbits in which the dust particle had an inclination equal to 50^∘ with a_ d=1-3.076× 10^-11, e_ d=0.845, and ΔΩ =90^∘ (left column) and a_ d=1-2.685× 10^-11, e_ d=0.858, and ΔΩ =0^∘ (right column). The latter 3D periodic orbit is shown in panel b of Fig. <ref>. The chaotic domains created around the 3D HS orbits span greater areas with increasing inclination and reach even lower eccentricity, e_ d, values for the dust particle.
In Fig. <ref> we created DS maps on the (ω_ d,M_ d) plane around three 2D asymmetric periodic orbits with e_ d=0 (left panel), e_ d=0.5 (middle panel), and e_ d=0.9 (right panel). The regular orbits dominate as the angles vary as long as e_ d does not approximate the bifurcation point on the HS orbit, that is, at 0.917.
In Fig. <ref> we varied the semi-major axis and the eccentricity of the dust particle on the (a_ d,e_ d) plane. In the top and middle rows, we used the same six periodic orbits as in Figs. <ref> and <ref> with Δ i=10^∘ and Δ i=30^∘, respectively. Therefore, islands of stability around the QS orbits were created in the DS maps. In the bottom row, we explored the spatial neighbourhood of the planar vertically stable asymmetric TP orbit close to L_4 (or L_5) for Δ i=10^∘ and three different nodal differences, namely ΔΩ=0^∘, 90^∘ and 180^∘. These numerical results and the boundaries of the domains, namely the libration of the resonant angle, agree with those presented by <cit.>.
§ SETUP OF THE N-BODY SIMULATIONS
After computing periodic orbits and creating DS maps, we performed N-body simulations. In order to accurately compare these simulations to the DS maps, care must be taken with regard to rescaling <cit.>.
The setup of the N-body simulations requires that the data extracted from the families of periodic orbits and the DS maps are transformed into real units. In this case, the results presented here can be applied to different planetary system configurations. For instance, different units of time, masses, and distances can be used. We needed to set a normalisation with respect to the masses assumed for the computation of the periodic orbits in our case. The details of the invariance of the equations of motion we used to compute the periodic orbits and the appropriate scaling factors were explained by <cit.> for the 2D-CRTBP and by <cit.> for the 2D-ERTBP and the two 3D-RTBPs.
We performed a scaling in our simulations with regard to the semi-major axis of the asteroid and the dust particle, so that
a_i^ (N) = a_i ζ^1/3,
where ζ≡(m_ WD + m_ a)/(1 m_⊙) is the scaling factor, and N represents the scaled orbital elements used in the N-body simulations. Therefore, the values of the semi-major axes that were imported to the N-body integrator were multiplied by the factor 0.843432665311676. Time and the remainder of the orbital elements (e.g. the eccentricities and angles) remained the same.
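As a check of the numerical factor quoted above, note that ζ ≈ 0.6 for the adopted masses, so the rescaling reduces to a cube root; a one-line sketch (with the ζ value assumed as in the text):

zeta = 0.6                      # (m_WD + m_a)/(1 m_sun), dominated by the 0.6 m_sun white dwarf
print(zeta ** (1.0 / 3.0))      # 0.843432665..., the factor applied to the semi-major axes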
For all of our N-body simulations, we used the IAS15 integrator in the REBOUND simulation package <cit.>. This integrator uses an adaptive time step and conserves energy sufficiently well for our purposes. In order to model a realistically observable timescale, we performed each simulation over 10 years, corresponding to roughly 20,000 orbits of the minor planet. We output data snapshots every 0.05 years, but monitored if and when collisions occurred at every time step. Although our test particles were point masses, our minor planets were not: their radius was 200 km.
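A minimal sketch of such a setup with the REBOUND Python interface is given below. The WD mass, asteroid semi-major axis and radius, integrator choice, snapshot cadence, and 10 yr time span follow the description above, whereas the dust orbital elements and the collision handling are illustrative assumptions rather than the exact initial conditions used in the paper.

import rebound

sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')        # sets G consistently with these units
sim.integrator = "ias15"                # adaptive-time-step integrator
sim.collision = "direct"                # flag dust-asteroid collisions (REBOUND halts by default)

sim.add(m=0.6)                          # white dwarf
sim.add(m=2.5e19 / 1.989e30,            # asteroid mass converted to solar masses
        a=0.0054, e=0.0,
        r=200.0 / 1.496e8)              # 200 km radius expressed in AU
sim.add(m=0.0, a=0.0054, e=0.05,        # dust test particle; e, inc, M here are assumed values
        inc=0.17, M=3.0)                # angles in radians
sim.move_to_com()

periods = []
for k in range(201):                    # snapshots every 0.05 yr over 10 yr
    sim.integrate(0.05 * k)
    periods.append(sim.particles[2].P)  # osculating orbital period of the dust particle

print("max period deviation [s]:", (max(periods) - min(periods)) * 3.156e7)

In practice, a grid of such runs over (a_d, e_d, M_d), with the semi-major axes rescaled as described above, yields the maximum period deviations reported below.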
Limiting the integration timescale to 10 years also prevented longer-term forces on the dust particles from becoming important. Several shorter-term forces might also act on the particles. We discuss how these can be contextualized within the outputs of our simulations in Sect. <ref>. These forces typically have a greater effect the smaller the orbital pericentre. In this context, the most consequential results are those with e_ d≈ 0.0, whereas those with more limited applicability have e_ d≳ 0.8. Nevertheless, we sampled e_ d values up to 0.9 in many cases to provide a direct comparison with the DS maps.
§ RESULTS OF THE N-BODY SIMULATIONS
We had a large number of parameters to select in order to report our results. We chose a parameter that has a direct observable link: the maximum variation in the orbital period of the dust particle. The orbital period is directly measured and is independent of other parameters. Transit photometry in these WD systems can be measured on the scale of seconds <cit.>.
We wished to determine the extent of this orbital period variation and how it relates, if it does, to periodic orbits and the structure seen in the DS maps. Because of the computational expense of integrating over a grid of initial conditions with an accurate time-variable integrator such as IAS15, we selected a few parts of a few parameter grids over which to perform simulations. These are outlined below. In all cases, each parameter was sampled uniformly.
For some parameter combinations, some systems become unstable. These instabilities are manifested entirely through collisions between a dust particle and the asteroid. In our simulations, no dust particles escape the system and no dust particles collide with the WD. Unstable systems are indicated by red crosses in the plots.
Within the set of six orbital parameters, the only parameter that determines the location of the dust particle rather than setting the shape of its orbit is M_ d. In some cases, in order to provide a direct comparison to the DS maps, we adopted the value of M_ d that corresponds to a particular family of periodic orbit, or DS map. However, in some other cases, we sampled a range of M_ d values in order to determine the maximum period deviation independent of M_ d, and we report only those that generated the maximum orbital period deviation.
§.§ Variation in e_ d versus a_ d
Fig. <ref> shows a 91 × 91 grid of a_ d = a_ a∓ 68.135 km and e_ d = 0.0001-0.1000 (left column) and e_ d = 0.0001-0.9000 (right column). For each combination of (a_ d, e_ d), we performed 18 simulations with M_ d uniformly sampled across its entire range. The maximum orbital period deviation occurs along the boundaries of the island of stability that is populated by low-eccentricity orbits with ΔΩ=270^∘ and Δ i=10^∘.
A similar behaviour is shown in Fig. <ref>, but guided by the 3D HS periodic orbit with ΔΩ=90^∘ and Δ i=10^∘. Again, the maximum orbital period deviation takes place along the boundaries of the island where ΔΩ=90^∘.
Fig. <ref> shows a grid guided by the 3D HS periodic orbit with ΔΩ=0^∘ and Δ i=10^∘. Again, the maximum orbital period deviation occurs along the boundaries of the respective island of stability.
Fig. <ref> shows a grid guided by a TP periodic orbit with ω_ d=60^∘ and Δ i=10^∘. In contrast to the previous simulations, we performed a single simulation with M_ d = 0^∘. Hence, the value of the maximum orbital period deviation observed in the region populated by TP orbits was lower.
§.§ Variation in e_ d versus ω_ d
Fig. <ref> shows a 91 × 91 grid of ω_ d = 0^∘-360^∘ and e_ d = 0.0001-0.9000 guided by the 3D QS periodic orbit with ΔΩ=270^∘ and Δ i=10^∘. For each combination of (ω_ d, e_ d), we performed a single simulation with M_ d = 180^∘ (left column) and 18 simulations with M_ d uniformly sampled across its entire range (right column). When M_ d = 180^∘, the correspondence between the DS map and the N-body simulations is excellent, although the maximum orbital period deviation observed was negligible in this exploration. This deviation did not increase even when a uniform sample was adopted for M_ d.
§.§ Variation in i_ d versus e_ d
Fig. <ref> shows a 181 × 181 grid of e_ d = 0.0001-0.8000 and i_ d = 0^∘-90^∘ guided by the HS periodic orbit with Ω_ d=90^∘ and Δ i=50^∘, and we performed a single simulation with M_ d = 180^∘. The maximum orbital period deviation observed around the 3D HS orbits was negligible, as the semi-major axis of the dust was fixed and equal to that of the periodic orbit.
§ DISCUSSION
Accurately modelling WD debris discs requires incorporating a vast variety of physical processes, including gas-dust interactions, gas drag, aeolian erosion, Ohmic heating, Lorentz drift, and external perturbations. We have isolated the effect of an important and fundamental force that acts on all the discs: the effect of gravity, and in the likely co-orbital scenario with a disrupted asteroid.
Nevertheless, there are some key physics that we highlight below and relate to our results.
§.§ General relativity
Similar to Newtonian gravity, general relativity affects all objects orbiting WDs. Furthermore, the effect is greater than that of most known main-sequence extrasolar systems because WD debris discs are so compact.
Over the course of a single orbit, general relativity changes each orbital element <cit.>. However, these changes all average out to zero, except for the change in the argument of pericentre. Hence, for circular orbits, the effect of general relativity is negligible. For non-circular orbits, in WD debris discs, ω_ d cycles across its entire range in a time of approximately
107 yr( M_⋆/0.60 M_⊙)^-3/2( a_ d/1.00R_⊙)^5/2( 1-e_ d^2 ).
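This scaling is straightforward to evaluate; a small helper with the quoted normalisations as defaults illustrates how quickly ω_ d cycles for eccentric dust (the function is only a convenience wrapper around the expression above):

def gr_apsidal_cycle_yr(M_star=0.60, a_d=1.00, e_d=0.0):
    # M_star in solar masses, a_d in solar radii; returns the full cycle time in years
    return 107.0 * (M_star / 0.60) ** -1.5 * (a_d / 1.00) ** 2.5 * (1.0 - e_d ** 2)

print(gr_apsidal_cycle_yr(e_d=0.0))     # ~107 yr for a near-circular orbit
print(gr_apsidal_cycle_yr(e_d=0.9))     # ~20 yr: the cycle shortens for eccentric dust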
As a result, we might imagine a dust particle traversing a horizontal path across the plots in Figs. <ref>-<ref>. However, because the maximum period deviation in the entire phase space is less than a few seconds, this traversal would probably not be observable, regardless of the manner in which it takes place.
§.§ Sublimation
The creation of gas through dust sublimation generates gas-dust interactions, which are still poorly understood and are difficult to model analytically. Hence, our results are best applied to dust alone.
The location in which dust sublimates depends on the temperature of the WD <cit.>, which in turn depends on how long it has been a WD. This time span is commonly known as its cooling age. WDs have been observed with cooling ages of up to ≈10 Gyr <cit.>. Old WDs hence allow the orbital pericentre of a dust particle to be far inwards of 1R_⊙ without sublimating.
In order to quantify this effect, we computed the maximum value of e_ d allowed for a dust particle to survive as dust before it is sublimated as a function of WD cooling age. To do this, we used the same prescription as in <cit.>, and we report our results in Fig. <ref>. This prescription involves combining equations for the blackbody radiation of the WD, the luminosity of a WD as a function of cooling age, and a relation between effective temperature, distance, and sublimation temperatures of three different materials. The plot considers three different potential compositions of the dust and shows that in many cases, values of e_ d≈ 0.8-0.9 may realistically be adopted.
§.§ Poynting-Robertson drag
Even when a dust particle is not sublimated, the radiation from the WD will drag the particle inwards by Poynting-Robertson drag. This drag occurs for all dust particles and for pebbles, boulders, and any other objects that would be considered as test particles in this study. Because the speed at which Poynting-Robertson drag acts strongly depends on the size of the particle, the relevance of this effect in our study in turn strongly depends on our assumption of the size of the test particle.
We can estimate the inward drift of a particle through the classic <cit.> formula for the semi-major axis variation due to Poynting-Robertson drag,
da/dt ≈ - [3L_⋆(2 + 3e_ d^2)] / [16π c^2 R_ d ρ_ d a_ d (1 - e_ d^2)^3/2],
and we assumed that the particle density is ρ_ d = 2 g/cm^3.
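The drag rate can be evaluated directly from this expression. In the sketch below, the WD luminosity is not given in the text and is therefore an assumed placeholder (10^-3 L_⊙), so the printed rates are order-of-magnitude illustrations rather than a reproduction of the figure; the density and orbital scale follow the values adopted above.

import math

L_sun, c, R_sun = 3.828e26, 2.998e8, 6.957e8   # W, m/s, m
L_star = 1.0e-3 * L_sun        # assumed WD luminosity (depends strongly on cooling age)
rho_d = 2000.0                 # kg/m^3, particle density adopted in the text
a_d = 1.0 * R_sun              # m, orbital scale of the debris
e_d = 0.0

for R_d in (0.01, 0.1, 1.0):   # particle radii in metres (illustrative sizes)
    dadt = -3.0 * L_star * (2.0 + 3.0 * e_d**2) / (
        16.0 * math.pi * c**2 * R_d * rho_d * a_d * (1.0 - e_d**2)**1.5)
    print(f"R_d = {R_d} m: da/dt = {dadt * 3.156e7 / 1000.0:.3e} km/yr")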
Figure <ref> illustrates the drag rate for four different particle sizes that are large enough to be more appropriately classed as boulders. Over a 10 yr time span, even the largest boulders would be dragged inwards to a greater extent than their orbit would be gravitationally shifted due to a minor planet in a stable co-orbital configuration. We therefore suggest that observational evidence of a period shift is much more likely to arise from Poynting-Robertson drag than from co-orbital dynamics.
We can further deduce that for e_ d≈ 0, based on the x-axes of the N-body simulation plots for Figs. <ref>-<ref>, a dust particle that starts in a stable co-orbital configuration with a minor planet can be dragged inwards by ≈20-60 km into a region of gravitational instability. In this case, gravity might ultimately generate a collision between the particle and minor planet, even though the trigger was radiative drag.
§ CONCLUSIONS
Motivated by asteroids and dust in co-orbital configurations around WDs, we computed the families of symmetric and asymmetric periodic orbits in the 2D and 3D CRTBP for the WD–asteroid–dust particle configuration with μ≈ 10^-11 in the 1:1 MMR. The 3D families with unstable periodic orbits presented here are novel. In this configuration, QS orbits were additionally found as e_ d→ 0. Nonetheless, we found that the planar families exhibited attributes qualitatively similar to those computed for μ=0.001 by <cit.> and <cit.>, that is, their linear stability and their bifurcation points, even though we approached the unperturbed case.
We methodically explored the phase space by choosing specific 3D symmetric and 2D asymmetric periodic orbits and monitored the libration of the resonant angle. All types of orbits in co-orbital dynamics were revealed via the DS maps, in which the boundaries of different domains were delineated.
All types of orbits in the 1:1 MMR, namely QS, HS, and TP orbits, were simulated over a timescale of 10 yr with N-body simulations. In all cases, an observable orbital period deviation was exhibited in the low-eccentricity regime e_ d<0.1 within each domain and nodal difference, while the maximum deviation (about 3 seconds) occurred for ΔΩ=90^∘, when the dust drifts away from the periodic orbit (a_ d≠ 1) and lies almost at the boundaries of the stability domain.
Before circularising, an asteroid approaches a WD on an eccentric orbit. Hence, a future useful extension of this work may investigate the three-body problem with an eccentric asteroid at a similarly extreme mass ratio. Even so, whether circular or eccentric, a companion of a WD with transiting dust may have another value of μ when a planetesimal is considered. The 2D and 3D periodic orbits play a crucial role in unravelling and determining possible regimes with expected transits <cit.>[Likewise for possible detections in the GTBP, <cit.>]. How large μ needs to be in order for the co-orbital dynamics to become dominant, that is, with an orbital period variation about 3D periodic orbits, namely a detection, remains to be explored.
We thank the anonymous reviewer for their astute comments and careful reading which improved the manuscript. The work of KIA was supported by the University of Padua under Grant No. BIRD232319. Results presented in this work have been produced using the Aristotle University of Thessaloniki (AUTh) High Performance Computing Infrastructure and Resources.
|
http://arxiv.org/abs/2409.03287v1 | 20240905065052 | On the $\mathcal{ABS}$ spectrum and energy of graphs | [
"Swathi Shetty",
"B. R. Rakshith",
"Sayinath Udupa N. V"
] | math.CO | [
"math.CO",
"05C50, 05C09, 05C35, 05C92"
] |
On the 𝒜ℬ𝒮 spectrum and energy of graphs
Swathi Shetty^1, B. R. Rakshith^*,2, Sayinath Udupa N. V. ^3
Department of Mathematics, Manipal Institute of Technology
Manipal Academy of Higher Education
Manipal, India – 576104
[email protected]^1
[email protected]; [email protected]^*,2
[email protected] ^3.
§ ABSTRACT
Let η_1≥η_2≥⋯≥η_n be the eigenvalues of the 𝒜ℬ𝒮 matrix. In this paper, we characterize connected graphs with 𝒜ℬ𝒮 eigenvalue η_n>-1. As a result, we determine all connected graphs with exactly two distinct 𝒜ℬ𝒮 eigenvalues. We show that a connected bipartite graph has three distinct 𝒜ℬ𝒮 eigenvalues if and only if it is a complete bipartite graph. Furthermore, we present some bounds for the 𝒜ℬ𝒮 spectral radius (resp. 𝒜ℬ𝒮 energy) and characterize the extremal graphs. Also, we obtain a relation between the 𝒜ℬ𝒞 energy and the 𝒜ℬ𝒮 energy. Finally, the chemical importance of the 𝒜ℬ𝒮 energy is investigated, and it is shown that the 𝒜ℬ𝒮 energy is useful in predicting certain properties of molecules.
AMS Classification: 05C50, 05C09, 05C35, 05C92.
Key Words: 𝒜ℬ𝒮 matrix, spectral radius, 𝒜ℬ𝒮 energy, QSPR analysis.
§ INTRODUCTION
Throughout this article, we assume that G is a graph with vertex set V(G) and edge set E(G), where V(G)={v_1,v_2,…,v_n} and |E(G)|=m. If two vertices v_i and v_j are adjacent, then we write v_i∼ v_j; otherwise, v_i≁v_j. We denote the degree of the vertex v_i by d(v_i) or simply d_i.
The adjacency matrix of G is one of the most well-studied graph matrices. It is denoted by A(G) and defined as A(G)=[a_ij]_n× n, where a_ij=1 if v_i∼ v_j and a_ij=0 otherwise.
If λ_1≥λ_2≥⋯≥λ_n are the eigenvalues of A(G), then the sum ∑_i=1^n |λ_i| is called the energy of the graph G and is denoted by ℰ(G). The concept of graph energy, introduced by Gutman in 1978, has gradually attracted the attention of mathematicians and chemists. In recent years, extensive research on graph energy has been carried out; for recent results, see <cit.> and the book "Graph Energy" by Li, Shi, and Gutman <cit.>. The study of graph energy has been extended to various graph matrices, including the (signless) Laplacian matrix, the distance matrix, and several degree-based and distance-based graph matrices. More than 50 graph energies have been defined so far; see <cit.> for more details.
A topological index is a numerical quantity derived from the graph's structure. In the literature, many topological indices have been defined and used as molecular descriptors (see <cit.>). Most degree-based topological indices
can be represented as TI(G)=∑_v_i∼ v_jℱ(d_i,d_j), where ℱ(d_i,d_j)=ℱ(d_j,d_i). As examples, we have the first Zagreb index ℱ(d_i,d_j)=d_i+d_j, the second Zagreb index ℱ(d_i,d_j)=d_id_j, the Randić index (R(G)) ℱ(d_i,d_j)=1/√(d_id_j), the harmonic index (H(G)) ℱ(d_i,d_j)=2/(d_i+d_j), the sum-connectivity index (χ(G)) ℱ(d_i,d_j)=1/√(d_i+d_j), the atom-bond connectivity index (𝒜ℬ𝒞(G)) ℱ(d_i,d_j)=√((d_i+d_j-2)/(d_id_j)), the atom-bond sum-connectivity index (𝒜ℬ𝒮(G)) ℱ(d_i,d_j)=√((d_i+d_j-2)/(d_i+d_j)), etc.
For a topological index TI(G), Das et al. <cit.> defined a general extended adjacency matrix as 𝒯=(t_ij)_n× n, where t_ij=ℱ(d_i,d_j) if v_i∼ v_j or 0, otherwise. The sum of absolute values of all the eigenvalues of the matrix 𝒯 is called the energy of the general extended adjacency matrix 𝒯. In <cit.>, Das et al. obtained several lower and upper bounds for the energy of the matrix 𝒯, and deduced several known results about degree-based energies of graphs.
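To make the common edge-sum form above concrete, the following plain numpy sketch evaluates a few of these indices for a small example graph supplied as an adjacency matrix (the graph and the helper are illustrative and not tied to any particular result below):

import numpy as np

def edge_sum_index(A, F):
    # TI(G) = sum of F(d_i, d_j) over all edges v_i ~ v_j
    deg = A.sum(axis=1)
    n = A.shape[0]
    return sum(F(deg[i], deg[j]) for i in range(n) for j in range(i + 1, n) if A[i, j])

A = np.array([[0, 1, 1, 0],    # a triangle with one pendant vertex attached
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

randic = edge_sum_index(A, lambda x, y: 1.0 / np.sqrt(x * y))
harmonic = edge_sum_index(A, lambda x, y: 2.0 / (x + y))
abs_index = edge_sum_index(A, lambda x, y: np.sqrt((x + y - 2.0) / (x + y)))
print(randic, harmonic, abs_index)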
The 𝒜ℬ𝒮 index was introduced recently by Ali et al. in <cit.>. It combines ideas from both the sum-connectivity index and the atom-bond connectivity index. Bounds on the 𝒜ℬ𝒮 index for the classes of (molecular) trees and general graphs are obtained in <cit.>, and the extremal graphs are also classified.
Chemical applicability of the 𝒜ℬ𝒮 index is demonstrated in <cit.>. For more details about the 𝒜ℬ𝒮 index we refer to the survey article <cit.> by Ali et al. The 𝒜ℬ𝒮 matrix of G is defined to be the matrix 𝒜𝒮(G)=(w_ij)_n× n, where w_ij=√((d_i+d_j-2)/(d_i+d_j)) if v_i∼ v_j and w_ij=0 otherwise. We denote the eigenvalues of 𝒜𝒮(G) by η_1≥η_2≥⋯≥η_n. The sum ∑_i=1^n|η_i| is called the 𝒜ℬ𝒮 energy of G and is denoted by ℰ_ABS(G). The study of the properties of the 𝒜ℬ𝒮 matrix began recently. In <cit.>, it is proved that the 𝒜ℬ𝒮 Estrada index (∑_i=1^ne^η_i) of trees is maximum for the star graph and minimum for the path graph. Also, in <cit.>, the authors proved that the 𝒜ℬ𝒮 spectral radius of a tree is maximum for the star graph and minimum for the path graph. The chemical importance of the 𝒜ℬ𝒮 Estrada
index and the 𝒜ℬ𝒮 spectral radius are investigated separately in <cit.>, and it is shown that the 𝒜ℬ𝒮 Estrada index and 𝒜ℬ𝒮 spectral radius can be useful in predicting certain properties of molecules.
Motivated by this, in Section 2 of the paper, we characterize connected graphs with least 𝒜ℬ𝒮 eigenvalue η_n>-1. As a result, we determine all connected graphs with exactly two distinct 𝒜ℬ𝒮 eigenvalues. Further, we show that a connected bipartite graph has three distinct 𝒜ℬ𝒮 eigenvalues if and only if it is a complete bipartite graph. In Sections 3 and 4, we present some bounds for the 𝒜ℬ𝒮 spectral radius (resp. 𝒜ℬ𝒮 energy) and characterize the extremal graphs. Also, we obtain a relation between the 𝒜ℬ𝒞 energy and the 𝒜ℬ𝒮 energy. In Section 5, the chemical importance of the 𝒜ℬ𝒮 energy is investigated and it is shown that the 𝒜ℬ𝒮 energy is useful for predicting the boiling point and the pi-electron energy of benzenoid hydrocarbons.
As usual, the complete graph, path graph and complete bipartite graph on n vertices are denoted by K_n, P_n and K_n_1,n_2 (n_1+n_2=n), respectively.
§ PROPERTIES OF ABS EIGENVALUES
The following proposition is one of the basic properties of 𝒜ℬ𝒮 eigenvalues. We omit its proof as it is straightforward.
Let G be a graph on n vertices. Let η_1≥η_2≥…≥η_n be its 𝒜ℬ𝒮- eigenvalues. Then
∑_i=1^nη_i=0, ∑_i=1^n η_i^2=2(m-H(G)) and ∑_1≤ i<j≤ nη_iη_j=H(G)-m.
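For readers who wish to experiment with these quantities, the following small Python sketch (an illustration added here, not part of the original text) builds the 𝒜ℬ𝒮 matrix of a sample graph with networkx and numpy and verifies the identities of the proposition; the Petersen graph is chosen purely as an example.

import networkx as nx
import numpy as np

G = nx.petersen_graph()                     # any simple graph can be used here
n, m = G.number_of_nodes(), G.number_of_edges()
deg = dict(G.degree())

AS = np.zeros((n, n))                       # the ABS matrix AS(G)
for i, j in G.edges():
    AS[i, j] = AS[j, i] = np.sqrt((deg[i] + deg[j] - 2) / (deg[i] + deg[j]))

eta = np.linalg.eigvalsh(AS)                # ABS eigenvalues eta_1, ..., eta_n
H = sum(2.0 / (deg[i] + deg[j]) for i, j in G.edges())   # harmonic index H(G)

print("sum eta_i   =", eta.sum())                          # approximately 0
print("sum eta_i^2 =", (eta ** 2).sum(), "and 2(m - H(G)) =", 2 * (m - H))
print("ABS energy  =", np.abs(eta).sum())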
Let M be a Hermitian matrix of order n. We denote the eigenvalues of M by θ_1(M)≥θ_2(M)≥⋯≥θ_n(M). The following lemma is the well-known Cauchy's interlacing theorem.
<cit.>
Let M be a symmetric matrix of order n and let M_k be its leading principal k × k
submatrix. Then θ_n-k+i(M)≤θ_i(M_k)≤θ_i(M) for i=1,2,…,k.
Let G be a graph on n vertices. Then the 𝒜ℬ𝒮 eigenvalues of G are all equal if and only if G≅ p K_2∪ qK_1, where p+q=n.
Suppose that the 𝒜ℬ𝒮 eigenvalues of G are all equal. Then by Proposition <ref>, ∑_i^n η_i=0, and so the 𝒜ℬ𝒮 eigenvalues of G are zeros. Let H be a component of G. If |V(H)|≥ 3, then there exists a vertex x in H of degree at least two. Let y be a vertex of H adjacent to x. Then the principal minor of 𝒜𝒮(G) corresponding to the vertices x and y is non-zero. Thus, by Cauchy's interlacing theorem (see Lemma <ref>), the least eigenvalue of 𝒜𝒮(G) is non-zero, a contradiction. Hence, |V(H)|≤ 2. Therefore, G≅ pK_2∪ qK_1, where p+q=n.
Conversely, if G≅ pK_2∪ qK_1, then all the entries of 𝒜𝒮(G) are zeros. Thus, η_1=η_2=…=η_n=0.
The diameter of a graph G is the maximum distance between any pair of vertices in G and it is denoted by diam(G). In the following theorem, we characterize connected graphs with η_n(G)> -1.
Let G be a connected graph on n vertices. Then η_n(G)> -1 if and only if G≅ K_n or G≅ P_3.
Assume that diam(G)≥ 2 and G≇ P_3. Let x-y-z be an induced path in G.
Then either d(y)≥ 3, d(x)≥ 2 or d(z)≥ 2. Let 𝒜𝒮[p,q,r] denote the principal submatrix of 𝒜𝒮(G) corresponding to the vertices p,q and r, where p-q-r is an induced path in G. Let θ_1[p-q-r]≥θ_2[p-q-r]≥θ_3[p-q-r] be the eigenvalues of 𝒜𝒮[p,q,r]. Then
θ_1[p-q-r] = √(2) √(1-1/(d(q)+d(r))-1/(d(q)+d(p)));
θ_2[p-q-r] = 0;
θ_3[p-q-r] = -√(2) √(1-1/(d(q)+d(r))-1/(d(q)+d(p))).
Also, by Cauchy's interlacing theorem (see Lemma <ref>), η_n(G)≤θ_3[p-q-r].
Case 1: Let d(y)≥ 3, d(x)≥1 and d(z)≥1 (see Fig. <ref>(b)). Then
2(1-1/(d(y)+d(z))-1/(d(y)+d(x)))≥ 1. Thus,
η_n(G)≤θ_3[x-y-z]≤ -1.
Case 2: Let d(y)=2, d(x)≥2 and d(z)≥1. If d(z)≥ 2, then
2(1-1/(d(y)+d(z))-1/(d(y)+d(x)))≥ 1. So, η_n(G)≤θ_3[x-y-z]≤ -1. Otherwise, d(z)=1. Let s be a vertex adjacent to the vertex x in G (see Fig. 1(c)). If G≅ P_4, then η_n(G)=-1.0306<-1. Suppose G≇ P_4; then either d(x)≥3 or d(s)≥ 2.
Subcase 2.1: Let d(x)≥2 and d(s)≥ 2 (see Fig. <ref>(d)). Then
2(1-1/(d(s)+d(x))-1/(d(y)+d(x)))≥ 1. Thus, η_n(G)≤θ_3[s-x-y]≤ -1.
Subcase 2.2: Let d(x)≥ 3 and d(s)=1. Then there exists a vertex r adjacent to the vertex x in G (see Fig. 1(e)). Therefore, 2(1-1/(d(s)+d(x))-1/(d(r)+d(x)))≥ 1. Thus, η_n(G)≤θ_3[s-x-r]≤ -1.
Thus, for a connected graph G≇ P_3 with diam(G)≥ 2, we have η_n(G)≤ -1. Hence G≅ K_n or P_3. Conversely, η_n(K_n)=-√((n-2)/(n-1))>-1 and η_3(P_3)=-0.81649>-1. This completes the proof of the theorem.
Let G be a connected graph of order n≥2. Then η_n=-√((n-2)/(n-1)) if and only if G≅ K_n.
Suppose η_n=-√((n-2)/(n-1)). Then by Theorem <ref>, G≅ K_n or P_3. Since η_3(P_3)=-√(2/3) (≠ -√(1/2)), we have G≅ K_n. The converse part is direct.
Let B_1 and B_2 be two real matrices of the same order. We write B_1≼ B_2 if every entry in B_1 does not exceed the corresponding entry in B_2. The following lemma is useful to prove our next result. <cit.>
Let B_1, B_2 be non-negative matrices of order n. If B_1≼ B_2, then ρ(B_1)≤ρ(B_2). Further, if B_1 is irreducible and B_1≠ B_2, then ρ(B_1)< ρ(B_2).
Let G be a connected graph of order n>2. Then 𝒜𝒮(G) has exactly two distinct eigenvalues if and only if G≅ K_n.
Suppose G has exactly two distinct eigenvalues. Since G is a connected graph of order n> 2, the matrix 𝒜𝒮(G) is irreducible, and thus by Perron-Frobenius theory its largest eigenvalue, i.e., η_1(G) is a simple eigenvalue of G. So, η_1(G) and η_n(G) are the two distinct eigenvalues of G, and η_2(G)=η_3(G)=⋯=η_n(G).
Let B=√((n-2)/(n-1)) A(K_n). Then the eigenvalues of B are (n-1)√((n-2)/(n-1)) and -√((n-2)/(n-1)) with multiplicity n-1. Since 𝒜𝒮(G)≼ B, we have η_1(G)≤ (n-1)√((n-2)/(n-1)) by Lemma <ref>. Therefore, -(n-1)η_n(G)≤(n-1)√((n-2)/(n-1)). That is, η_n(G)≥ -√((n-2)/(n-1))>-1. Hence by Theorem <ref>, G≅ K_n or P_3. So, G≅ K_n because P_3 has three distinct eigenvalues. The converse part is straightforward.
The following lemma is important to prove our next result.
<cit.>
Let C∈ M_n,m, q=min{n,m}, σ_1≥σ_2 ≥…≥σ_q be the ordered singular values of C, and define the Hermitian matrix ℋ=[ 0 C; C^* 0 ]. The ordered eigenvalues of ℋ are -σ_ 1≤…≤ -σ_q≤ 0=…= 0≤σ_q≤…≤σ_1.
Let G be a graph of order n. Let M=(m_ij)_n× n be a non-negative symmetric matrix of order n, where m_ij is positive if and only if v_i∼ v_j. Then the graph G is bipartite if and only if the eigenvalues of the matrix M are symmetric about origin.
Suppose G is a bipartite graph. Then the matrix M can be written as M=[ 0 C; C^T 0 ], where C is a rectangular matrix with non-negative entries. Therefore, by Lemma <ref>, the eigenvalues of M are symmetric about the origin.
Conversely, suppose the eigenvalues of M are symmetric about the origin. Then trace(M^k)=0 for every odd integer k>0. By the definition of the matrix M, one can easily check that trace(M^k)>0 if and only if trace(A^k(G))>0. Now, assume that G contains an odd cycle of length k. Then trace(A^k(G))>0 (see <cit.>). So, trace(M^k)>0, a contradiction. Thus, G must be a bipartite graph.
The following corollary is immediate from the above theorem.
A graph G is bipartite if and only if the eigenvalues of 𝒜𝒮(G) are symmetric about origin.
A connected bipartite graph G of order n>2 has three distinct 𝒜ℬ𝒮 eigenvalues if and only if G is a complete bipartite graph.
From the Perron-Frobenius theorem and Corollary <ref>, η_1(G)>0 and η_n(G) are simple 𝒜ℬ𝒮 eigenvalues of G. Suppose G has a non-zero 𝒜ℬ𝒮 eigenvalue other than η_1(G) and η_n(G). Then by Corollary <ref>, G must have at least four distinct 𝒜ℬ𝒮 eigenvalues. Thus, 0 is an 𝒜ℬ𝒮 eigenvalue of G with multiplicity n-2. Hence, rank(𝒜𝒮(G))=2. Let U and W be the vertex partition sets of G. Let u∈ U and w∈ W. Since G is a connected bipartite graph, the rows corresponding to the vertices u and w are linearly independent. Further, since rank(𝒜𝒮(G))=2 and G is bipartite, the rows corresponding to the vertices belonging to U (respectively, W) are in the linear span of the row vector corresponding to the vertex u (respectively, w). Thus, the vertices in U (respectively, W) share the same vertex neighborhood set.
Assume that w∈ W and w∉ N(u). Then w is not adjacent with any vertices of U. Therefore, w is an isolated vertex of G, a contradiction because G is a connected graph of order at least 3. Thus, N(u)=W and N(w)=U. Therefore, G is a complete bipartite graph.
Conversely, if G is the complete bipartite graph K_n_1,n_2 of order n=n_1+n_2 (≥ 3), then 𝒜𝒮(G)=√(1-2/(n_1+n_2)) A(G). Therefore the 𝒜ℬ𝒮 eigenvalues of G are
√(n_1n_2(1-2/(n_1+n_2))), 0 with multiplicity n-2, and -√(n_1n_2(1-2/(n_1+n_2))). Thus, G has exactly three distinct 𝒜ℬ𝒮 eigenvalues.
§ BOUNDS FOR THE ABS SPECTRAL RADIUS
In this section, we give some bounds for the largest 𝒜ℬ𝒮 eigenvalue η_1(G).
Let G be a graph of order n with maximum degree Δ and minimum degree δ. Then the row sums of 𝒜𝒮(G) are equal if and only if G is a regular graph.
Suppose the row sums of 𝒜𝒮(G) are equal. Let u,v∈ V(G) be such that d(u)=δ and d(v)=Δ. Then
∑_v_i:u∼ v_i√(1-2/(δ+d_i))=∑_v_j:v∼ v_j√(1-2/(Δ+d_j)).
If δ≠Δ, then ∑_v_j:v∼ v_j√(1-2/(Δ+d_j))≥Δ√(1-2/(Δ+δ)) >∑_v_i:u∼ v_i√(1-2/(δ+d_i)), a contradiction to equation (<ref>). Thus, δ=Δ, i.e., G is a regular graph. The converse part is straightforward.
The following theorem gives a lower bound for η_1(G) in terms of order and the atom-bond sum connectivity index of graph G.
Let G be a graph of order n, minimum degree δ and maximum degree Δ. Then
η_1(G) ≥2𝒜ℬ𝒮(G)/n. Further, equality holds if and only if G is a regular graph.
Let x=(x_1,x_2,…,x_n)^T be a vector in ℝ^n. Then
x^T𝒜𝒮(G)x=2∑_v_i∼ v_j√((d_i+d_j-2)/(d_i+d_j))x_ix_j.
Set x=(1,1,…,1)^T. Then by Rayleigh's inequality, η_1(G)≥ x^T𝒜𝒮(G)x/(x^Tx)=2𝒜ℬ𝒮(G)/n, and the equality holds if and only if x=(1,1,…,1)^T is an eigenvector of 𝒜𝒮(G) corresponding to the eigenvalue η_1(G). Suppose η_1(G)=2𝒜ℬ𝒮(G)/n. Then the row sums of 𝒜𝒮(G) are equal. Therefore, by Lemma <ref>, G is a regular graph.
Next, we provide a lower and upper bound for η_1(G) in terms of order, size and the harmonic index of graph G.
Let G be a graph of order n and size m with no isolated vertices. Then
√(2(m-H(G))/n)≤η_1(G)≤√(2(n-1)/n(m-H(G)))
with equality on either side if and only if n is even and G≅ (n/2)K_2.
By Proposition <ref>,
nη_1^2≥∑_i=1^n η_i^2=2(m-H(G)).
Since ∑_i=1^n η_i=0, η_1^2=(∑_i=2^n η_i)^2. Therefore by Cauchy-Schwarz inequality,
η_1^2≤ (n-1)∑_i=2^n η_i^2 and the equality holds if and only if η_2=η_3=⋯=η_n.
So,
nη_1^2 ≤ 2(n-1)(m-H(G)).
Thus from equations (<ref>) and (<ref>) we get the desired inequality.
Suppose n is even and G≅ (n/2)K_2. Then 𝒜𝒮(G) is the null matrix, and so η_1=η_2=…=η_n=0 and H(G)=m. Thus the equalities in (<ref>) hold.
Conversely, suppose the right equality holds. Then from equation (<ref>), η_1^2=η_2^2=…=η_n^2. This implies that G has at most two distinct 𝒜ℬ𝒮 eigenvalues. Therefore, by Theorems <ref> and <ref>, either G≅ K_n (n>2) or n is even and G≅ (n/2)K_2. If G≅ K_n, then η_1^2≠η_2^2, a contradiction. Thus, G≅ (n/2)K_2 for an even integer n. Similarly, the left equality holds if and only if G≅ (n/2)K_2 for an even integer n.
Let G be a graph of order n with minimum degree δ and maximum degree Δ. Then √(δ(δ-1))≤η_1(G)≤√(Δ(Δ-1)). Further, equality holds if and only if G is a regular graph.
From <cit.>, min_1≤ i≤ n{R_i}≤η_1(G)≤max_1≤ i≤ n{R_i}, where R_i is the row sum of the ith row of 𝒜𝒮(G). Moreover, the equality on both sides holds if and only if all the row sums of 𝒜𝒮(G) are equal. Now,
max_1≤ i≤ n{R_i}=max_1≤ i≤ n ∑_v_i:v_i∼ v_j√(1-2/d_i+d_j) ≤√(Δ(Δ-1)),
where the equality holds if and only if one of the components of G is a Δ-regular graph,
and
min_1≤ i≤ n{R_i}= min_1≤ i≤ n ∑_v_i:v_i∼ v_j√(1-2/d_i+d_j) ≥√(δ(δ-1)),
where the equality holds if and only if one of the components of G is a δ-regular graph. Now, by Lemma <ref>, the row sums of 𝒜𝒮(G) are constant if and only if G is regular. Thus,
√(δ(δ-1))≤η_1(G)≤√(Δ(Δ-1)) and the equality on both sides holds if and only if G is a regular graph.
The sum-connectivity matrix of a graph G is the general extended adjacency matrix with ℱ(d_i,d_j)=1/√(d_i+d_j). It is denoted by 𝒮(G). In the following theorem, we give a relation between the spectral radius of 𝒮(G) (denoted ρ(𝒮(G))) and η_1(G).
If G is a connected graph of order n≥2, then
ρ(𝒮(G)) min_
v_i∼ v_j√(d_i+d_j-2)≤η_1(G)≤ρ(𝒮(G))max_
v_i∼ v_j√(d_i+d_j-2).
Further, equality on both sides holds if and only if G is a regular graph or semiregular graph.
The matrices 𝒜𝒮(G) and 𝒮(G) are non-negative and irreducible. Moreover,
𝒮(G)min_
v_i∼ v_j√(d_i+d_j-2)≼𝒜𝒮(G)≼𝒮(G)max_
v_i∼ v_j√(d_i+d_j-2).
Thus, by Lemma <ref>,
ρ(𝒮(G))min_
v_i∼ v_j√(d_i+d_j-2)≤η_1(G)≤ρ(𝒮(G))max_
v_i∼ v_j√(d_i+d_j-2).
Now we consider the equality case. Suppose η_1(G)=ρ(𝒮(G))max_v_i∼ v_j√(d_i+d_j-2). Then by Lemma <ref>, 𝒜𝒮(G)=𝒮(G)max_v_i∼ v_j√(d_i+d_j-2). This implies that, for every edge v_iv_j in G, √(d_i+d_j-2)=max_v_i∼ v_j√(d_i+d_j-2). Therefore, d_i+d_j is constant for any edge v_iv_j of G. Let u (resp. v) be a vertex in G with maximum degree Δ (resp. minimum degree δ). Let u∼ u_1 and v∼ v_1. Then d(u)+d(u_1)≥Δ+δ≥ d(v)+d(v_1). Hence, d_i+d_j=δ+Δ, for all v_iv_j∈ E(G). Now, suppose there exists a vertex w in G such that d(w)∈(δ, Δ). Then G has a component whose vertex degrees are either d(w) or Δ+δ-d(w). Therefore, G is disconnected, a contradiction. Thus, G is Δ-regular or (δ, Δ)-semiregular graph. Similarly, if η_1(G)=ρ(𝒮(G))min_v_i∼ v_j√(d_i+d_j-2), then G is Δ-regular or (δ, Δ)-semiregular graph. The converse part is straightforward.
If G is a connected graph with maximum degree Δ and minimum degree δ, then (2χ(G)/n)√(2δ-2)≤η_1(G)≤√(2(Δ-1)(n-1)R(G)/n), where the equality on the left side holds only if G is regular and the equality on the right side holds only if G is a complete graph.
From <cit.>, we have
2χ(G)/n≤ρ(𝒮(G))≤√((n-1)R(G)/n),
where the left side equality holds only if 𝒮(G) has equal row sums, and the right equality holds only if G is a complete graph. Therefore by Theorem <ref>, we get the desired result.
Now, we obtain an upper bound for the spectral radius of unicyclic graphs using the following lemma.
<cit.>
Let P≥ 0 be an irreducible matrix with an eigenvalue θ. If PY≤ tY for t∈ℝ, Y∈ℝ^n and Y≥ 0, then t≥θ.
Let G be a unicyclic graph of order n and girth at least 5. Then η_1(G)≤√((n-3)(n+1)/(n-1)).
Since G is unicyclic of order n, ∑_i=1^n d_i=2n.
Therefore,
∑_v_j:v_i∼ v_j d_j =2n-d_i-∑_v_j:v_j≁v_i d_j≤ 2n-d_i-∑_v_j:v_j≁v_i 1 =n+1.
That is, ∑_v_j:v_i∼ v_j d_j≤ n+1. Also, d_i+d_j=2n-∑_k=1, k≠ i,j^n d_k. Since G is of girth at least 5, there exist at least 3 vertices v_p, v_q and v_r on the cycle which are distinct from the vertices v_i and v_j. Therefore, d_i+d_j≤ 2n-6-∑_k∉{i,j,p,q,r}d_k≤ n-1. That is, d_i+d_j≤ n-1.
Let Δ_a=max{d_i+d_j: v_iv_j∈ E(G)}, Y={√(d_1),√(d_2),…,√(d_n)} and (𝒜𝒮Y)_i denote the i^th row sum of the matrix (𝒜𝒮)Y.
Then Δ_a≤ n-1, and so
(𝒜𝒮Y)_i=∑_v_j:v_i∼ v_j√(1-2/(d_i+d_j))√(d_j)≤√(1-2/(n-1))∑_v_j:v_i∼ v_j√(d_j).
Now by Cauchy-Schwarz inequality,
∑_v_j:v_i∼ v_j√(d_j)≤√(∑_v_j:v_i∼ v_jd_j)√(d_i)≤√(n+1)√(d_i).
Therefore, (𝒜𝒮Y)_i≤√((n-3)(n+1)/(n-1))√(d_i), and thus
𝒜𝒮Y≤√((n-3)(n+1)/(n-1)) Y.
Hence, by Lemma <ref>, η_1(G)≤√((n-3)(n+1)/(n-1)).
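As a quick numerical sanity check of this bound (an illustration added here, not from the original text), one can attach pendant trees to a cycle of length at least 5 and compare η_1(G) with √((n-3)(n+1)/(n-1)):

import networkx as nx
import numpy as np

G = nx.cycle_graph(6)                                        # girth 6 >= 5
G.add_edges_from([(0, 6), (6, 7), (2, 8), (4, 9), (9, 10)])  # pendant trees keep G unicyclic
n = G.number_of_nodes()
deg = dict(G.degree())
AS = np.zeros((n, n))
for i, j in G.edges():
    AS[i, j] = AS[j, i] = np.sqrt((deg[i] + deg[j] - 2) / (deg[i] + deg[j]))
eta1 = np.linalg.eigvalsh(AS).max()
bound = np.sqrt((n - 3) * (n + 1) / (n - 1))
print(eta1, "<=", bound)                                     # the theorem guarantees eta1 <= bound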
§ PROPERTIES OF ATOM-BOND SUM-CONNECTIVITY ENERGY
In this section, we present some bounds on ℰ_𝒜ℬ𝒮(G).
Let G be a graph with n ≥ 2 vertices and m edges. Then
* ℰ_𝒜ℬ𝒮(G)≥2√(m-H(G)). Equality holds if and only if G≅ pK_n_1,n_2∪ qK_2∪ rK_1, where n_1+n_2>2, p=0 or 1 and p(n_1+n_2)+2q+r=n.
* ℰ_𝒜ℬ𝒮(G)≤√(2n(m-H(G))). Equality holds if and only if G≅ pK_2∪ qK_1, where 2p+q=n.
(i) From <cit.>, we have ℰ_𝒜ℬ𝒮(G)≥√(2trace(𝒜𝒮^2(G))) and the equality holds if and only if η_1=-η_n and η_2=η_3=⋯=η_n-1=0. Since trace(𝒜𝒮^2(G))=2(m-H(G)), we get ℰ_𝒜ℬ𝒮(G)≥2√(m-H(G)). Suppose η_1=-η_n and η_2=η_3=⋯=η_n-1=0. Then G is bipartite (see Corollary <ref>). Also, if H is a component of G, then 𝒜𝒮(H) has either two or three distinct eigenvalues, or all its eigenvalues are equal to 0. Thus, from Theorems <ref>, we get H≅ K_n_1,n_2 with n_1+n_2>2, K_2 or K_1. Furthermore, if K_n_1,n_2 (n_1+n_2>2) is a component of G, then all other components of G are either K_2 or K_1. Otherwise η_2>0, a contradiction.
(ii) From <cit.>, we have ℰ_𝒜ℬ𝒮(G)≤√(n trace(𝒜𝒮^2(G))) and the equality holds if and only if |η_1|=|η_2|=⋯=|η_n|. Therefore, ℰ_𝒜ℬ𝒮(G)≤√(2n (m-H(G))). Suppose |η_1|=|η_2|=⋯=|η_n|. Then the eigenvalues of 𝒜𝒮(G) are all equal or it has exactly two distinct eigenvalues. So, by Theorems <ref> and <ref>, each component of G is K_n_1 for some positive integer n_1. Furthermore, n_1=1 or 2, since otherwise η_1>η_2. This completes the proof.
To prove our next upper bound on ℰ_𝒜ℬ𝒮(G), we need the following lemma.
<cit.>
A regular connected graph G is strongly regular if and only if it has three distinct eigenvalues.
Let G be a graph of order n with m edges.
If m=H(G) or 2(m-H(G))≥ n, then ℰ_𝒜ℬ𝒮(G)≤2𝒜ℬ𝒮(G)/n+√((n-1)(2m-2H(G)-4(𝒜ℬ𝒮(G))^2/n^2)). Further, equality holds if and only if G≅ pK_2∪ qK_1, where 2p+q=n, G≅ K_n, or G is a non-complete strongly regular graph.
Using Cauchy-Schwarz inequality,
ℰ_𝒜ℬ𝒮(G)=∑_i=1^n |η_i| =η_1+∑_i=2^n |η_i|
≤η_1+√((n-1)(2m-2H(G)-η_1^2)),
where the equality holds if and only if |η_2|=|η_3|=⋯=|η_n|. Let g(x)=x+√((n-1)(2m-2H(G)-x^2)). Then by the first derivative test, the function g is decreasing for √(2(m-H(G))/n)≤ x≤√(2(m-H(G))).
Now, 2𝒜ℬ𝒮(G)/n=(2/n)∑_v_iv_j∈ E(G)√((d_i+d_j-2)/(d_i+d_j))≥(2/n)∑_v_iv_j∈ E(G)(d_i+d_j-2)/(d_i+d_j)=2(m-H(G))/n≥√(2(m-H(G))/n) (because 2(m-H(G))≥ n). That is, 2𝒜ℬ𝒮(G)/n≥√(2(m-H(G))/n). Upon combining the above inequality with Theorem <ref>, we get
√(2(m-H(G))/n)≤2𝒜ℬ𝒮(G)/n≤η_1≤√(2(m-H(G))).
Therefore,
ℰ_𝒜ℬ𝒮(G)≤ g(η_1)≤ g(2𝒜ℬ𝒮(G)/n)= 2𝒜ℬ𝒮(G)/n+√((n-1)(2m-2H(G)-4(𝒜ℬ𝒮(G))^2/n^2))
.
Suppose the equality in equation (<ref>) holds. Then η_1=2𝒜ℬ𝒮(G)/n and |η_2|=|η_3|=⋯=|η_n|. Thus, 𝒜𝒮(G) has at most three distinct eigenvalues. If 𝒜𝒮(G) has at most two distinct eigenvalues, then by Theorems <ref> and <ref>, G≅ pK_2∪ qK_1, where 2p+q=n, or G≅ K_n. Otherwise, 𝒜𝒮(G) has exactly three distinct eigenvalues. Now, by Theorem <ref>, G is a k-regular graph for some constant k, and so 𝒜𝒮(G)=√((k-1)/k)A(G). If 𝒜𝒮(G) has exactly three distinct eigenvalues, then G has exactly three distinct eigenvalues. Therefore by Lemma <ref>, G must be a non-complete strongly regular graph. Conversely, if G≅ pK_2∪ qK_1, where 2p+q=n, or G≅ K_n, then one can easily see that the equality in (<ref>) holds. Suppose G is a non-complete strongly k-regular graph; then η_1=√(k(k-1)) and |η_j|=√((n-k)(k-1)/(n-1)) for j=2,3,…,n. Now, one can easily check that the equality in (<ref>) holds. This completes the proof of the theorem.
The following lemmas are useful to prove our next result.
<cit.>
If M is a Hermitian n× n matrix, then |θ_1(M)-θ_n(M)|≥ 2max_j(∑_k:k≠ j|a_jk|^2)^1/2.
Let G be a connected graph of order n≥ 2 with maximum degree Δ and minimum degree δ. Then
2√(Δ(Δ+δ-2)/(Δ+δ))≤η_1+|η_n|≤ 2√(m-H(G)).
By Lemma <ref>,
η_1+|η_n|≥ 2 max_i(∑_v_j:v_i∼ v_j(d_i+d_j-2)/(d_i+d_j))^1/2≥ 2√(Δ(Δ+δ-2)/(Δ+δ)).
This proves the left inequality.
Now, by Cauchy-Schwarz inequality and from Proposition <ref>,
η_1+|η_n|≤√(2(η_1^2+η_n^2))≤√(4(m-H(G)))=2√(m-H(G)).
Let G be an (n, m)-graph with maximum degree Δ.
If 2(m-H(G))/n≤Δ(Δ-1)/(Δ+1), then ℰ_𝒜ℬ𝒮(G)≤ 2√(Δ(Δ-1)/(Δ+1))+√(2(n-2)(m-H(G)-Δ(Δ-1)/(Δ+1))). Equality holds if and only if G≅ pK_1,Δ∪ qK_2∪ rK_1, where p=0 or 1 and p(Δ+1)+2q+r=n.
Let η_1≥η_2≥⋯≥η_n be the 𝒜ℬ𝒮 eigenvalues of G. Using Cauchy-Schwarz inequality,
ℰ_𝒜ℬ𝒮(G)=η_1+|η_n|+∑_i=2^n-1|η_i|≤η_1+|η_n|+√(∑_i=2^n-1(n-2)|η_i|^2),
where the equality holds if and only if |η_2|=|η_3|=⋯=|η_n-1|.
Therefore, by Proposition <ref>,
ℰ_𝒜ℬ𝒮(G) ≤η_1+|η_n|+√((n-2)(2m-2H(G)-η_1^2-η_n^2)).
Further, by the A.M.-G.M. inequality, 2√(η_1|η_n|)≤η_1+|η_n| and the equality holds if and only if η_1=|η_n|. Thus
ℰ_𝒜ℬ𝒮(G) ≤η_1+|η_n|+√((n-2)(2m-2H(G)-(η_1+|η_n|)^2/2)).
Let f(x)=2x+√((n-2)(2m-2H(G)-2x^2)). Then f is decreasing for √(2(m-H(G))/n)≤ x≤√(m-H(G)). By Lemma <ref>,
√(2(m-H(G))/n)≤√(Δ(Δ-1)/(Δ+1))≤(η_1+|η_n|)/2≤√(m-H(G)).
So,
ℰ_𝒜ℬ𝒮(G) ≤ f((η_1+|η_n|)/2)≤ f(√(Δ(Δ-1)/(Δ+1)))
=2√(Δ(Δ-1)/(Δ+1))+√((n-2)(2m-2H(G)-2Δ(Δ-1)/(Δ+1))).
Suppose the equality in equation (<ref>) holds. Then η_1=|η_n|=√(Δ(Δ-1)/(Δ+1)) and |η_2|=|η_3|=⋯=|η_n-1|. Thus, by the Perron-Frobenius theorem, G is a bipartite graph.
If Δ=1, then G≅ qK_2∪ rK_1, where 2q+r=n. Otherwise, Δ>1, and so η_1=|η_n|>0. Let H be a component of G having a vertex of degree Δ. Since G is bipartite, K_1,Δ is an induced subgraph of H. So, by Cauchy's interlacing theorem,
η_1(H)≥η_1(K_1,Δ)=√(Δ(Δ-1)/(Δ+1)). Moreover, the equality holds if and only if H≅ K_1,Δ. Now, η_1=√(Δ(Δ-1)/(Δ+1))≥η_1(H) because H is a component of G. Therefore, η_1(H)=η_1=√(Δ(Δ-1)/(Δ+1)) and H≅ K_1,Δ. Further, 0 is an 𝒜ℬ𝒮 eigenvalue of H, and thus the 𝒜ℬ𝒮 eigenvalues of G are η_1=|η_n|=√(Δ(Δ-1)/(Δ+1)) and |η_2|=|η_3|=⋯=|η_n-1|=0. Therefore, if H_1≠ H is a component of G, then all its 𝒜ℬ𝒮 eigenvalues are equal to 0. Therefore, by Theorem <ref>, H_1≅ K_2 or K_1. Thus G≅ pK_1,Δ∪ qK_2∪ rK_1, where p=0 or 1 and p(Δ+1)+2q+r=n. Conversely, if G≅ pK_1,Δ∪ qK_2∪ rK_1, where p=0 or 1 and p(Δ+1)+2q+r=n, then one can easily verify that the equality holds.
Let φ_1≥φ_2≥⋯≥φ_n be the 𝒜ℬ𝒞 eigenvalues and η_1≥η_2≥⋯≥η_n be the 𝒜ℬ𝒮 eigenvalues of a graph G without pendant vertices. Then ℰ_𝒜ℬ𝒮(G)≥√(2/n)ℰ_𝒜ℬ𝒞(G).
We have,
(ℰ_𝒜ℬ𝒮(G))^2 =(∑_i=1^n |η_i|)^2=∑_i=1^n η_i^2+2∑_i<j|η_i||η_j|
≥∑_i=1^n η_i^2+2 |∑_i<jη_iη_j| (by triangle inequality)
= 2 ∑_i=1^n η_i^2 (because ∑_i=1^n η_i^2=-2 ∑_i<jη_iη_j)
=4∑_v_iv_j∈ E(G)(d_i+d_j-2)/(d_i+d_j)
≥ 4∑_v_iv_j∈ E(G)(d_i+d_j-2)/(d_id_j)
=2∑_i=1^nφ_i^2≥(2/n)(∑_i=1^n|φ_i|)^2 (by the Cauchy-Schwarz inequality)
=(2/n)(ℰ_𝒜ℬ𝒞(G))^2.
Thus, ℰ_𝒜ℬ𝒮(G)≥√(2/n)ℰ_𝒜ℬ𝒞(G).
§ QSPR ANALYSIS OF BENZENOID HYDROCARBON
In this section, we show that the physicochemical properties, namely the boiling point (BP) and the pi-electron energy (ℰ_π) of benzenoid hydrocarbons, can be modeled using the 𝒜ℬ𝒮 energy.
The experimental values listed in this section are taken from <cit.>. The hydrogen-suppressed molecular graphs are depicted in Figure <ref>.
The calculated values of the 𝒜ℬ𝒮 energy for benzenoid hydrocarbons are shown in Table <ref>.
Consider the following model:
Y= A (± S_e)ℰ_𝒜ℬ𝒮+B(± S_e),
where Y, A, S_e and B denote the property, slope, standard error of coefficients and intercept, respectively. We denote the correlation coefficient, standard error of the model, the F-test value and the significance by r, SE, F and SF, respectively.
For benzenoid hydrocarbons, it is found that the 𝒜ℬ𝒮 energy has a strong correlation with the boiling point and pi-electron energy. In fact, we get the following regression equations for benzenoid hydrocarbons using model (<ref>).
BP=22.684(± 0.4013)ℰ_𝒜ℬ𝒮+2.25(± 8.7952),
r^2=0.9941, SE=7.8838, F=3194.002, SF=1.23× 10^-22.
ℰ_π=-1.2668(± 0.01235)ℰ_𝒜ℬ𝒮+1.0586(± 0.2706),
r^2=0.9982, SE=0.2425, F=10522.29, SF=1.54× 10^-27.
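For completeness, a least-squares fit of model (<ref>) can be reproduced with a few lines of Python; the snippet below (an illustration added here, not from the original text) uses synthetic placeholder data generated around the reported slope and intercept, since the actual ℰ_𝒜ℬ𝒮 and boiling-point values are those listed in Table <ref> and are not reproduced here.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
e_abs = np.linspace(10.0, 30.0, 22)                     # placeholder ABS energies (synthetic)
bp = 22.684 * e_abs + 2.25 + rng.normal(0.0, 8.0, 22)   # synthetic data near the reported fit

res = stats.linregress(e_abs, bp)                       # fits BP = A * E_ABS + B
print("A   =", res.slope, "+/-", res.stderr)
print("B   =", res.intercept, "+/-", res.intercept_stderr)
print("r^2 =", res.rvalue ** 2)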
The data variance for BP and pi-electron energy is around 99%. The standard errors are very low, particularly in model (<ref>), where they are significantly small. This low standard error enhances the model's consistency and increases the F-value, especially for pi-electron energy. The SF values are significantly below 0.05. The predicted properties from model (<ref>) are compared with the experimental properties using bar diagrams, where series 1 is related to experimental value and series 2 is related to predicted value. These figures show that the experimental and predicted data align well. Additionally, the residuals are randomly scattered around the zero line, indicating that the model is consistent.
In <cit.>, the QSPR analysis of benzenoid hydrocarbons is carried out using the second-degree based entropy, ve-degree irregularity index, Albertson index, first and second status connectivity indices, first and second eccentric connectivity indices, Wiener index, Sombor index, reduced Sombor index and 𝒜ℬ𝒮 index. It is observed that the |r| value obtained for BP using ℰ_𝒜ℬ𝒮 is better than the |r| values obtained from these indices. Further, ℰ_𝒜ℬ𝒮 has higher pi-electron energy predictive ability compared to the second-degree based entropy and the 𝒜ℬ𝒮 index. With the smaller standard error and higher F-value of the proposed models, we can conclude that the performance of the models is better than that of the models discussed in <cit.> using the 𝒜ℬ𝒮 index.
§ CONCLUSIONS
In this work, we have determined all connected graphs with η_n>-1. As a result, graphs with two distinct 𝒜ℬ𝒮 eigenvalues are classified. Also, bipartite graphs with three distinct 𝒜ℬ𝒮 eigenvalues are determined. Further, some bounds on the spectral radius and energy of the matrix 𝒜𝒮(G) are obtained. The problem of characterizing non-bipartite graphs with three distinct 𝒜ℬ𝒮 eigenvalues remains open. Also, the chemical importance of 𝒜ℬ𝒮 energy is demonstrated. As a future work, the problem of obtaining sharp bounds for the spectral radius and energy of the matrix 𝒜𝒮(G) in terms of graph parameters would be interesting.
99
akbariarelations S. Akbari, M. Habibi, S. Rabizadeh, Relations between Energy and Sombor Index,
MATCH Commun. Math. Comput. Chem. 92(2024) 425–435.
alawiah2018new N. Alawiah, N. J. Rad, A. Jahanbani, H. Kamarulhaili. New upper bounds on the energy of a graph, MATCH Commun. Math. Comput. Chem. 79(2018) 287–301.
ali2020symmetric A. Ali, S. Elumalai, T. Mansour, On the symmetric division deg index of molecular graphs, MATCH Commun. Math. Comput. Chem. 83 (2020) 205–220.
ali2022atom A. Ali, B. Furtula, I.Redžepović, I. Gutman, Atom-bond sum-connectivity
index, Journal of Mathematical Chemistry. 60 (2022) 2081–2093.
ali2024extremal A. Ali, I. Gutman, B. Furtula, I. Redžepović, T. Došlić, Z. Raza, Extremal
results and bounds for atom-bond sum-connectivity index, MATCH Commun. Math. Comput. Chem. 92(2024) 271–314.
ali2023atom A. Ali, I. Gutman, and I.Redžepović, Atom-bond sum-connectivity index of unicyclic
graphs and some applications, Electron. J. Math. 5(2023) 1–7.
brouwer2011spectra A. E. Brouwer, W. H. Haemers, Spectra of graphs, Springer, New York, 2011.
cvetkovic1980spectra D. M. Cvetković, M. Doob, H. Sachs, Spectra of graphs: theory and applications, Academic Press, New York, 1980.
tavakoli2024energy K. C. Das, G. Ali, M. Tavakoli, On the Energy and Spread of the Adjacency, Laplacian and Signless Laplacian Matrices of Graphs, MATCH Commun. Math. Comput. Chem. 92(2024) 545–566.
das2018degree K. C. Das, I. Gutman, I. Milovanović, E. Milovanović, B. Furtula, Degree-based
energies of graphs, Linear algebra and its applications. 554(2018) 185–204.
das2022ve K. C. Das, S. Mondal, On ve-degree irregularity index of graphs and its applications as molecular descriptor, Symmetry. 14(2022) 2406.
das2023neighborhood K. C. Das, S. Mondal, On neighborhood inverse sum indeg index of molecular
graphs with chemical significance, Information Sciences. 623(2023) 112–131.
espinalgraph C. Espinal, J. Rada, Graph energy change due to vertex deletion, MATCH Commun. Math. Comput. Chem. 92(2024) 89–103.
estrada2017meaning E. Estrada, M. Benzi, What is the meaning of the graph energy after all?, Discrete
Applied Mathematics. 230(2017) 71–77.
gutman2013degree I. Gutman, Degree-based topological indices, Croatica Chemica Acta. 86(2013) 351–361.
gutman2021geometric I. Gutman, Geometric approach to degree-based topological indices: Sombor indices,
MATCH Commun. Math. Comput. Chem. 86(2021) 11–16.
gutman2020research I. Gutman, H. Ramane, Research on graph energies in 2019,MATCH Commun. Math. Comput. Chem. 84(2020) 277–292.
horn2012matrix R. A. Horn, C. R. Johnson, Matrix analysis, Cambridge University Press, 2012.
li2012graph X. Li, Y. Shi, I. Gutman, Graph energy, Springer Science & Business Media, 2012.
lin2024abs Z. Lin, T. Zhou,Y. Liu, On ABS Estrada index of trees, Journal of Applied
Mathematics and Computing.(2024) 1–13.
lin2024abstrees Z. Lin, T. Zhou,Y. Liu, On the atom-bond sum-connectivity spectral radius of trees, Discrete Mathematics Letters.(2024) 122–127.
liu2021more H. Liu, H. Chen, Q. Xiao, X. Fang, Z. Tang, More on Sombor indices of chemical graphs and their applications to the boiling point of benzenoid hydrocarbons,
International Journal of Quantum Chemistry. 121(2021) 26689.
merikoski2003characterizations J. K. Merikoski, R. Kumar, Characterizations and lower bounds for the spread
of a normal matrix, Linear Algebra and its Applications. 364(2003) 13–31.
mondal2023degree S. Mondal, K. C. Das, Degree-Based Graph Entropy in Structure–Property Modeling, Entropy. 25(2023) 1092.
nithya2023smallest P. Nithya, S. Elumalai, S. Balachandran, S. Mondal, Smallest abs index of
unicyclic graphs with given girth, Journal of Applied Mathematics and Computing. 69(2023) 3675–3692.
oboudi2019new M. R. Oboudi, A new lower bound for the energy of graphs, Linear Algebra and its
Applications. 580(2019) 384–395.
ramane2017status H. S. Ramane, A. S. Yalnaik, Status connectivity indices of graphs and its applications to the boiling point of benzenoid hydrocarbons, Journal of Applied
Mathematics and Computing. 55 (2017) 609–627.
zhou2010sum B. Zhou, N. Trinajstic, On sum-connectivity matrix and sum-connectivity energy of (molecular) graphs, Acta Chim. Slov. 57 (2010) 518–523.
|
http://arxiv.org/abs/2409.02622v1 | 20240904113148 | Renormalization Group Equations for the Dimension-7 SMEFT Operators | [
"Di Zhang"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
Accurate calibration spectra for precision radial velocities
A. Reiners1
M. Debus1
S. Schäfer1
E. Tiemann2
M. Zechmeister1
September 9, 2024
=========================================================================================
§ INTRODUCTION
The Standard Model (SM) of particle physics is very successful in describing strong, weak and electromagnetic interactions and has passed a plethora of precision tests <cit.>, but it is unable to explain neutrino masses, dark matter and the matter-antimatter asymmetry of the Universe, and hence believed to be incomplete <cit.>. In the case where the energy scale Λ of new physics is much higher than the electroweak scale, one can make use of the SM effective field theory (SMEFT) <cit.>
ℒ_ SMEFT = ℒ_ SM + 1/2( C^αβ_5 𝒪^(5)_αβ + h.c.) + ∑_i C^i_6 𝒪^(6)_i + ∑_j C^j_7 𝒪^(7)_j + … ,
to model-independently describe and study indirect low-energy consequences of new physics that are encoded in the Wilson coefficients C_d of non-renormalizable operators 𝒪^(d) with d>4 being the mass dimension. In the SMEFT, the operators 𝒪^(d) consist of the SM fields and preserve the SM gauge symmetry and Lorentz invariance, and their Wilson coefficients C_d are suppressed by 1/Λ^d-4.
To investigate low-energy consequences of new physics within the SMEFT, one has to construct an operator basis that contains a set of independent operators. The number of independent operators at each mass dimension is definite and can be worked out by the Hilbert series <cit.>, but the specific operators in a physical basis are usually ambiguous <cit.>. The dim-5 operator is unique and known as the Weinberg operator <cit.>. For dim-6 operators, the most popular basis is the so-called Warsaw basis <cit.>. The physical basis for dim-7 operators has been constructed in Refs. <cit.> (see the latest review <cit.> and references therein for the higher-dimensional-operator basis). In general, those operators are generated at the cut-off scale Λ where new physics decouples, and their Wilson coefficients are obtained via matching conditions at Λ. Then, one may make use of the SMEFT renormalization group equations (RGEs) to run those Wilson coefficients from the cut-off scale Λ down to the electroweak scale so as to encounter them with precision observables <cit.>. Due to the RG mixing among different operators, an operator not generated by matching can achieve a non-vanishing Wilson coefficient through running. Such non-trivial relations among operators need to be carefully considered in a phenomenological analysis <cit.>. Therefore, the SMEFT RGEs have been extensively discussed in literature, e.g., see Ref. <cit.> and references therein.
In this talk, we focus on RGEs of dim-5 and dim-7 operators up to 𝒪( Λ^-3), whose forms are
Ċ_5 = γ^(5,5) C_5 + γ̂^(5,5) C_5 C_5 C_5 + γ^(5,6)_i C_5 C^i_6 + γ^(5,7)_i C^i_7 ,
Ċ^i_7 = γ^(7,7)_ij C^j_7 + γ^(7,5)_i C_5 C_5 C_5 + γ^(7,6)_ij C_5 C^j_6
with Ċ^⋯_⋯≡μ d C^⋯_⋯ / dμ and γ^…_… being the anomalous dimension matrix. γ^(5,5) and γ^(7,7) in Eq. (<ref>) have been taken into consideration in Refs. <cit.> and <cit.>, respectively, and the others have been partially discussed with some approximations in Ref. <cit.>. We attempt to complete the RGEs for all dim-5 and dim-7 operators up to 𝒪( Λ^-3) without any approximation. To achieve our goal, we first construct a new physical basis for dim-7 operators, which is more suitable for calculations and yields more compact results than the one proposed in Ref. <cit.>. Moreover, the so-called Green's basis <cit.>, which is directly related to 1PI Green's functions and needed in intermediate calculations with the off-shell scheme, is also constructed for dim-7 operators, together with the reduction relations converting the Green's basis into the physical one. With those two bases and the reduction relations between them, we derive the complete one-loop RGEs of dim-5 and dim-7 operators up to 𝒪( Λ^-3).
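To illustrate how such RGEs are used in practice, the following toy Python sketch (added here for illustration, with made-up numbers and not taken from this work) integrates μ dC/dμ = γ C/(16π^2) for a small vector of Wilson coefficients from the cutoff scale down to the electroweak scale, assuming a constant anomalous dimension matrix; in the full results γ depends on the running SM couplings, and conventions for the loop factor differ. It also shows how an operator that vanishes at the matching scale acquires a non-zero coefficient through mixing.

import numpy as np
from scipy.integrate import solve_ivp

def run_wilson(C_at_Lambda, gamma, Lambda, mu_low=91.19):
    # integrate dC/dt = gamma C / (16 pi^2) with t = ln(mu), downward in scale
    rhs = lambda t, C: gamma @ C / (16.0 * np.pi**2)
    sol = solve_ivp(rhs, (np.log(Lambda), np.log(mu_low)), C_at_Lambda, rtol=1e-9)
    return sol.y[:, -1]

gamma = np.array([[4.0, 1.0],      # toy 2x2 anomalous dimension matrix (made-up numbers)
                  [0.5, -2.0]])
C0 = np.array([1.0, 0.0])          # only the first operator is generated by matching
print(run_wilson(C0, gamma, Lambda=1.0e4))   # the second coefficient is generated by mixing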
§ OPERATOR BASES FOR DIM-7 OPERATORS
The operator basis for dim-7 operators was first discussed in Ref. <cit.>, and two redundant operators were removed from this basis later <cit.>. However, there were still some redundancies in this basis due to nontrivial flavor relations among operators induced by equations of motion (EoMs) <cit.>. Those redundancies were removed and a physical basis was put forward in Ref. <cit.>. Here, we propose a new physical basis for dim-7 operators in Table <ref> <cit.>. The difference between this basis and the one in Ref. <cit.> is the way to determine the non-redundant degrees of freedom in the operators 𝒪_eℓℓℓ H and 𝒪_ℓdddH. We decompose those operators with flavor relations by means of SU(3) tensor decomposition, since each fermion field can be regarded as the fundamental representation of an SU(3) flavor symmetry. For instance, 𝒪_eℓℓℓ H can be decomposed into one totally symmetric, one totally anti-symmetric and two mixed-symmetric combinations, namely
𝒪^αβγλ_eℓℓℓ H = 𝒪^(S)αβγλ_eℓℓℓ H + 𝒪^(A)αβγλ_eℓℓℓ H + 𝒪^(M)αβγλ_eℓℓℓ H + 𝒪^(M^')αβγλ_eℓℓℓ H .
The last combination 𝒪^(M^')αβγλ_eℓℓℓ H = ( 𝒪^αβγλ_eℓℓℓ H + 𝒪^αλγβ_eℓℓℓ H - 𝒪^αλβγ_eℓℓℓ H - 𝒪^αγβλ_eℓℓℓ H)/3 vanishes automatically due to the flavor relation 𝒪^αβγλ_eℓℓℓ H + 𝒪^αλγβ_eℓℓℓ H - 𝒪^αλβγ_eℓℓℓ H - 𝒪^αγβλ_eℓℓℓ H = 0,
whereas the other three, explicitly given in Table <ref>, are free from the flavor relation and, unlike those in Ref. <cit.>, their flavor indices are not constrained and run over all flavors. Therefore, the basis in Table <ref> is more suitable for calculations and for organizing results in a compact form. Similarly, one can construct a Green's basis for dim-7 operators, in which there are eight extra operators, i.e., <cit.>
^(S)αβ_ℓ HD3 = 1/2( ^αβ_ℓ HD3 + ^βα_ℓ HD3) with ^αβ_ℓ HD3 = ϵ^adϵ^be( ℓ^a_α L C ℓ^b_β L) D^μ H^d D^_μ H^e ,
^(S)αβ_ℓ HD4 = 1/2( ^αβ_ℓ HD4 + ^βα_ℓ HD4) with ^αβ_ℓ HD4 = ϵ^adϵ^be( D^μℓ^a_α L C D^_μℓ^b_β L) H^d H^e ,
^αβ_ℓ HD5 = ϵ^abϵ^de( ℓ^a_α L C σ^_μν D^μℓ^b_β L) H^d D^ν H^e ,
^(S)αβ_ℓ HD6 = 1/2( ^αβ_ℓ HD6 + ^βα_ℓ HD6) with ^αβ_ℓ HD6 = ϵ^adϵ^be( D^μℓ^a_α L C σ^_μν D^νℓ^b_β L) H^d H^e ,
^αβγλ_dℓℓ D u = ϵ^ab( D^_α Rℓ^a_β L) ( ℓ^b_γ L C γ^_μ iD^μ U^_λ R ) , ^αβγλ_d D ℓℓ u = ϵ^ab( D^_α R iD^μℓ^a_β L) ( ℓ^b_γ L C γ^_μ U^_λ R) ,
^αβγλ_ℓdDqd = ( ℓ^_α L D^_β R) ( iD^μ Q^_γ L C γ^_μ D^_λ R) , ^αβγλ_ℓ d q D d = ( ℓ^_α L D^_β R) ( Q^_γ L C γ^_μ iD^μ D^_λ R) ,
apart from the operators in Table <ref> with ^(S)_ℓ HD1, ^(S)_ℓ HD2, ^(S)_edddD, ^(S)_duℓℓ D and ^(S)_ℓqddD replaced by ^_ℓ HD1, ^_ℓ HD2, ^_edddD, ^_duℓℓ D and ^_ℓqddD. By mean of the SM fields' EoMs, one can obtain the reduction relations to convert operators in the Green's basis to those in the physical basis, e.g.,
C^αβ_ℓ eHD = G^αβ_ℓ eHD + 1/2( G^†_5 )^αγ[ G^γβ_eHD2 - G^γβ_eHD4 - 2 G^γλ_ℓ D( Y^_l )^_λβ] - 1/4( G^αγ_ℓ HD2 - G^γα_ℓ HD2)( Y^_l )^_γβ
+ ( G^αγ_ℓ HD5 + G^(S)αγ_ℓ HD6 - G^(S)αγ_ℓ H D 4) ( Y^_l )^_γβ ,
where some redundant dim-6 operators are involved due to the existence of 𝒪^(5). More details on the basis construction for dim-7 operators and the full reduction relations can be found in Ref. <cit.>.
§ RGES OF DIM-5 AND DIM-7 OPERATORS IN THE SMEFT
Starting with the physical basis in Table <ref>, we calculate a set of 1PI diagrams to extract counterterms in the Green's basis with the modified minimal subtraction scheme, and then with the help of reduction relations, we obtain all counterterms in the physical basis, from which one can derive RGEs for Wilson coefficients of all operators. Since the calculations and results are pretty lengthy (see Refs. <cit.>), we only show two examples for the coefficients of ^(5) and ^(S)_ℓ H here:
^αβ_5 = 1/2( -3g^2_2 + 4λ +2T ) C^αβ_5 - 3/2( Y^_l Y^†_l C^_5 )^αβ + m^2 ( 8C^_H□ - C^_HD) C^αβ_5 + m^2 { 8 C^(S)∗αβ_ℓ H.
+ 3/2 g^2_2 ( 2 C^(S)∗αβ_ℓ HD1 + C^(S)∗αβ_ℓ HD2) + ( Y^_l Y^†_l C^(S)†_ℓ HD1)^αβ - 1/2( Y^_l Y^†_l C^(S)†_ℓ HD2)^αβ + 2( Y^_l C^†_ℓ e HD)^αβ
- . ( Y^†_l )^_γλ( 3 C^(S)∗γλαβ_eℓℓℓ H + 2 C^(M)∗γλαβ_eℓℓℓ H) - 3 ( Y^†_ d)^_γλ C^∗γαλβ_dℓ qℓ H1 + 6 ( Y^_ u)^_λγ C^∗λγαβ_quℓℓ H} + α↔β ,
Ċ^(S)αβ_ℓ H = 1/2C^_5 C^†_5 C^∗αβ_5 + 5/4( C^†_5 C^_5 C^†_5 )^αβ + C^∗αβ_5 { - 3/4( g^2_1 - g^2_2 + 4λ) C^_HD + ( 16 λ - 5/3 g^2_2 ) C^_H □.
-3 C^_H - 3g^2_2 C^_HW + 3/2( g^2_1 C^_HB + 3g^2_2 C^_HW + g^_1g^_2 C^_HWB) - Tr[ 2g^2_2 ( C^(3)_Hq + 1/3 C^(3)_Hℓ)+ C^_eH Y^†_l .
+ .. 3C^_dH Y^†_ d + 3Y^_ u C^†_uH - 2 ( Y^†_l C^(3)_Hℓ Y^_l + 3 Y^†_ d C^(3)_Hq Y^_ d + 3 Y^†_ u C^(3)_Hq Y^_ u) + 3 ( Y^_ u C^_Hud Y^†_ d + Y^_ d C^†_Hud Y^†_ u) ] }
- 3 g^_2 ( C^†_5 Y^_l C^†_eW)^αβ + 3/2( g^2_1 + g^2_2 ) [ ( C^†_5 C^(3)_Hℓ)^αβ - ( C^†_5 C^(1)_Hℓ)^αβ] + 1/2( C^†_5 Y^_l C^†_eH)^αβ
+ ( C^†_5 C^_eH Y^†_l )^αβ - 3 ( C^†_5 Y^_l Y^†_l C^(3)_Hℓ)^αβ - 1/4( 3g^2_1 + 15g^2_2 - 80 λ - 8 T ) C^(S)αβ_ℓ H - 3/2( C^(S)_ℓ H Y^_l Y^†_l )^αβ
+ ( 2λ - 3/2 g^2_2 ) ( C^_ℓ e HD Y^†_l )^αβ + ( C^_ℓ e HD Y^†_l Y^_l Y^†_l )^αβ - 3/4 g^2_2 ( g^2_2 - 4 λ) C^(S)αβ_ℓ HD1 + λ( C^(S)_ℓ HD1 Y^_l Y^†_l )^αβ
- ( C^(S)_ℓ HD1 Y^_l Y^†_l Y^_l Y^†_l )^αβ - 3/8( g^4_1 + 2 g^2_1 g^2_2 + 3g^4_2 - 4g^2_2 λ) C^(S)αβ_ℓ HD2 - 1/2λ( C^(S)_ℓ HD2 Y^_l Y^†_l )^αβ
- ( C^(S)_ℓ HD2 Y^_l Y^†_l Y^_l Y^†_l )^αβ - 3 g^3_2 C^αβ_ℓ HW - 6 g^_2 ( C^_ℓ HW Y^_l Y^†_l )^αβ - 3 C^(S)γλαβ_eℓℓℓ H[ λ( Y^_l )^_λγ - ( Y^_l Y^†_l Y^_l )^_λγ]
- 2C^(M)γλαβ_eℓℓℓ H[ λ( Y^_l )^_λγ - ( Y^_l Y^†_l Y^_l )^_λγ] - 3 C^γαλβ_dℓ qℓ H1[ λ( Y^_ d)^_λγ - ( Y^_ d Y^†_ d Y^_ d)^_λγ]
+ 6 C^γλαβ_quℓℓ H[ λ( Y^†_ u)^_λγ - ( Y^†_ u Y^_ u Y^†_ u)^_λγ] + α↔β ,
which satisfy the general form in Eq. (<ref>). The nonrenormalization theorem <cit.> can predict the zero entries in the anomalous dimension matrix for mixing among the same dimensional operators. Based on our results, we show the anomalous dimension matrix γ^(7,7)_ij for baryon-number-conserving operators in Table <ref>, where the zero entries in light grey cells and the non-vanishing entries in dark grey cells having Yukawa couplings of nonholomorphic forms are fully consistent with the nonrenormalization theorem. One may also check γ^(7,7)_ij for baryon-number-violating operators and find they are coincident with the nonrenormalization theorem as well <cit.>. Moreover, our results for mixing among different dimensional operators, e.g., γ^(7,6)_ij, may give an insight into a non-linear version of the theorem. One may refer to Refs. <cit.> for more discussions about the results.
§ SUMMARY
We have proposed a new physical basis and a Green's basis for dim-7 operators in the SMEFT, where there are no constraints on operators' flavor indices and they can run over all flavors. Therefore, those bases are suitable for matching and derivation of RGEs, and can also keep results in a compact form. Based on those two bases and the reduction relations among them, we have derived the complete one-loop RGEs of dim-5 and dim-7 operators up to 𝒪( Λ^-3) in the SMEFT. With those obtained results, one can discuss full RG running effects on some appealing lepton- or baryon-number-violating observables or processes up to 𝒪( Λ^-3) in the SMEFT, such as neutrino masses, neutrinoless double beta decay, meson and nucleon decays (see, e.g., Refs. <cit.> and references therein).
This work is supported by the Alexander von Humboldt Foundation.
99
ParticleDataGroup:2024cfk
S. Navas et al. [Particle Data Group],
Phys. Rev. D 110 (2024) no.3, 030001.
Xing:2020ijf
Z. z. Xing,
Phys. Rept. 854 (2020), 1-147
[arXiv:1909.09610 [hep-ph]].
Buchmuller:1985jz
W. Buchmuller and D. Wyler,
Nucl. Phys. B 268 (1986), 621-653.
Grzadkowski:2010es
B. Grzadkowski, M. Iskrzynski, M. Misiak and J. Rosiek,
JHEP 10 (2010), 085
[arXiv:1008.4884 [hep-ph]].
Henning:2015alf
B. Henning, X. Lu, T. Melia and H. Murayama,
JHEP 08 (2017), 016
[erratum: JHEP 09 (2019), 019]
[arXiv:1512.03433 [hep-ph]].
Brivio:2017vri
I. Brivio and M. Trott,
Phys. Rept. 793 (2019), 1-98
[arXiv:1706.08945 [hep-ph]].
Isidori:2023pyp
G. Isidori, F. Wilsch and D. Wyler,
Rev. Mod. Phys. 96 (2024) no.1, 015006
[arXiv:2303.16922 [hep-ph]].
Weinberg:1979sa
S. Weinberg,
Phys. Rev. Lett. 43 (1979), 1566-1570.
Lehman:2014jma
L. Lehman,
Phys. Rev. D 90 (2014) no.12, 125023
[arXiv:1410.4193 [hep-ph]].
Liao:2016hru
Y. Liao and X. D. Ma,
JHEP 11 (2016), 043
[arXiv:1607.07309 [hep-ph]].
Liao:2019tep
Y. Liao and X. D. Ma,
JHEP 03 (2019), 179
[arXiv:1901.10302 [hep-ph]].
Henning:2014wua
B. Henning, X. Lu and H. Murayama,
JHEP 01 (2016), 023
[arXiv:1412.1837 [hep-ph]].
Babu:1993qv
K. S. Babu, C. N. Leung and J. T. Pantaleone,
Phys. Lett. B 319 (1993), 191-198
[arXiv:hep-ph/9309223 [hep-ph]].
Chankowski:1993tx
P. H. Chankowski and Z. Pluciennik,
Phys. Lett. B 316 (1993), 312-317
[arXiv:hep-ph/9306333 [hep-ph]].
Antusch:2001ck
S. Antusch, M. Drees, J. Kersten, M. Lindner and M. Ratz,
Phys. Lett. B 519 (2001), 238-242
[arXiv:hep-ph/0108005 [hep-ph]].
Chala:2021juk
M. Chala and A. Titov,
Phys. Rev. D 104 (2021) no.3, 035002
[arXiv:2104.08248 [hep-ph]].
Jiang:2018pbd
M. Jiang, N. Craig, Y. Y. Li and D. Sutherland,
JHEP 02 (2019), 031
[erratum: JHEP 01 (2021), 135]
[arXiv:1811.08878 [hep-ph]].
Zhang:2023kvw
D. Zhang,
JHEP 10 (2023), 148
[arXiv:2306.03008 [hep-ph]].
Zhang:2023ndw
D. Zhang,
JHEP 02 (2024), 133
[arXiv:2310.11055 [hep-ph]].
Cheung:2015aba
C. Cheung and C. H. Shen,
Phys. Rev. Lett. 115 (2015) no.7, 071601
[arXiv:1505.01844 [hep-ph]].
|
http://arxiv.org/abs/2409.03710v1 | 20240904103135 | Inverse decision-making using neural amortized Bayesian actors | [
"Dominik Straub",
"Tobias F. Niehues",
"Jan Peters",
"Constantin A. Rothkopf"
] | cs.LG | [
"cs.LG",
"q-bio.NC",
"stat.ML"
] |
GoT-CQA: Graph-of-Thought Guided Compositional Reasoning for
Chart Question Answering
Lingling Zhang1# Muye Huang1# Qianying Wang2Corresponding author. # These authors contributed to the work equally. Yaxian Wang1 Wenjun Wu1 Jun Liu1
Xi’an Jiaotong University1 Lenovo Research2
{huangmuye, wyx1566, nickjun98}@stu.xjtu.edu.cn [email protected]
{zhanglling, liukeen}@xjtu.edu.cn
September 9, 2024
=================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Bayesian observer and actor models have provided normative explanations for many behavioral phenomena in perception, sensorimotor control, and other areas of cognitive science and neuroscience. They attribute behavioral variability and biases to different interpretable entities such as perceptual and motor uncertainty, prior beliefs, and behavioral costs. However, when extending these models to more complex tasks with continuous actions, solving the Bayesian decision-making problem is often analytically intractable. Moreover, inverting such models to perform inference over their parameters given behavioral data is computationally even more difficult. Therefore, researchers typically constrain their models to easily tractable components, such as Gaussian distributions or quadratic cost functions, or resort to numerical methods. To overcome these limitations, we amortize the Bayesian actor using a neural network trained on a wide range of different parameter settings in an unsupervised fashion. Using the pre-trained neural network enables performing gradient-based Bayesian inference of the Bayesian actor model's parameters. We show on synthetic data that the inferred posterior distributions are in close alignment with those obtained using analytical solutions where they exist. Where no analytical solution is available, we recover posterior distributions close to the ground truth. We then show that identifiability problems between priors and costs can arise in more complex cost functions. Finally, we apply our method to empirical data and show that it explains systematic individual differences of behavioral patterns.
§ INTRODUCTION
Explanations of human behavior based on Bayesian observer and actor models
have been widely successful,
because they structure the factors influencing behavior into interpretable components
<cit.>. They have explained a wide range of phenomena in perception <cit.>, motor control <cit.>, other domains of cognitive science <cit.>, and neuroscience <cit.>.
Bayesian actor models assume that an actor receives uncertain sensory information about the world. This sensory information is uncertain because of ambiguity of the sensory input or noise in neural responses <cit.>. To obtain a belief about the state of the world, the actor fuses this information with prior knowledge according to Bayes' rule. But humans do not only form beliefs about the world, they also act in it. In Bayesian actor models, this is expressed as the minimization of a cost function, which expresses the actor's goals and constraints. An optimal actor should minimize this cost function while taking their belief about the state of the world into account, i.e. minimize the posterior expected cost. However, there is not only uncertainty in perception, but also in action outcomes, due to the inherent variability of the motor system <cit.>. If an actor wants to perform the best action possible <cit.>, this uncertainty should also be incorporated into the decision-making process by integrating over the distribution of action outcomes.
One problem, which makes solving the Bayesian decision-making problem hard in practice, is that the expected cost is often not analytically tractable, because it involves integrals over the posterior distribution and the action distribution. Consequently, optimization of the expected cost is also often intractable.
Certain special cases, especially Gaussian distributions and quadratic cost functions, admit analytical solutions. Applications of Bayesian models have typically made use of these assumptions.
However, empirical evidence shows that the human sensorimotor system does not conform to these assumptions.
Noise in the motor system depends on the force produced <cit.> and variability in the sensory system follows a similar signal-dependence known as Weber's law <cit.>. Cost functions other than quadratic costs have been shown to be required in sensorimotor tasks <cit.>.
If we want to include in a model what is known about the human sensorimotor system or treat more naturalistic task settings, we need to incorporate these non-Gaussian distributions and non-quadratic costs.
Another challenge is that Bayesian actor models often have free parameters. It is common practice to set these parameters by hand, e.g. by choosing parameters of the prior distribution to match the statistics of the experiment or the natural environment or by choosing the cost function to express task goals defined by the experimenter. Predictions of the Bayesian model are then compared to behavioral data to assess optimality.
This practice is problematic because priors and costs might not be known beforehand and can be idiosyncratic to individual participants.
For example, an actor's cost function might not only contain task goals, e.g. hitting a target, but also internal factors. These can include cognitive factors such as computational resources <cit.>, but also more generally cognitive and physiological factors including biomechanical effort <cit.>. An actor's prior distribution might neither match the statistics of the task at hand nor those of the natural environment perfectly <cit.>.
In the spirit of rational analysis <cit.>, this has motivated researchers to invert Bayesian actor models, i.e. to use Bayesian actor models as statistical models of behavior and infer the free parameters from behavior.
This approach has come to be known in different application areas under different names, including doubly-Bayesian analysis <cit.>, cognitive tomography <cit.>, inverse reinforcement learning <cit.> and inverse optimal control <cit.>.
Because the forward problem of computing optimal Bayesian actions for a given perception and decision-making problem is already quite computationally expensive, the inverse decision-making problem, i.e.
performing inference about the parameters of such a Bayesian actor model given behavioral data, is an even more demanding task. For every evaluation of the probability of observed data given the actor's parameters,
one typically needs to solve the Bayesian actor problem. If only numerical solutions are available, this can be prohibitively expensive and makes computing gradients with respect to the parameters for efficient optimization or sampling difficult.
Here, we address these issues by providing a new method for inverse decision-making, i.e. Bayesian inference about the parameters of Bayesian actor models in sensorimotor tasks with continuous actions. Such tasks are widespread in cognitive science, psychology, and neuroscience
and include so called production, reproduction, magnitude estimation and adjustment tasks.
First, we formalize such tasks with Bayesian networks, both from the perspective of the researcher and from the perspective of the participant.
Secondly, we approximate the solution of the Bayesian decision-making problem with a neural network, which is trained in an unsupervised fashion using the decision problem's cost function as a stochastic training objective. Third, using the pre-trained neural network as a stand-in for the Bayesian actor within a statistical model enables efficient Bayesian inference of the Bayesian actor model's parameters given a dataset of observed behavior.
Fourth, we show on simulated datasets that the posterior distributions obtained using the neural network recover the ground truth parameters very closely to those obtained using the analytical solution for various typical response patterns like undershoots or regression to the mean behavior. Fifth, the identifiability between priors and costs of Bayesian actor models is investigated, which is now possible based on our proposed method.
Finally, we apply our method to human behavioral data from a bean-bag throwing experiment and show that the inferred cost functions explain the previously mentioned typical behavioral patterns not only in synthetically generated but also empirically observed data.
§ RELATED WORK
Inferring priors and costs from behavior has been a problem of interest in cognitive science for many decades. In psychophysics, for example, signal detection theory is an early example of an application of a Bayesian observer model used
to estimate sensory uncertainty and a criterion, which encompasses prior beliefs and a particular cost function <cit.>. Psychologists and behavioral economists have developed methods to measure the subjective utility function from economic decisions <cit.>. More recently, Bayesian actor models have been used within statistical models
to infer parameters of the observer's likelihood function <cit.>, the prior <cit.>, and the cost function <cit.>. The inference methods are often bespoke tools for the specific model considered in a study. They are also typically limited to discrete decisions and cannot be applied to continuous actions.
There are two notable exceptions.
<cit.> presented an inference framework for
Bayesian observer models using mixtures of Gaussians. While their approach only takes perceptual uncertainty into account in the decision-making process, here we assume that the agent also considers action variability.
Furthermore, instead of inverted Gaussian mixture cost functions, our method allows for arbitrary, parametric cost functions that are easily interpretable.
<cit.> introduced the idea of
approximating Bayesian decision-making with neural networks. They
trained neural networks in a supervised fashion
using a dataset of numerically optimized actions.
We extend this approach in three ways. First, we train the neural networks directly on the cost function of the Bayesian decision-making problem without supervision, overcoming the necessity for computationally expensive numerical solutions.
Second, in addition to cost function parameters and motor variability, we also infer priors and sensory uncertainty. Finally, we leverage the differentiability of neural networks in order to apply efficient gradient-based Bayesian inference methods, allowing us to draw thousands of samples from the posterior over parameters in a few seconds.
Our method is related to amortized inference <cit.>. A conceptual difference is that we amortize the solution of a Bayesian decision-making problem faced by a subject. This allows us to solve the Bayesian inference problem from the perspective of a researcher using efficient gradient-based inference techniques, without the need to use amortized likelihood-free inference.
§ BACKGROUND: BAYESIAN DECISION-MAKING
We start with the standard formulation of Bayesian decision theory <cit.>, illustrated in <ref> A.
An actor receives a stochastic observation m generated from a latent state s, according to a generative model p(m | s). Since the actor has no direct access to the true value of the state s, they need to infer it by combining a prior distribution p(s) with the likelihood p(m | s) using Bayes' rule
p(s | m) ∝ p(m | s) p(s).
This Bayesian inference process describes the actor's perception. Based on their perception, the actor's goal is to perform an optimal action a^*, which represents the intended motor response. This is commonly framed as a decision-making problem
based on a cost function ℓ(s, a). The optimal action a^* for the actor is the action that minimizes the expected cost under the posterior distribution
a^* = arg min_a 𝔼_p(s | m)[ℓ(s, a)] = arg min_a ∫ℓ(s, a) p(s | m) ds.
Often, the loss is not defined directly in terms of the performed action a, but in terms of the expectation over some stochastic version of it, r ∼ p(r | a), modeling variability in responses with the same intended action a.
Taking the expectation over the posterior distribution and the response distribution lets us expand <ref> as
a^* = arg min_a 𝔼_p(s | m)[ 𝔼_p(r | a)[ ℓ(s, r) ] ] = arg min_a ∫∫ℓ(s, r) p(s | m) p(r | a) ds dr,
which changes the problem conceptually from computing an estimate of a latent variable to performing an action that is subject to motor noise.
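As a simple closed-form illustration of this setup (a textbook special case added here for concreteness, not the model used in the experiments below), suppose the prior and likelihood are Gaussian, s ∼𝒩(μ_0, σ_0^2) and m | s ∼𝒩(s, σ_m^2), the response is the intended action corrupted by additive zero-mean motor noise, r = a + ϵ with ϵ∼𝒩(0, σ_r^2), and the cost is quadratic, ℓ(s, r) = (s - r)^2. The expected cost then decomposes as (a - μ_ post)^2 + σ_ post^2 + σ_r^2 with posterior mean μ_ post = (σ_0^2 m + σ_m^2 μ_0) / (σ_0^2 + σ_m^2), so the optimal action is simply a^* = μ_ post. For signal-dependent noise or asymmetric, non-quadratic costs, no such closed form is available in general, which is exactly the situation addressed by the method below.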
§ METHOD
The Bayesian decision-making problem in <ref> describes the situation faced by a subject performing a task.
For example, consider a subject in an experiment throwing a ball at a target. There is uncertainty in the perception of the target, which increases with the distance to the target. The subject has a prior belief about target locations, which might bias their perception. Additionally, there is uncertainty in the motor actions, such that repeated throws aimed at the same location will be variable. The task can be described by a cost function. For example, this cost function could consist of two parts. One part could be a function of the distance between the ball and the target. The other part could relate to motor effort, with throws at a larger distance being more costly. This is only an example and our framework in general allows for arbitrary parametric families of cost functions. As a Bayesian actor, the subject then chooses the action that minimizes the expected cost, taking into account the uncertainty in perception and action.
From the researcher's perspective, we want to infer the parameters of the subject's perception-action system. In the ball throwing example, we would be interested in the subject's perceptual uncertainty and their prior belief about the target location, their action variability, and the effort cost of throwing the ball. These parameters constitute a parameter vector θ. Formally, we want to compute the posterior p(θ | 𝒟) for a set 𝒟 of behavioral data, as shown in <ref> B.
Our proposed method consists of two parts. First, we approximate the optimal solution of the Bayesian decision-making problem with a neural network (<ref>). A forward-pass of the neural network is very fast compared to the computation of numerical solutions to the original Bayesian decision-making problem, and gradients of the optimal action w.r.t. the parameters of the model (uncertainties, priors, costs) can be efficiently computed. In the second part of our method, this allows us to utilize the neural network within a statistical model of an actor's behavior to perform inference about model parameters (<ref>).
§.§ Amortizing Bayesian decision-making using neural networks
Because the Bayesian decision-making problem stated in <ref> is intractable for general cost functions, we approximate it using a neural network
a^* ≈ f_ψ(θ, m), which
takes the parameters θ of the Bayesian model and the observed variable m as input and is parameterized by learnable parameters ψ.
It can then be used as a stand-in for the computation of the optimal action a^* in downstream applications of the Bayesian actor model, in our case to perform inference about the Bayesian actor's parameters.
§.§.§ Unsupervised training
We train the neural network in an unsupervised fashion by using the cost function of the decision-making problem as an unsupervised stochastic training objective.
After training, the neural network implicitly solves the Bayesian decision-making problem.
More specifically, we use the expected posterior loss
ℒ(ψ) = 𝔼_p(s | m)[ 𝔼_p(r | f_ψ(θ, m))[ ℓ(s, r) ] ]
as a training objective.
Because the inner expectation depends on the parameters ψ of the neural network, w.r.t. which we want to compute the gradient, we need to apply the reparameterization trick <cit.> and instead take the expectation over a distribution that does not depend on ψ,
ℒ(ψ) = 𝔼_p(s | m)[ 𝔼_p(ϵ)[ ℓ(s, r) ] ],
where r = g(f_ψ(θ, m), ϵ) with some appropriate transformation g. For example, in the perceptual decision-making model used later (<ref>), p(r | a) is log-normal with scale parameter σ_r, so we can sample ϵ∼𝒩(0, 1) and use the reparameterization g(ϵ, a) = exp(log(a) + ϵσ_r).
Now, we can easily evaluate the gradient of the objective using a Monte Carlo approximation of the two expectations,
∇_ψ ℒ(ψ) ≈ 1/K 1/N ∑_k=1^K ∑_n=1^N ∇_ψ ℓ(s_k, r_n),
which makes it possible to train the network using any variant of stochastic gradient descent.
In other words, the loss function used to train the neural network that approximates the optimal Bayesian decision-maker is simply the loss function of the underlying Bayesian decision-making problem. All that we require is a model in which it is possible to draw samples from the posterior distribution s_k ∼ p(s | m) and the response distribution r_n ∼ p(r | a).
This allows us to train the network in an unsupervised fashion, i.e. we only need a training data set consisting of parameters θ and inputs m, without the need to solve for the optimal actions beforehand. The procedure is summarized in <ref>. See <ref> for the prior distributions used to generate parameters during training.
We used the RMSProp optimizer with a learning rate of 10^-4, batch size of 256, and N = K = 128 Monte Carlo samples per evaluation of the stochastic training objective. The networks were trained for 500,000 steps, and we assessed convergence using an evaluation set of analytically or numerically solved optimal actions (see <ref>).
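As an illustration of this training scheme, the following is a minimal sketch of one evaluation of the stochastic objective for the log-normal response model with a quadratic cost. PyTorch is used here purely for illustration — it is not necessarily the stack used in this work — and `net` stands for the action network f_ψ(θ, m), whose output is assumed positive:

```python
import torch

def training_loss(net, theta, m, s_post, sigma_r, n_resp=128):
    """theta: (batch, n_params) model parameters, m: (batch,) measurements,
    s_post: (batch, K) samples from the actor's posterior p(s | m),
    sigma_r: (batch,) response-noise scale taken from theta."""
    a = net(theta, m)                                                  # intended action a = f_psi(theta, m), shape (batch,)
    eps = torch.randn(a.shape[0], n_resp)                              # reparameterization noise
    r = torch.exp(torch.log(a).unsqueeze(-1) + sigma_r.unsqueeze(-1) * eps)  # r ~ LogNormal(a, sigma_r)
    ell = (r.unsqueeze(1) - s_post.unsqueeze(-1)) ** 2                 # quadratic cost ell(s, r) over all (s_k, r_n) pairs
    return ell.mean()                                                  # Monte Carlo estimate of L(psi)

# loss = training_loss(net, theta, m, s_post, sigma_r); loss.backward(); optimizer.step()
```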
§.§.§ Network architecture
We used a multi-layer perceptron with 4 hidden layers and 16, 64, 16, 8 nodes in the hidden layers, respectively.
We used swish activation functions at the hidden layers <cit.>. As the final layer, we used a linear function with an output 𝐲∈ℝ^3, followed by a non-linearity
a^* = softplus(y_1 m^y_2 + y_3),
where m is the observation received by the subject. This particular non-linearity is motivated by the functional form of the analytical solution of the Bayesian decision-making problem for the quadratic cost function as a function of m and θ (<ref>) and serves as an inductive bias (<ref>).
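A sketch of this architecture, again in PyTorch for illustration only (the layer sizes, swish activations, and softplus output transform follow the description above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActionNet(nn.Module):
    def __init__(self, n_inputs):
        super().__init__()
        sizes = [n_inputs, 16, 64, 16, 8]
        self.hidden = nn.ModuleList([nn.Linear(i, o) for i, o in zip(sizes[:-1], sizes[1:])])
        self.out = nn.Linear(8, 3)                    # linear output y = (y_1, y_2, y_3)

    def forward(self, theta, m):
        x = torch.cat([theta, m.unsqueeze(-1)], dim=-1)
        for layer in self.hidden:
            x = F.silu(layer(x))                      # swish activation
        y = self.out(x)
        # a* = softplus(y_1 * m^{y_2} + y_3), the inductive bias from the quadratic-cost solution
        return F.softplus(y[:, 0] * m ** y[:, 1] + y[:, 2])
```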
§.§ Bayesian inference of model parameters
The graphical model in <ref> B illustrates the generative model of behavior in an experiment. Our goal is to infer p(θ | 𝒟), where 𝒟 = {s_i, r_i : i = 1, …, n} is a dataset of stimuli and responses.
We assume that for every stimulus presented to the subject, they receive a stochastic measurement m ∼ p(m | s). They then solve the Bayesian decision problem given above, i.e. they decide on an action a. We used the neural network a^* ≈ f_ψ(θ, m) to approximate the optimal action.
The chosen action a is then corrupted by action variability to yield a response r ∼ p(r | a).
To sample from the researcher's posterior distribution over the subject's model parameters p(θ | 𝒟), we use the Hamiltonian Monte Carlo algorithm NUTS <cit.>. This gradient-based inference algorithm can be used because the neural network is differentiable with respect to the parameters θ and sensory input m. The procedure is summarized in <ref>.
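The following sketch shows how this generative model of behavior could be written and sampled with NUTS; NumPyro is an assumption here (the text does not name the probabilistic-programming package), `net_apply` is a placeholder for the trained, JAX-compatible action network, and the prior choices follow the appendix (with σ fixed at 0.2 as in the synthetic-data evaluation):

```python
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def behavior_model(s, r_obs, net_apply, sigma=0.2):
    # researcher's priors over the subject's parameters (values as in the appendix)
    mu0 = numpyro.sample("mu0", dist.Uniform(0.1, 5.0))
    sigma0 = numpyro.sample("sigma0", dist.HalfNormal(0.25))
    sigma_r = numpyro.sample("sigma_r", dist.HalfNormal(0.25))
    with numpyro.plate("trials", s.shape[0]):
        m = numpyro.sample("m", dist.LogNormal(jnp.log(s), sigma))      # latent noisy measurement
        a = net_apply(mu0, sigma0, sigma, sigma_r, m)                   # amortized optimal action f_psi(theta, m)
        numpyro.sample("r", dist.LogNormal(jnp.log(a), sigma_r), obs=r_obs)

mcmc = MCMC(NUTS(behavior_model), num_warmup=5_000, num_samples=20_000, num_chains=4)
# mcmc.run(jax.random.PRNGKey(0), s=stimuli, r_obs=responses, net_apply=net_apply)
```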
§.§ Implementation
The method was implemented in <cit.>, using the packages <cit.> for neural networks and <cit.> for probabilistic models and sampling. Our implementation is provided in the supplementary material and will be made available on GitHub upon publication. Our software package enables the user to define new, arbitrary parametric families of cost functions and train neural networks to approximate the decision-making problem.
No special high-performance compute resources were needed, since all evaluations for this paper were run on standard laptop computers (e.g. with Intel Core i7-8565U CPU). Training a neural network for 500,000 steps took 10 minutes, and drawing 20,000 posterior samples for a typical dataset with 60 trials took 10 seconds.
§ RESULTS
We evaluate our method on a perceptual decision-making task with log-normal prior, likelihood and action distribution (<ref>), which we later combine with several cost functions.
For certain cost functions, this Bayesian decision-making problem is analytically solvable. We evaluate our method's posterior distributions against those obtained when using the analytical solution for the optimal action (<ref>).
Our evaluations allowed us to find possible identifiability problems between prior and cost parameters inherent to Bayesian actor models, which we analyze in more detail (<ref>).
Finally, we apply our method to real data from a sensorimotor task performed by humans and show that it explains the variability and biases in the data (<ref>).
§.§ Perceptual decision-making model
We now make the decision-making problem more concrete by introducing a log-normal model for the perceptual and action uncertainties, which captures the ball throwing example sketched in <ref> and shown in <ref>.
From the actor's perspective, we assume that sensory measurements are generated from a log-normal distribution m ∼ LogNormal(s, σ),
or equivalently that ln m ∼ 𝒩(ln s, σ). This assumption is motivated by Weber's law, i.e. that the variability scales linearly with the mean <cit.>, and by Fechner's law, i.e. that perception takes place on an internal logarithmic scale <cit.>. Thus, this model can be applied to a wide range of stimuli, such as time <cit.>, space <cit.>,
sound <cit.>, numerosity <cit.>, and others.
Posterior distribution
Assuming a log-normal prior s ∼ LogNormal(μ_0, σ_0) and a log-normal likelihood m ∼ LogNormal(s, σ), the posterior is
p(s | m) = LogNormal(μ_post, σ_post)
with σ_post^2 = (1/σ_0^2 + 1/σ^2)^-1 and μ_post = exp(σ_post^2 (ln μ_0/σ_0^2 + ln m/σ^2)). This can be shown by using the equations for Gaussian conjugate priors for a Gaussian likelihood in logarithmic space and then converting back to the original space.
Response distribution
Motivated by the idea of signal-dependent noise in actions <cit.>, we again use a log-normal distribution r ∼ LogNormal(a, σ_r)
as a probability distribution describing the variability of responses r given an intended action a. This assumption results in a linear scaling of response variability with the mean, which has been observed empirically <cit.>.
Analytical solution for quadratic costs
For certain cost functions, this formulation allows analytical solutions for the optimal action. Specifically, for quadratic costs ℓ(s, r) = (r - s)^2, the optimal action is
a^* = μ_post exp(1/2(σ_post^2 - 3 σ_r^2)).
For a derivation, see <ref>. This allows us to validate our method against a special case in which the optimal action can be computed analytically.
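As a small numeric illustration, the posterior parameters and the closed-form optimal action can be computed directly; the example call below uses the ground-truth values of the synthetic-data experiment and a hypothetical measurement m = 2.0:

```python
import numpy as np

def optimal_action_quadratic(m, mu0, sigma0, sigma, sigma_r):
    # posterior scale and (log-space) posterior location for the log-normal model
    sigma_post2 = 1.0 / (1.0 / sigma0**2 + 1.0 / sigma**2)
    mu_post = np.exp(sigma_post2 * (np.log(mu0) / sigma0**2 + np.log(m) / sigma**2))
    # closed-form optimal action for the quadratic cost
    return mu_post * np.exp(0.5 * (sigma_post2 - 3.0 * sigma_r**2))

print(optimal_action_quadratic(m=2.0, mu0=1.5, sigma0=0.15, sigma=0.2, sigma_r=0.1))
```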
§.§ Evaluation using synthetic data
We first evaluate our method on a case for which we know the analytical solution for the optimal action: the quadratic cost function (<ref>).
We generated a dataset of 60 pairs of stimuli s_i and responses r_i using the analytical solution for the optimal action with ground truth parameters μ_0 = 1.5, σ_0=0.15, σ=0.2, σ_r=0.1. The simulated data are shown in <ref> A, with a characteristic pattern of signal-dependent increase in variability, an overshot for low stimulus values and an undershot for higher stimulus values due to the prior.
We then computed posterior distributions for the model parameters using the analytical solution for the optimal action and using the neural network. We drew 20,000 samples from the posterior distribution in 4 chains, each warmed up with 5,000 burn-in steps. Note that, because the concrete values of σ and σ_0 are unidentifiable even when using the analytical solution for the optimal action (see <ref>), we kept σ fixed to its true value during inference. This was done to evaluate our method on a version of the model, for which the analytical solution as a gold standard produces reliable results.
<ref> B shows that both versions recover the true parameters, and the contours of both posteriors align well.
The posterior predictive distribution generated from the neural network posterior reproduces the pattern of variability and bias in the data (shaded region in <ref> A).
To ensure and quantitatively assess that the method works for a wide range of parameter settings, we then simulated 100 sets of parameters sampled uniformly (see <ref> for the choice of prior distributions). For each set of parameters, we simulated a dataset consisting of 60 trials. We then computed posterior distributions for each dataset in two ways: using the analytical solution for the optimal action and using the neural network to approximate the optimal action.
In both cases, we drew 20,000 samples (after 5,000 warm-up steps) from the posterior distribution in 4 chains and assessed convergence by checking that the R-hat statistic <cit.> was below 1.05. The mean squared errors between the posterior mean and the ground truth parameter value in <ref> C show that the inference method using the neural network recovers the ground truth parameters just as well as the analytical version. <ref> A&B additionally show the error as a function of the ground truth parameter value and <ref> shows the results from <ref> C numerically.
§.§ Limits of identifiability of costs and priors
We now apply our method to new cost functions, for which analytical solutions are not (readily) available. For example, we consider cost functions that incorporate the cost of the effort of actions. It is more costly to throw a ball at a longer distance due to the force needed to produce the movement, or it is more effortful to press a button for a longer duration. This can be achieved using a weighted sum of the squared distance to the target stimulus and the square of the response:
ℓ(s, r) = β (r - s)^2 + (1-β) r^2.
This cost function introduces another source of biases besides perceptual priors: people might undershoot stimuli at larger distances more (see <ref>).
Using our method, we can turn to investigating whether we can tease these different sources of biases apart.
<ref> A shows a pattern of behavior with an undershot with
ground truth parameters μ_0 = 2.95, σ_0=0.19, σ=0.14, σ_r=0.23, β=0.72.
The posterior distribution <ref> B shows that the undershot can either be attributed to a subject trying to avoid the mental or physical strain of larger effort, or to biased perception due to a low prior mean. This is difficult to disentangle, and shows in a correlated posterior for the effort cost parameter β and the prior mean μ_0. Over 100 simulated datasets with a range of different ground truth parameter values, the MSE between the inferred posterior mean and the ground truth is high when both μ_0 and β are unknown. Once we fix one of the confounding parameters at their true value and exclude them from the set of inferred parameters, we observe a considerable increase in accuracy of the inferred parameters (see <ref> C). Again, <ref> C&D show the error as a function of the ground truth parameter value and <ref> shows the results from <ref> C numerically.
Nevertheless, this unidentifiability is a property of the model itself and not a shortcoming of the inference method. In fact, our method opens up the possibility to investigate these properties of Bayesian actor models in the first place.
To demonstrate this, we derived an analytical solution for the quadratic cost with quadratic effort (see <ref>) and showed that the posteriors obtained using the neural network match those obtained with the analytical solution (see <ref> B).
We performed the same analysis for an asymmetric quadratic cost function, which can penalize overshots more than undershots, or vice versa:
ℓ(s, r) = 2 |α - 𝟙(r - s)| (r - s)^2, with 𝟙(x) = 1 if x ≥ 0 and 0 otherwise.
<ref> D shows an example of behavior generated using this cost function with μ_0 = 4.47, σ_0=0.17, σ=0.22, σ_r=0.23 and α = 0.29, which exhibits an undershot.
There is no analytical solution for the Bayesian decision-making problem with this cost function known to us. Nevertheless, ground truth parameters are accurately inferred (see <ref> E) and the same pattern of identifiability issues as for the previous cost function is observed (see <ref> F and <ref>).
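To make the two cost families above concrete, the following is a minimal sketch of how they can be written as plain Python functions; the indicator convention in the asymmetric cost follows the reconstruction given above, and parameter names are illustrative:

```python
import numpy as np

def quadratic_effort_cost(s, r, beta):
    """Quadratic error plus quadratic effort: beta*(r - s)^2 + (1 - beta)*r^2."""
    return beta * (r - s) ** 2 + (1.0 - beta) * r ** 2

def asymmetric_quadratic_cost(s, r, alpha):
    """Penalizes overshoots and undershoots differently via alpha."""
    indicator = (r - s >= 0).astype(float)
    return 2.0 * np.abs(alpha - indicator) * (r - s) ** 2
```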
§.§ Inference of human costs and priors
We applied our method to data of human participants from a previously published study of a bean bag throwing task <cit.>. 20 participants were asked to throw a bean bag at five different target distances from 3 to 11 feet (0.9 to 3.4 meters) with 2 feet increments (0.6 meters). The subjects received neither visual nor verbal feedback.
<ref> A shows data from two example participants from the study. The first participant's behavior is characterized by a tendency to undershoot far targets.
The second participant hits the target accurately on average, but has a higher variability.
We fit the model using the quadratic cost function with quadratic effort to the data by drawing 20,000 samples (after 5,000 warm-up steps) from the posterior distribution for both participants. The posterior distributions (<ref> B) show that the first participant's behavior can be either explained by an effort cost or a prior belief about targets (either low β or low μ_0),
but we cannot conclusively say which of the two is the case (cf. <ref>).
The second participant's relatively unbiased behavior is attributed to a comparatively small effort cost, with β being very close to 1. The action variability σ_r is higher compared to the first participant.
The posterior mean cost function is visualized in the two-dimensional target-response space in <ref> C, where the color indicates the cost of performing a response r when the target is s. For the first participant, the cost function exhibits an asymmetry: for targets that are further away, the minimum of the cost function is shifted towards shorter responses. The second participant's cost function, on the other hand, is symmetric around the target.
§ DISCUSSION
First, we have presented a new framework for Bayesian inference about the parameters of Bayesian actor models. Computing optimal actions in these models is intractable for general cost functions and, therefore, previous work has often focused on cost functions with analytical solutions, or derived custom tools for specific tasks. We propose an unsupervised training scheme for neural networks to approximate Bayesian actor models with general parametric cost functions.
The approach is extensible to cost functions with other functional forms suited to particular tasks.
Second, performing inference about the parameters of Bayesian actor models given behavioral data is computationally very expensive because each evaluation of the likelihood requires solving the decision-making problem. By plugging in the neural network approximation, we can perform efficient inference.
Third, a very large number of tasks involve continuous responses, including economic decision-making (‘how much would you wager in a bet?’), psychophysical production (‘hit the target’), magnitude reproduction (‘reproduce the duration of a tone’), sensorimotor tasks (‘reproduce the force you felt’), and cross-modality matching (‘adjust a sound to appear as loud as the brightness of this light’). Therefore we see very broad applicability of our method in the behavioral sciences including cognitive science, psychology, neuroscience, sensorimotor control, and behavioral economics.
Fourth, over the last few years, a greater appreciation of the necessity for experiments with continuous responses has developed <cit.>. Such decision problems are closer to natural environments than discrete forced-choice decisions, which have historically been the dominant approach. We provide a statistical method for analyzing continuous responses.
Finally, by inferring what a subject’s decisions were optimal for instead of postulating optimality, we conceptually reconcile normative and descriptive models of decision-making. Thus, we model the researcher’s uncertainty about the subject’s decision-making parameters explicitly.
Once we move beyond quadratic cost functions, identifiability issues between prior and cost parameters can arise. As recognized in other work as well <cit.>, these identifiability issues in Bayesian models have implications for how experiments should be designed. We have shown that, when either priors or costs are known, the identifiability issue vanishes. Based on our results, we recommend experiments with multiple conditions, between which one can assume either priors or costs to stay fixed, in order to disentangle their effects.
Our methodology should prove particularly useful for investigating task configurations that lead to or avoid such unidentifiabilities.
Limitations The strongest limitation we see with the present approach is that we require a model of the perceptual problem that allows drawing samples from the observer's posterior distribution, which works for stimuli that can be described well by these assumptions, e.g. magnitude-like stimuli with log-normal distributions considered in our experiments. Ideally, we would like a method that can be applied to perceptual stimuli for which these assumptions do not hold (e.g. circular variables) or more complex cognitive reasoning tasks. We see a potential to extend our method by learning an approximate posterior distribution together with the action network. This idea is closely related to loss-calibrated inference <cit.>, an approach that learns variational approximations to posterior distribution adapted to loss functions. We will explore this connection more in future work.
Broader impacts Methods for inferring beliefs and costs from behavioral data have potential for both positive and negative societal impact. We see a great benefit for behavioral research from methods that provide estimates of people's perceptual, motor, and cognitive properties from continuous action data, particularly in clinical applications where such tasks are easier for patients than classical forced-choice tasks. On the other hand, methods to infer people's uncertainties and costs could potentially be used in harmful ways, especially if they are used without the subjects' consent. The inference methods presented here are applicable to controlled experiments,
and are far from scenarios with harmful societal outcomes. Still, applications of these methods should of course take place with the subjects' informed consent and the oversight of ethical review boards.
We thank Nils Neupärtl for initial work on this project idea.
We acknowledge the suggestion by an anonymous NeurIPS reviewer to try unsupervised training.
We thank Zili Liu and Chéla Willey for sharing their beanbag throwing data.
This research was supported by the European Research Council (ERC; Consolidator Award “ACTOR”-project number ERC-CoG-101045783)
and
by the 'The Adaptive Mind', funded by the Excellence Program of the Hessian Ministry of Higher Education, Science, Research and Art.
We gratefully acknowledge the computing time provided to us on the high-performance computer Lichtenberg at the NHR Centers NHR4CES at TU Darmstadt.
§ ALGORITHM
§ DERIVATIONS OF OPTIMAL ACTIONS
§.§ Quadratic cost
One cost function for which the Bayesian decision-making problem under a log-normal observation model and a log-normal response model (<ref>) can be solved in closed form is the quadratic function. In that case, we can write the expected loss as
𝔼[ℓ(s, r)] = 𝔼[(r - s)^2]
= 𝔼[s^2] - 2𝔼[s]𝔼[r] + 𝔼[r^2],
assuming independence between s and r.
Using the moment-generating function of the log-normal distribution, we can evaluate these expectations as
𝔼[ℓ(s, r)] = e^{2 ln μ_post + 2 σ_post^2}
- 2 e^{ln μ_post + σ_post^2/2} e^{ln a + σ_r^2/2} + e^{2 ln a + 2 σ_r^2}.
By differentiating with respect to a,
∂/∂a 𝔼[ℓ(s, r)] = 2 e^{2 σ_r^2} a - 2 μ_post e^{1/2 (σ_r^2 + σ_post^2)}
and setting to zero, we obtain the optimal action
a^* = μ_post exp(1/2(σ_post^2 - 3 σ_r^2)).
Inserting the posterior mean (<ref>), we obtain
a^* = exp(σ_post^2 (ln μ_0/σ_0^2 + ln m/σ^2)) exp(1/2(σ_post^2 - 3 σ_r^2))
= μ_0^{σ_post^2/σ_0^2} exp(1/2(σ_post^2 - 3 σ_r^2)) m^{σ_post^2/σ^2}.
§.§ Quadratic cost with quadratic action effort
We now consider a class of cost functions of the form
ℓ(s, r) = β (r - s)^2 + (1 - β) r^2.
This allows us to write the expected cost as
𝔼[ℓ(s, r)] = 𝔼[β(r - s)^2 + (1-β) r^2]
= β 𝔼[(r - s)^2] + (1-β) 𝔼[r^2].
Using the previously obtained result of <ref>, we can write this as
𝔼[ℓ(s, r)] = β( e^{2 ln μ_post + 2 σ_post^2}
- 2 e^{ln μ_post + σ_post^2/2} e^{ln a + σ_r^2/2} + e^{2 ln a + 2 σ_r^2} ) + (1-β) e^{2 ln a + 2σ_r^2}.
Differentiating
∂/∂a 𝔼[ℓ(s, r)] = 2 e^{2σ_r^2} a - 2 β μ_post e^{1/2 (σ_r^2 + σ_post^2)}
and setting to zero as well as inserting the posterior mean (<ref>) yields
a = β μ_post exp(1/2(σ_post^2 - 3 σ_r^2))
= β exp(w_0 ln μ_0 + σ_post^2/2 - 3σ_r^2/2) m^{w_m}
with w_0 = σ_post^2/σ_0^2 and w_m = σ_post^2/σ^2.
§ HYPERPARAMETERS AND OTHER METHODS DETAILS
§.§ Parameter prior distributions
We use relatively wide priors to generate the training data for the neural networks to ensure that they accurately approximate the optimal action over a wide range of possible parameter values:
* σ∼𝒰(0.01, 0.5)
* σ_r ∼𝒰(0.01, 0.5)
* σ_0 ∼𝒰(0.01, 0.5)
* μ_0 ∼𝒰(0.1, 7.0)
During inference, we use narrower, but still relatively uninformed prior distributions, to avoid the regions of the parameter space on which the neural network has not been trained:
* σ∼Half-Normal(.25)
* σ_r ∼Half-Normal(.25)
* σ_0 ∼Half-Normal(.25)
* μ_0 ∼𝒰(0.1, 5.0)
To evaluate the inference procedure, we use parameters sampled from priors, which correspond to the actual parameter values that we would expect in a behavioral experiment:
* σ∼𝒰(0.1, 0.25)
* σ_r ∼𝒰(0.1, 0.25)
* σ_0 ∼𝒰(0.1, 0.25)
* μ_0 ∼𝒰(2.0, 5.0)
In contrast to the sensorimotor parameters, we kept the same priors for cost parameters during training of the neural network, inference and evaluation:
* Quadratic cost with linear effort: β∼𝒰(0.5, 1.0)
* Quadratic cost with quadratic effort: β∼𝒰(0.5, 1.0)
* Asymmetric quadratic cost: α∼𝒰(0.1, 0.9)
§.§ Evaluation dataset
To assess convergence of the neural networks, we generated an evaluation dataset consisting of 100,000 parameter sets and optimal solutions of the Bayesian decision-making problem for each cost function. If an analytical solution was available (e.g. quadratic cost), we computed the optimal action analytically. If there was no analytical solution known to us, we solved the Bayesian decision-making problem numerically. Specifically, we computed Monte Carlo approximations of the posterior expected loss
ℒ(a) =
𝔼_p(s | m)[ 𝔼_p(r | a)[ ℓ(s, r) ] ]
≈ 1/K 1/N ∑_k=1^K ∑_n=1^N ℓ(s_k, r_n)
with N = K = 10,000 and used the BFGS optimizer to solve for the optimal action a^*.
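A sketch of such a numerical reference solution is given below; scipy.optimize is an assumption (the text does not name the optimizer implementation), the sample size is reduced relative to the 10,000 samples used here, and the optimization is carried out over log a to keep the action positive:

```python
import numpy as np
from scipy.optimize import minimize

def numerical_optimal_action(mu_post, sigma_post, sigma_r, cost, n=2_000, seed=0):
    rng = np.random.default_rng(seed)
    s = np.exp(np.log(mu_post) + sigma_post * rng.standard_normal(n))   # samples from the posterior over s
    eps = rng.standard_normal(n)                                        # frozen response noise (common random numbers)

    def expected_loss(log_a):
        r = np.exp(log_a[0] + sigma_r * eps)                            # r ~ LogNormal(a, sigma_r)
        return cost(s[:, None], r[None, :]).mean()                      # double Monte Carlo sum over (s_k, r_n)

    res = minimize(expected_loss, x0=np.array([np.log(mu_post)]), method="BFGS")
    return float(np.exp(res.x[0]))

# a_star = numerical_optimal_action(1.5, 0.12, 0.1, lambda s, r: (r - s) ** 2)
```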
§.§ Inductive bias for the neural network
We know the closed-form solution for the optimal action a^* as a function of the parameters for the quadratic cost function (<ref>). Therefore, we assume that the parametric form of the optimal action as a function of the sensory measurement will not be substantially different for other cost functions, although the specific dependence on the parameters will vary.
We rewrite the optimal action for the quadratic loss (<ref>) as
a^* = f_1(θ) m^{f_2(θ)} + f_3(θ).
To allow for additive biases as well, we have added an additive term, which is zero for the quadratic cost function.
§ ADDITIONAL RESULTS
§.§ Inference with both perceptual and prior uncertainty as free parameters
During evaluation of our method, we found identifiability issues inherent to Bayesian actor models between the prior uncertainty σ_0 and the sensory variability σ.
In order to produce the same behavior as the one observed, only the ratio σ_0/σ needs to be inferred correctly (see diagonal correlation in the joint posterior of the two parameters in <ref> A) – the absolute magnitude of each of the two parameters does not substantially influence the resulting behavior and thus the solution to the true values conditioned on the observed behavior is an underdetermined problem. This intuitively makes sense, since the resulting behavior is largely shaped by how much influence prior and sensory information have, which is weighted by σ_0 and σ, respectively.
We were able to considerably improve accuracy in the inference of these two confounding parameters by fixing one of them at their true value (see <ref> B). Therefore, we decided to keep the perceptual uncertainty σ fixed when probing our method since this corresponds to psychophysically measuring σ prior to applying our method and fixing it at the measured value.
§.§ Accuracy of inference with amortized optimal actions
§.§.§ Comparison of posterior means and standard deviations
To assess the accuracy of the inference with amortized optimal actions, we used the data sets also shown in <ref> C and <ref> C. We compared means and standard deviations of the posteriors obtained with the analytical solutions for the optimal actions a^* or with the neural network as approximation for the subject's decision-making. The neurally amortized inference accurately produces very similar posteriors to the ones obtained using the analytical solution, as shown in <ref>.
§.§.§ Mean squared errors
We additionally summarize the mean squared errors visualized in <ref> C (<ref>), <ref> C (<ref>), and <ref> F (<ref>).
Pricing American Options using Machine Learning Algorithms
Prudence Djagba, Callixte Ndizihiwe
September 5, 2024
§ ABSTRACT
This study investigates the application of machine learning algorithms, particularly in the context of pricing American options using Monte Carlo simulations. Traditional models, such as the Black-Scholes-Merton framework, often fail to adequately address the complexities of American options, which include the ability for early exercise and non-linear payoff structures. By leveraging Monte Carlo methods in conjunction with the Least Squares Method (LSM), machine learning techniques are applied with the aim of improving the accuracy and efficiency of option pricing. The study evaluates several machine learning models, including neural networks and decision trees, highlighting their potential to outperform traditional approaches. The results from applying machine learning algorithms within LSM indicate that integrating machine learning with Monte Carlo simulations can enhance pricing accuracy and provide more robust predictions, offering significant insights into quantitative finance by merging classical financial theories with modern computational techniques. The dataset was split into features and the target variable representing bid prices, with an 80-20 train-validation split. LSTM and GRU models were constructed using TensorFlow's Keras API, each with four hidden layers of 200 neurons and an output layer for bid price prediction, optimized with the Adam optimizer and MSE loss function. The GRU model outperformed the LSTM model across all evaluated metrics, demonstrating lower mean absolute error, mean squared error, and root mean squared error, along with greater stability and efficiency in training.
keywords: Machine Learning, American Options, Monte Carlo Simulations, Least Square Method, Neural Networks, Option Pricing, LSTM, GRU.
§ INTRODUCTION
The pricing of American options is a complex task due to their early exercise feature. This study explores the use of machine learning models to enhance the Least Squares Monte Carlo (LSM) method for pricing American options. An option is a financial derivative that gives the buyer the right to buy or sell an underlying asset upon paying a premium. A call option gives the buyer the right to buy an asset while a put option gives the buyer the right to sell an asset <cit.>. Option pricing is a crucial aspect of financial markets and has undergone extensive study and development. Accurate options valuation is essential for investors, traders, and financial institutions to make informed decisions and effectively manage risk. In recent years, advances in computational techniques and the availability of large datasets have paved the way for applying machine learning algorithms in option pricing. Machine learning techniques, particularly deep learning, can enhance option pricing models by capturing complex patterns and relationships in market data <cit.>. Unlike traditional models, which rely on predetermined formulas and assumptions, machine learning algorithms can adapt to changing market conditions and incorporate a wider range of input variables, leading to more accurate and robust pricing predictions <cit.>. By leveraging historical market data, machine learning algorithms can learn from past pricing dynamics and adapt to changing market conditions, thereby enhancing option pricing accuracy. The pricing of financial derivatives, particularly options, has been the subject of significant research and development in the field of quantitative finance. Traditional option pricing models, such as the Black-Scholes model <cit.>, have provided valuable insights into the valuation of European options. However, pricing American options, which allow for early exercise, presents unique challenges due to their non-linear payoff structure.
Accurate option pricing is crucial for the stability and efficiency of financial markets. However, investors, traders, and financial institutions often face challenges in determining precise prices for financial derivatives. These challenges can result in suboptimal decision-making and increased financial risk. The pricing of American options is particularly problematic due to their allowance for early exercise, which adds a layer of complexity that traditional models struggle to address. This complexity necessitates the exploration of alternative approaches that can provide more accurate and reliable pricing. To address these challenges, this research aims to employ Least Square Monte Carlo (LSM) methods combined with machine learning models. By leveraging these advanced techniques, we aim to better understand complex market patterns and improve the accuracy of American option pricing. The study of machine learning applications in option pricing is significant for several reasons. Firstly, it has the potential to enhance the development of financial markets by providing more accurate pricing models. Improved pricing models can lead to better decision-making and risk management for financial professionals, thereby contributing to the overall stability and efficiency of the financial system. Secondly, this research addresses the limitations of traditional pricing models, such as their reliance on predetermined assumptions and inability to adapt to changing market conditions. By integrating machine learning with Monte Carlo simulations, this study aims to develop models that are more flexible and capable of capturing the complexities of financial markets. Furthermore, the findings of this study could provide valuable insights into the factors that influence option prices, thereby advancing our understanding of financial markets and improving the tools available for quantitative finance.
§ LITERATURE REVIEW
Previous studies have applied various techniques to price American options. The LSM method, introduced by Longstaff and Schwartz (2001), is widely used due to its flexibility and accuracy. Recent advancements in machine learning have shown promise in improving the estimation of continuation values in the LSM algorithm.
§.§ Risk Neutral Pricing
Risk neutral pricing is a fundamental concept in financial mathematics used to evaluate the fair value of derivatives contracts, such as options, in a manner that accounts for risk without introducing arbitrage opportunities <cit.>. This approach relies on the assumption that investors are indifferent to risk when pricing financial assets, allowing for a simplified valuation framework <cit.>.
Consider a model economy with n_S risky assets S_i and a risk-free asset β. The risk-free asset β follows the differential equation
dβ(t) = β(t)r(t)dt,
where the risk-free interest rate is denoted by r(t). The stochastic differential equation governing the risky assets S_i in the real-world measure P is as follows:
dS_i(t) = S_i(t)(μ_i(t)dt + σ_i(t)dW^P_i),
where W^P_i are the components of an n_S-dimensional standard Brownian motion under P, σ_i(t) represents the asset's volatility, and μ_i(t) represents the asset-dependent drift term <cit.>.
The price V(S(t_0), t_0) of a European type contract in a complete market is determined by the expected value of the future price V(S(t_1), t_1) in relation to the risk-neutral measure Q <cit.>, given as follows:
V(S(t_0), t_0) = E^Q [ β(t_0)/β(t_1) V(S(t_1), t_1) ],
The equation (<ref>) ensures that there are no arbitrage opportunities in the market. The measure Q is unique under the assumption of an arbitrage-free market. In the risk-neutral measure Q, all the drift terms of the risky assets S_i are given by the risk-free interest rate r(t):
dS_i(t) = S_i(t) ( r(t)dt + σ_i dW^Q_i(t) ),
where σ_i is the asset's volatility, and W^Q_i(t) is a standard Brownian motion in the Q measure.
The relationship between the Brownian motions W^Q_i(t) and W^P_i(t) is given by:
dW^Q_i(t) = dW^P_i(t) + ν_i(t) dt,
where ν_i(t) satisfies μ_i(t) = r(t) + σ_i ν_i(t). Notably, the volatilities σ_i remain the same under this change of measure, allowing for the estimation of σ_i from real-world observations of the asset price processes S_i.
It is instantly evident from combining the differential equations for the risky and risk-free assets that the ratio S_i/β is a drift-free process <cit.>:
d( S_i(t)/β(t) ) = ( S_i(t)/β(t) ) σ_i dW^Q_i(t).
Risk-neutral pricing provides a powerful and widely applicable framework for valuing financial derivatives, offering simplicity and tractability in complex market environments. By incorporating the principles of risk neutrality, financial analysts can derive fair prices for derivatives contracts, facilitating informed investment decisions and risk management strategies.
§.§ European options
A European option is a form of options agreement where execution is restricted solely to its expiry date <cit.>. This means that investors cannot exercise the option prematurely regardless of any fluctuations in the price of the underlying security such as a stock. The execution of the call or put option is only permitted on the date of the option's maturity.
At time t, the cost of a European option is provided by
V(S(t), t) = 𝔼^Q [ h(S(T), T) exp( - ∫_t^T r(s)ds ) ],
where h(S, T) is the payoff of the option at maturity, e.g.,
h(S, T) =
max[S - K, 0], for a call option,
max[K - S, 0], for a put option,
with K the strike of the option.
§.§ American options
Options with an extra right for the contract holder are known as American options. The option may be exercised at any time prior to or on the day of expiration. Due to this additional right, an American option may be worth more than a European option. Since the holder can always wait and exercise at maturity, the American option can never be worth less than the corresponding European option; in certain situations, however, the extra right to exercise it early may allow for a higher payoff <cit.>.
An American option holder may exercise their right to do so at any time up until and including maturity, unlike holders of European options. Because of this qualitative difference, in the American context as compared to the European case, the option holder has more rights. An American option's pricing must be at least equal to that of a comparable European option.
Throughout the option's lifetime, an American option holder must continually monitor the price of the underlying asset and determine whether the immediate payoff from exercising the option at that moment exceeds the value of continuing to hold it.
It can be shown that the price V(S, t) of an American option is provided by
V(S(t), t) = sup_τ∈[t,T]𝔼^Q [ exp( -∫_t^τ r(s)ds ) h(S(τ)) ],
where the supremum is attained at the optimal stopping time
τ^* = inf{ τ∈[t,T] : V(S(τ), τ) ≤ h(S(τ)) },
i.e., the first time at which the option's price does not exceed the payoff the holder would receive by exercising at that moment.
§ METHODS
The LSM method involves simulating paths of the underlying asset prices and performing a backward induction to estimate option values. In this study, machine learning models, including XGBoost, LightGBM, and logistic regression, are integrated into the LSM framework to estimate the continuation value more accurately.
§.§ Least-Squares Monte Carlo Method
In the work <cit.>, Monte Carlo simulation is introduced into the financial domain and Monte Carlo techniques are used to price structured products.
These techniques provide an effective approximation of the option price, especially for multidimensional problems such as derivatives with multiple underlying assets. European options in particular benefit from this, but American options can also be priced rather effectively. The fundamental idea of the Monte Carlo technique for an option with payoff h is the following: a (large) sample of random paths of the underlying stochastic processes of equation (<ref>) is generated, and the price of the derivative is then determined by explicitly computing the expected value of the discounted payoff over these paths, as in equation (<ref>).
§.§ Simulating random paths
Solving the following system of coupled stochastic differential equations is necessary in order to simulate the paths of the underlying assets.
dS_i(t) = S_i(t) ( r(t)dt + σ_i dW_i^Q(t) ),
where r(t) is the deterministic interest rate and 𝐖 is a n-dimensional Q-Brownian motion with correlation matrix ρ_ij.
This can be accomplished most conveniently with the Euler-Maruyama approach; a thorough introduction can be found in <cit.>. For a time mesh t_j = t_0 + j dt, j = 1, …, n_step, with step size dt, simulating random paths essentially means computing
S_i(t + dt) = S_i(t) ( 1 + r(t)dt + √(dt)∑_j B_ij Z_j(t) ),
where Z_j(t) are independent standard normal random variables, and B_ij is derived from the Cholesky decomposition of the correlation matrix ρ_ij.
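A minimal sketch of this path simulation, assuming a constant interest rate and constant volatilities, with B obtained from the Cholesky decomposition of the correlation matrix ρ:

```python
import numpy as np

def simulate_paths(S0, r, sigma, rho, T, n_steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    B = np.linalg.cholesky(rho)                           # correlate the driving noise
    S = np.empty((n_paths, n_steps + 1, len(S0)))
    S[:, 0, :] = S0
    for j in range(n_steps):
        Z = rng.standard_normal((n_paths, len(S0)))       # independent standard normals
        dW = Z @ B.T                                      # correlated increments, Cov = rho
        S[:, j + 1, :] = S[:, j, :] * (1.0 + r * dt + np.sqrt(dt) * sigma * dW)
    return S

# paths = simulate_paths(S0=[100.0, 95.0], r=0.04, sigma=np.array([0.2, 0.4]),
#                        rho=np.eye(2), T=1.0, n_steps=25, n_paths=10_000)
```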
§.§ LSM for European Options
For a European option under the Black-Scholes model, the end-point of each path can be sampled exactly; this is the sole point that matters for the payoff, so no Euler-Maruyama time-stepping is required. We can calculate the value of the payoff at maturity for each path after generating an ensemble of random values for the underlying assets' value at maturity, S^j_i(T). The current option price can be obtained by averaging the discounted payoffs of all simulated paths, as illustrated in Figure <ref>. The standard deviation can be used to quantify the degree of price uncertainty.
§.§ The Least Squares Monte Carlo (LSM) Algorithm for American Options
To price an American option using Monte Carlo simulation for n underlying assets, a backward iteration algorithm is employed. The Least Squares Monte Carlo (LSM) algorithm is a method for pricing options by simulating potential future paths of the underlying asset's price and recursively working backward through time. It begins by generating random paths for the asset's price and setting the option's payoff at maturity based on the payoff function. Then, starting at maturity, it discounts future payoffs, performs regression analysis to estimate continuation values, and compares them to immediate exercise values to determine whether to exercise the option. This process iterates backward through time until the present. At the final time step, option values are discounted back to the present, and the option price is computed by averaging over all paths. LSM is particularly effective for pricing American-style options due to its ability to account for early exercise opportunities through regression analysis, making it a versatile and accurate approach for pricing complex derivatives.
The continuation value c(𝐒) (as a function of the underlying asset price) is expanded at each time step t_i in terms of a function basis ψ_j.
Note that computing an option's continuation value exactly is computationally expensive.
Consequently, the continuation value at each time step is approximated across all paths using a least squares regression,
c(𝐱, t_i) = 𝔼[ V(S(t_i+1)) |𝐒(t_i) = 𝐱] = ∑_k=0^n_orderβ_k ψ_k(𝐱),
where the expansion coefficients β_k are obtained by a least squares fit to the (discounted) values of the option at the next time step:
β = ( B_ψψ)^-1 B V_ψ,
where B_ψψ, B V_ψ at time step t_i are given by
(B V_ψ)_ℓ = 𝔼[ V(S(t_i+1)) ψ_ℓ(S(t_i)) ],
(B_ψψ)_kℓ = 𝔼[ ψ_k(S(t_i)) ψ_ℓ(S(t_i)) ],
and the expectation is over the ensemble of paths.
In step 6b in Figure <ref>, the price is chosen to replicate the early exercise decision,
V_j(t_i) = h(S_j(t_i)),
for all paths j where h(S_j(t_i)) > e^- ∫_t_i^t_i+1 r(s)ds V_j(t_i+1). In contrast, some papers such as <cit.> assume that the non-exercised value is the continuation value,
V_j(t_i) = max{ c(S_j(t_i)), h(S_j(t_i)) }.
Its disadvantage is that the sampling error is compounded by the difference between c and h,
where
* t_i: Represents each time step in the option pricing process.
* c(𝐱, t_i): The continuation value of the option at time t_i, where 𝐱 denotes the underlying asset price.
* h: The payoff function of the option.
* V(S(t_i+1)): The value of the option at the next time step t_i+1 given the asset price S(t_i+1).
* ψ_j: A function basis used for expansion.
* β_k: Expansion coefficients obtained through a least squares fit.
* n_order: The order of the expansion.
* B_ψψ: The matrix of expectations of the product of basis functions.
* B V_ψ: The vector of expectations of the product of the option value and basis functions.
Figure <ref> shows the American option pricing algorithm, detailing the Longstaff-Schwartz method's use of least squares regression in step 6 to calculate the continuation value c. The solution of equation (<ref>) to obtain equation (<ref>) is explained in detail in steps 6a and 6b in the purple boxes <cit.>.
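The following is a compact sketch of this backward induction for an American put on a single asset, using a polynomial basis for the regression step; it follows the structure described above rather than the exact implementation behind Figure <ref>:

```python
import numpy as np

def lsm_american_put(paths, K, r, T, degree=5):
    """paths: array of shape (n_paths, n_steps + 1) of simulated asset prices."""
    n_paths, n_cols = paths.shape
    n_steps = n_cols - 1
    dt = T / n_steps
    disc = np.exp(-r * dt)
    V = np.maximum(K - paths[:, -1], 0.0)                  # payoff at maturity
    for i in range(n_steps - 1, 0, -1):
        V *= disc                                          # discount next-step values back to t_i
        S = paths[:, i]
        exercise = np.maximum(K - S, 0.0)
        itm = exercise > 0                                 # regress on in-the-money paths only
        if itm.any():
            coeffs = np.polyfit(S[itm], V[itm], degree)    # least squares fit of the continuation value
            continuation = np.polyval(coeffs, S[itm])
            ex_now = exercise[itm] > continuation          # early exercise decision
            V[np.where(itm)[0][ex_now]] = exercise[itm][ex_now]
    return disc * V.mean()                                 # discount from t_1 to t_0 and average
```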
§.§ Machine Learning Methods based on LSM
This section details how various machine learning models can be integrated into the Least Squares Monte Carlo (LSM) algorithm to enhance the pricing of American options. The machine learning models discussed include XGBoost, LightGBM, logistic regression, k-nearest neighbors (kNN), decision tree, and random forest.
In the LSM algorithm in Figure (<ref>) for pricing American options, machine learning models are primarily involved in Step 6a, where the continuation value is estimated through regression. Traditionally, this step uses linear regression to estimate the relationship between the current state variables and the future payoffs. However, integrating machine learning models such as XGBoost, LightGBM, logistic regression, k-nearest neighbors (kNN), decision trees, and random forests can significantly enhance this process. These models are trained on the simulated paths and their corresponding discounted payoffs V_j(t_i+1), allowing them to capture complex, non-linear relationships in the data. By doing so, they can provide more accurate predictions of the continuation value c_j for each path j. For example, XGBoost and LightGBM are gradient-boosting models that can handle large datasets with intricate interactions, while decision trees and random forests can model non-linear relationships effectively.
Once the machine learning model is trained, it replaces the traditional linear regression fit B_ψψ β = B V_ψ used to calculate the regression coefficients β. In this enhanced approach, the model predicts the continuation value c_j for each path based on the state variables S_j(t_i). These predicted continuation values are then used to determine whether to exercise the option or to continue holding it (Step 7). If the payoff from exercising the option is greater than the predicted continuation value, the option is exercised; otherwise, it is not. By leveraging machine learning models in this critical step, the LSM algorithm can achieve more accurate and robust option pricing, as these models can generalize better to complex and high-dimensional state spaces than traditional linear regression.
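As a sketch of this substitution, any regressor exposing fit/predict can stand in for the least squares fit in step 6a; the random forest below is only one example, and the helper name is illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def continuation_values(S_t, discounted_V_next, model=None):
    """Fit a regressor of discounted next-step option values on the current state
    variables and return the predicted continuation value for every path."""
    model = model or RandomForestRegressor(n_estimators=100, random_state=0)
    X = np.asarray(S_t).reshape(len(S_t), -1)              # works for one or several underlying assets
    model.fit(X, discounted_V_next)
    return model.predict(X)

# inside the backward loop: c = continuation_values(S[:, i], disc * V); exercise where h > c
```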
§.§ Recurrent Neural Networks (RNNs)
Inspired by the architecture and operation of the human brain, neural networks (NNs) are a fundamental idea in machine learning. NNs are fundamentally made up of linked nodes arranged in layers. Data is received by input layers, information is processed by hidden levels, and output layers generate output. The capacity of NNs to learn from data and modify internal parameters (weights) during training to maximize performance is what gives them their strength <cit.>.
RNNs are a specific type of NN made to work with sequential data. They provide the idea of memory, which allows the network to remember data from earlier inputs. For jobs like pricing American options, where historical prices and market conditions might affect future decisions, this memory is essential <cit.>.
The detailed working principles of Recurrent Neural Networks (RNNs) are presented as follows:
* Sequential Methodology: RNNs, in contrast to conventional neural networks, are made to handle data sequences. They accomplish this by sequentially accepting inputs one at a time.
* Repeated Relationships: An RNN's recurrent connections are its primary characteristic. The network can maintain some sort of "memory" thanks to these links. The RNN processes the current input and a "hidden state" from the previous phase at each step in a sequence. Information gleaned from earlier inputs is contained in this hidden state.
* Hidden State: At every time step t, the hidden state h_t is updated using the prior hidden state h_t-1 as well as the new input x_t. Mathematically, this is commonly expressed as:
h_t = ϕ(W_hx x_t + W_hh h_t-1 + b_h),
where ϕ is a non-linear activation function, W_hx and W_hh are weight matrices, and b_h is a bias vector.
* Shared Weights: All time steps in an RNN use the same weights, or parameters. This increases the efficiency of the model and lowers the number of parameters since the same weights are applied to each input in the sequence.
§.§ Long Short-Term Memory (LSTM).
In the realm of Recurrent Neural Networks, Long Short-Term Memory (LSTM) networks are an advanced evolution designed to overcome the drawbacks of conventional RNNs, especially when addressing long-term dependencies <cit.>.
The detailed processes within a Long Short-Term Memory network are explained as follows:
* Advanced Memory Management: The LSTM unit, a sophisticated memory cell, is the distinguishing characteristic of the LSTM. This unit's distinct structure, which consists of several gates, allows it to retain information over long periods of time.
* Gating System: Three different kinds of gates are included in LSTMs, and each is essential to the network's memory management.
* Input Gate: Determines which values from the input should be used to modify the memory. Mathematically, the input gate i_t is defined as:
i_t = σ(W_ix x_t + W_ih h_t-1 + b_i),
where σ is the sigmoid activation function, and W_ix, W_ih, and b_i are the weights and biases for the input gate.
* Forget Gate: Decides what portions of the existing memory should be discarded. The forget gate f_t is given by:
f_t = σ(W_fx x_t + W_fh h_t-1 + b_f),
where W_fx, W_fh, and b_f are the weights and biases for the forget gate.
* Output Gate: Controls the output flow of the memory content to the next layer in the network. The output gate o_t is represented as:
o_t = σ(W_ox x_t + W_oh h_t-1 + b_o),
where W_ox, W_oh, and b_o are the weights and biases for the output gate.
* Cell State: The cell state C_t, which functions as a kind of conveyor belt running straight down the length of the network chain, is the fundamental component of LSTM. It guarantees that the network efficiently stores and retrieves significant long-term information while permitting information to flow essentially unaltered. The following updates the cell state:
C_t = f_t * C_t-1 + i_t * C̃_t,
where C̃_t is the candidate cell state, calculated as:
C̃_t = tanh(W_cx x_t + W_ch h_t-1 + b_c),
and tanh is the hyperbolic tangent activation function <cit.>.
§.§ Gated Recurrent Unit (GRU)
GRUs are a cutting-edge variant of recurrent neural networks that aim to enhance and streamline LSTM architecture. They provide a more efficient method of managing sequential data, and they work especially well in situations where long-term dependencies are essential <cit.>. Therefore, the detailed processes throughout the Gated Recurrent Unit are explained as follows:
* Simplified Architecture: In terms of processing resources, the GRU is more efficient due to its simpler structure compared to the LSTM. Its lower gate count accounts for this efficiency.
* Two gates are used by GRUs:
* Update Gate: The degree to which data from the previous state should be transferred to the present state is determined by this gate. It combines the forget and input gates that are present in LSTMs. We define the update gate z_t as follows:
z_t = σ(W_zx x_t + W_zh h_t-1 + b_z),
where W_zx, W_zh, and b_z are the weights and biases for the update gate.
* Reset Gate: It basically lets the model select how much of the past is useful for the current prediction by deciding how much of the past to ignore. Given is the reset gate r_t:
r_t = σ(W_rx x_t + W_rh h_t-1 + b_r),
where W_rx, W_rh, and b_r are the weights and biases for the reset gate.
* No Separate State for Cells: There is no distinct cell state in GRUs, in contrast to LSTMs. In doing so, they streamline the information flow and facilitate modeling and training by merging the cell state and concealed state into a single structure.
The hidden state h_t in a GRU is updated as follows:
h_t = (1 - z_t) * h_t-1 + z_t * h̃_t,
where h̃_t is the candidate hidden state, calculated as:
h̃_t = tanh(W_hx x_t + r_t * (W_hh h_t-1) + b_h).
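A sketch of how such stacked LSTM and GRU regressors can be built with TensorFlow's Keras API, following the description of four recurrent layers of 200 units, a single output for the bid price, the Adam optimizer, and the MSE loss; the input shape (timesteps and number of features) is a placeholder:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(cell, timesteps, n_features):
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(timesteps, n_features)))
    for i in range(4):
        # intermediate recurrent layers must return the full sequence
        model.add(cell(200, return_sequences=(i < 3)))
    model.add(layers.Dense(1))                   # predicted bid price
    model.compile(optimizer="adam", loss="mse")
    return model

lstm_model = build_model(layers.LSTM, timesteps=1, n_features=10)
gru_model = build_model(layers.GRU, timesteps=1, n_features=10)
```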
§.§ Description of Dataset
The data source used in this experimental investigation is a collection of historical data on all symbols in the U.S. equities markets from January to June 2013 (https://optiondata.org).
The given dataset provides detailed information on options contracts, encompassing various attributes crucial for options trading analysis. Bellow are explanations of each column:
* Contract: A unique identifier for each options contract, likely containing information about the underlying asset, expiration date, type (call or put), and strike price.
* Underlying: Indicates the underlying asset associated with the options contract.
* Expiration: The expiration date of the options contract.
* Type: Specifies whether the option is a call or a put.
* Strike: The strike price of the options contract.
* Style: Refers to the style of the options contract (e.g., American or European).
* Bid: The bid price of the options contract, representing the highest price a buyer is willing to pay.
* Bid Size: The size of the bid, indicating the quantity of contracts being bid for.
* Ask: The ask price of the options contract, representing the lowest price a seller is willing to accept.
* Ask Size: The size of the ask, indicating the quantity of contracts being offered.
* Volume: The trading volume of the options contract.
* Open Interest: The total number of outstanding options contracts.
* Quote Date: The date when the quote for the options contract was made.
* Delta: Delta measures the rate of change of the option's price in response to changes in the price of the underlying asset.
* Gamma: Gamma measures the rate of change in delta in response to changes in the price of the underlying asset.
* Theta: Theta measures the rate of decline in the value of the option over time.
* Vega: Vega measures the sensitivity of the option's price to changes in implied volatility.
* Implied Volatility: Implied volatility is the market's estimate of the future volatility of the underlying asset, as implied by the options prices.
In the analysis of our dataset, we focused on the numerical features to understand the relationships between them. To achieve this, we computed the correlation matrix, which quantifies the linear relationships between pairs of numerical variables. We visualized this correlation matrix using a heatmap, a powerful tool for identifying patterns and correlations within the data. The heatmap, annotated for clarity, uses a 'coolwarm' color palette to indicate the strength and direction of the correlations, with positive correlations shown in warm tones and negative correlations in cool tones. This visual representation helps in quickly identifying strong correlations, both positive and negative, among the numerical features, thereby providing insights that can guide further analysis and decision-making processes. The heatmap underscores the importance of certain variables and their interdependencies, which can be critical for predictive modeling and other statistical analyses.
The heatmap in Figure <ref> illustrates the correlation matrix of numerical features in the dataset, using the 'coolwarm' color palette to depict the strength and direction of correlations. Warm tones (red) indicate positive correlations, while cool tones (blue) indicate negative correlations. The diagonal elements show a perfect correlation of 1, as each feature is perfectly correlated with itself. Notable observations include a strong positive correlation (0.61) between 'strike' and 'vega', and a moderate positive correlation (0.37) between 'volume' and 'open_interest'. Conversely, 'strike' and 'theta' exhibit a moderate negative correlation (-0.25). Most feature pairs exhibit weak or no correlations, suggesting distinct underlying factors. This visualization aids in quickly identifying significant linear relationships, which is valuable for further analysis and decision-making.
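A sketch of this correlation analysis, assuming the options data has been loaded into a pandas DataFrame; the file name is a placeholder:

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv("options_2013.csv")              # placeholder file name
corr = df.select_dtypes(include="number").corr()  # correlations between numerical features
plt.figure(figsize=(10, 8))
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm", vmin=-1, vmax=1)
plt.tight_layout()
plt.show()
```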
§ RESULTS AND ANALYSIS
Tables <ref> and <ref> present the option prices and standard errors predicted by different machine learning models. The results indicate that models such as LightGBM and logistic regression outperform traditional linear regression in estimating continuation values.
§.§ Results of LSM with Different Machine Learning Models
The results presented in Table <ref> and <ref> offer valuable insights into the performance of LSM with machine learning algorithms in pricing American options.
To assess model performance, we used the numerical example presented in the work <cit.>.
In this numerical example, Table <ref> and <ref> report a range of numerical values for different parameter choices for different machine learning models. Throughout, we use K = 100, r = 0.04, T = 1, and take the same volatilities of both assets to be 0.2 or 0.4, with 10,000 paths and one basis function.
§.§ Impact of Volatility and Time to Maturity
In Table <ref> and <ref>, we can observe the influence of volatility (σ) and time to maturity (T) on option prices.
* Volatility (σ): Higher volatility generally leads to higher option prices across all models. For instance, when S(0) = 80 and T = 1, the KNN price increases from 24.37 (when σ = 0.2) to 30.87 (when σ = 0.4). The standard errors tend to increase with higher volatility, reflecting the increased uncertainty and complexity in pricing under these conditions.
* Time to Maturity (T): Longer maturities also result in higher option prices. For example, with S(0) = 80 and σ = 0.2, the KNN price increases from 24.37 (when T = 1) to 25.36 (when T = 2). The standard errors typically increase with longer maturities, indicating greater variability in the predictions as the time horizon extends.
Figure <ref> provides a visual comparison of the fit of the six approaches. Each model here uses 5 basis functions (polynomials up to degree 5). The x-axis shows the simulated stock price at a step in time and the y-axis shows the discounted option value from the next step. In Figure <ref>, we can see the polynomial fits for the following machine learning methods applied to LSM: KNN, Decision Tree, XGBoost, LightGBM, Logistic Regression, and Random Forest. Some of these methods provide a better visual fit; in particular, Logistic Regression provides a better visual fit here.
Let us now compare the pricing of an example American put option with the following details: S_0 = 100, K = 100, T = 1.0, r = 0.02, and σ = 0.4.
Monte Carlo simulation was performed with 25 time intervals and 10,000 paths. Table <ref> shows the results.
The computational times in Table <ref> reveal that simpler models like Decision Trees and Logistic Regression are the quickest, with times of 2.033 and 2.133 seconds respectively, indicating their suitability for scenarios requiring rapid computations. KNN also performs moderately well at 2.593 seconds. In contrast, ensemble methods such as Random Forest take significantly longer, with a computational time of 167.095 seconds, reflecting their higher complexity and resource demands. The gradient boosting methods, XGBoost and LightGBM, offer a balance between speed and performance, with times of 6.689 and 5.506 seconds respectively, making them efficient yet powerful options. This highlights a trade-off between computational efficiency and model complexity, guiding the choice of method based on specific needs for speed versus predictive accuracy.
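Timings of this kind can be reproduced in spirit with a loop such as the following, which reuses the lsm_price sketch above and simulates paths with the put-option parameters just stated; the hyperparameters shown are illustrative defaults, not necessarily the settings behind the reported numbers.

```python
import time
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
import lightgbm as lgb
import xgboost as xgb

# Geometric Brownian motion paths with S0=100, K=100, T=1, r=0.02, sigma=0.4,
# 25 time intervals and 10,000 paths, as in the example above.
rng = np.random.default_rng(0)
n_paths, n_steps = 10_000, 25
S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.4, 1.0
dt = T / n_steps
z = rng.standard_normal((n_paths, n_steps))
log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
paths = S0 * np.exp(np.concatenate([np.zeros((n_paths, 1)),
                                    np.cumsum(log_increments, axis=1)], axis=1))

models = {
    "Decision Tree": lambda: DecisionTreeRegressor(max_depth=5),
    "KNN": lambda: KNeighborsRegressor(n_neighbors=100),
    "Random Forest": lambda: RandomForestRegressor(n_estimators=100, n_jobs=-1),
    "XGBoost": lambda: xgb.XGBRegressor(n_estimators=100),
    "LightGBM": lambda: lgb.LGBMRegressor(n_estimators=100),
}

for name, factory in models.items():
    start = time.perf_counter()
    price, stderr = lsm_price(paths, K, r, dt, make_model=factory)
    elapsed = time.perf_counter() - start
    print(f"{name:>13}: price={price:.3f}  se={stderr:.3f}  time={elapsed:.3f}s")
```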
Below we examine the performance of various machine learning models used within the Longstaff-Schwartz Method (LSM) framework for pricing American options. The models evaluated are K-Nearest Neighbors (KNN), Decision Tree, XGBoost, LightGBM, Logistic Regression, and Random Forest. Each model is analyzed based on the estimated option price, standard error, execution time, and classification performance metrics (confusion matrix, precision, recall, F1-score, and ROC AUC). Table <ref> presents the results of the six models evaluated using several performance indicators for the in-time sample. These metrics include accuracy, AUC, PR-AUC, precision, recall, and F1-score. Among the six models evaluated, Logistic Regression has the highest scores across all metrics, achieving an accuracy score of 0.9995, an AUC score of 1.0000, a PR-AUC score of 1.0000, a precision score of 0.9990, a recall score of 1.0000, and an F1-score of 0.9995. This indicates that Logistic Regression is highly effective in pricing American options within the in-time sample, showing exceptional performance without apparent overfitting or class imbalance issues.
LightGBM is the next-best performer, with an AUC score of 0.8640. However, its accuracy score of 0.5078 and PR-AUC score of 0.5078, along with a precision score of 0.4870 and a perfect recall score of 1.0000, indicate a trade-off between precision and recall. This suggests that while LightGBM is good at identifying positive cases, it may produce more false positives compared to Logistic Regression. XGBoost provides robust performance with an AUC score of 0.8416 and a PR-AUC score of 0.8425. It has an accuracy score of 0.5078 and a precision score of 0.5078, coupled with a perfect recall score of 1.0000. The lower precision score compared to its recall indicates a higher likelihood of false positives, but its overall high AUC and PR-AUC scores reflect strong model reliability. Decision Tree offers a balanced performance with an accuracy score of 0.5599, an AUC score of 0.6041, and a PR-AUC score of 0.5813. It has a precision score of 0.5370 and a recall score of 0.9663, leading to an F1-score of 0.6904. This suggests that Decision Tree captures most positive cases effectively, although it has lower discriminative power compared to LightGBM and XGBoost. Random Forest shows moderate performance with an accuracy score of 0.5111, an AUC score of 0.6332, and a PR-AUC score of 0.6174. Its precision score of 0.5095 and recall score of 1.0000 lead to an F1-score of 0.6750. Like the Decision Tree, Random Forest provides balanced but lower overall performance compared to the boosting models. KNN also provides moderate performance with an accuracy score of 0.5101, an AUC score of 0.6677, and a PR-AUC score of 0.6588. It has a precision score of 0.5090 and a recall score of 1.0000, resulting in an F1-score of 0.6746. KNN's performance is similar to Random Forest, with high recall and moderate precision, making it suitable for simpler use cases. Overall, while Logistic Regression is the top performer across all metrics, other models like LightGBM and XGBoost also demonstrate effectiveness, particularly in terms of AUC and PR-AUC, highlighting their robustness in pricing American options.
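The per-model metrics in the table can be computed with scikit-learn as sketched below; y_true and y_prob stand for the binary labels (e.g., the exercise decision) and each model's predicted probabilities on the in-time sample, and both these names and the 0.5 threshold are assumptions about how the table was produced.

```python
from sklearn.metrics import (accuracy_score, roc_auc_score, average_precision_score,
                             precision_score, recall_score, f1_score)

def classification_metrics(y_true, y_prob, threshold=0.5):
    """Accuracy, AUC, PR-AUC, precision, recall and F1 for one model."""
    y_pred = (y_prob >= threshold).astype(int)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_prob),
        "pr_auc": average_precision_score(y_true, y_prob),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }
```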
§.§ ROC-AUC and Precision-Recall Curve Analysis
Figure <ref> shows the ROC-AUC curve, which depicts the ability of each model to distinguish between positive and negative samples. From the curve, it is evident that:
* Logistic Regression has the highest AUC score of 1.00, indicating perfect discriminatory ability.
* LightGBM follows with an AUC score of 0.86.
* XGBoost has an AUC score of 0.84.
* KNN and Random Forest have moderate AUC scores of 0.67 and 0.63, respectively.
* The Decision Tree model has the lowest AUC score of 0.60.
An AUC score above 0.8 generally indicates a good model, and hence, Logistic Regression, LightGBM, and XGBoost are considered better performers in distinguishing between classes.
Figure <ref> shows the Precision-Recall (PR) curve
which provides insights into the trade-off between Precision and Recall for each model. The PR-AUC scores are as follows:
* Logistic Regression again leads with a perfect PR-AUC score of 1.00.
* LightGBM has a PR-AUC score of 0.86.
* XGBoost follows closely with a PR-AUC score of 0.83.
* KNN, Random Forest, and Decision Tree have lower PR-AUC scores of 0.64, 0.61, and 0.58, respectively.
The higher the PR-AUC score, the better the model balances precision and recall, which is particularly useful for imbalanced datasets.
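Curves of this kind can be drawn for all six models with a short loop along the following lines; model_probs is a placeholder dictionary mapping each model name to its predicted probabilities on the common evaluation set, and y_true is the corresponding label vector.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import (roc_curve, precision_recall_curve,
                             roc_auc_score, average_precision_score)

fig, (ax_roc, ax_pr) = plt.subplots(1, 2, figsize=(12, 5))
for name, y_prob in model_probs.items():
    fpr, tpr, _ = roc_curve(y_true, y_prob)
    prec, rec, _ = precision_recall_curve(y_true, y_prob)
    ax_roc.plot(fpr, tpr, label=f"{name} (AUC={roc_auc_score(y_true, y_prob):.2f})")
    ax_pr.plot(rec, prec, label=f"{name} (PR-AUC={average_precision_score(y_true, y_prob):.2f})")
ax_roc.plot([0, 1], [0, 1], "k--", lw=0.8)   # chance diagonal
ax_roc.set(xlabel="False positive rate", ylabel="True positive rate", title="ROC curves")
ax_pr.set(xlabel="Recall", ylabel="Precision", title="Precision-Recall curves")
ax_roc.legend()
ax_pr.legend()
plt.tight_layout()
plt.show()
```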
§.§ Confusion Matrix Analysis
The confusion matrices in Figure <ref> provide a detailed breakdown of True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN) for each model. From these matrices:
* Logistic Regression achieves nearly perfect classification with minimal misclassifications.
* LightGBM and XGBoost show a good balance with high TP and TN counts.
* Decision Tree and Random Forest exhibit more misclassifications compared to the top performers.
* KNN shows the highest number of misclassifications, indicating it may not be as effective for this task.
Overall, Logistic Regression emerges as the top-performing model across most metrics, making it highly suitable for pricing American options. LightGBM and XGBoost also demonstrate strong performance, while KNN and Decision Tree are less effective.
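The confusion matrices themselves follow from the same thresholded predictions; one possible layout for the six models is sketched below, again with model_probs and y_true as placeholders and with class names that are only indicative.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

fig, axes = plt.subplots(2, 3, figsize=(14, 8))
for ax, (name, y_prob) in zip(axes.ravel(), model_probs.items()):
    cm = confusion_matrix(y_true, (y_prob >= 0.5).astype(int))
    ConfusionMatrixDisplay(cm, display_labels=["continue", "exercise"]).plot(ax=ax, colorbar=False)
    ax.set_title(name)
plt.tight_layout()
plt.show()
```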
§.§ Result of Recurrent Neural Network (RNN) on Dataset
The dataset containing various features related to financial options was loaded and split into features (X) and the target variable (y), representing bid prices. A train-validation split of 80-20 was employed to partition the dataset into training and validation sets.
§.§ Model Architecture
Using TensorFlow's Keras API, LSTM and GRU models were constructed with four hidden fully connected layers, each comprising 200 neurons, followed by an output layer with a single neuron for bid price prediction. Rectified Linear Unit (ReLU) activation functions were applied in all layers. The mean squared error (MSE) loss function was chosen to quantify prediction errors. Both models were optimized using the Adam optimizer with a learning rate of 0.001.
The models underwent training for 200 epochs with a batch size of 64. During training, validation set performance was monitored to prevent overfitting. Evaluation of the models was conducted using the mean squared error (MSE) metric on the validation set to assess predictive accuracy; the results are summarised in Table <ref>.
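A minimal Keras sketch consistent with this description is given below. The exact way the recurrent layer and the four 200-unit hidden layers are stacked in the original implementation is not fully specified in the text, so the arrangement here (one recurrent layer followed by four dense ReLU layers), the length-1 sequence shape, and the variable names are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(cell="gru", n_features=10, hidden=200, lr=1e-3):
    """Recurrent bid-price regressor following the description above."""
    rnn = layers.GRU if cell == "gru" else layers.LSTM
    model = models.Sequential([
        layers.Input(shape=(1, n_features)),      # each option row as a length-1 sequence
        rnn(hidden, activation="relu"),
        *[layers.Dense(hidden, activation="relu") for _ in range(4)],
        layers.Dense(1),                           # single-neuron bid-price output
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr), loss="mse")
    return model

# Training as described: 80-20 split, 200 epochs, batch size 64.
# X_train, X_val, y_train, y_val are placeholders for the prepared tensors.
# model = build_model("gru", n_features=X_train.shape[-1])
# history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
#                     epochs=200, batch_size=64)
```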
§.§ Training and Validation Loss Analysis
From the training and validation loss curves in Figures <ref> and <ref>, it is evident that both the GRU and LSTM models demonstrate significant reductions in loss over the training epochs. The GRU model shows more stability with fewer spikes in the validation loss compared to the LSTM model, which has a noticeable spike around the 35th epoch.
The GRU model's validation loss fluctuates less and maintains a relatively lower loss towards the end of the training, suggesting better generalization and stability. On the other hand, the LSTM model, while initially reducing loss effectively, exhibits more variability and occasional higher spikes, indicating potential overfitting or sensitivity to certain epochs.
§.§ Error Metrics Comparison
Looking at the error metrics provided in Table <ref>, the GRU model outperforms the LSTM model across all evaluated metrics. The GRU model achieves a lower Mean Absolute Error (MAE) of 0.49075 compared to 0.5919 for the LSTM model.
Similarly, the Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) are significantly lower for the GRU model, with values of 0.84277 and 0.9180 respectively, as opposed to the LSTM model's 1.7017 and 1.3045. This indicates that the GRU model not only fits the training data better but also predicts more accurately on the test data, making it a more reliable choice for this specific application. Additionally, the training time per epoch is shorter for the GRU model (6ms/step) compared to the LSTM model (7ms/step), indicating a more efficient training process. Therefore, the GRU model demonstrates superior performance in terms of both stability and predictive accuracy, making it a preferable choice over the LSTM model.
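The three error metrics can be obtained directly from the validation predictions, for example as follows, where y_val, X_val, and the fitted model are placeholders:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

def regression_errors(y_true, y_pred):
    mae = mean_absolute_error(y_true, y_pred)
    mse = mean_squared_error(y_true, y_pred)
    return {"MAE": mae, "MSE": mse, "RMSE": float(np.sqrt(mse))}

# e.g. regression_errors(y_val, model.predict(X_val).ravel())
```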
§ CONCLUSIONS
The study demonstrates that machine learning algorithms integrated with Monte Carlo simulations can effectively price American options, offering significant improvements over traditional methods. Through extensive experimentation, we found that models such as neural networks and other machine learning techniques provide more accurate pricing, especially in complex market conditions where traditional models struggle. Moreover, the study demonstrates the effectiveness of LSTM and GRU models in predicting bid prices for financial options, with the GRU model exhibiting superior performance. These results pave the way for further exploration and optimization of deep learning methods in financial forecasting, suggesting that such models can potentially outperform traditional approaches like LSM in specific applications. The comprehensive analysis and comparison provide valuable insights into the strengths and limitations of deep learning models, advocating for their integration into financial forecasting and option pricing frameworks. The findings suggest several future directions for research. One promising area is the integration of more sophisticated machine learning models, such as deep reinforcement learning and advanced neural network architectures, to further enhance pricing accuracy. Additionally, the development of hybrid models that combine the strengths of traditional financial theories and machine learning could provide even more robust solutions. Future work should also focus on improving model interpretability and addressing data scarcity issues by leveraging techniques such as transfer learning and data augmentation. Finally, real-world testing and validation of these models in live trading environments will be crucial for assessing their practical applicability and robustness. Again future research could focus on optimizing hyperparameters, exploring other deep learning architectures like Transformers, and incorporating additional features such as macroeconomic indicators. Data augmentation and synthetic data generation techniques can enhance model training, while real-time prediction systems would validate practical performance. By advancing the intersection of machine learning and financial modeling, this research opens up new possibilities for more accurate and efficient option pricing, ultimately contributing to better risk management and decision-making in financial markets.
|
http://arxiv.org/abs/2409.03212v1 | 20240905031641 | Bi-capacity Choquet Integral for Sensor Fusion with Label Uncertainty | [
"Hersh Vakharia",
"Xiaoxiao Du"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
Bi-capacity Choquet Integral for Sensor Fusion
with Label Uncertainty
This material is based upon work supported by the National Science
Foundation under Grant IIS-2153171-CRII: III: Explainable Multi-Source Data Integration with Uncertainty.
Hersh Vakharia
University of Michigan
Ann Arbor, MI
[email protected]
Xiaoxiao Du
University of Michigan
Ann Arbor, MI
[email protected]
========================================================================================================================================================================================================================================================
§ ABSTRACT
Sensor fusion combines data from multiple sensor sources to improve reliability, robustness, and accuracy of data interpretation. The Fuzzy Integral (FI), in particular, the Choquet integral (ChI), is often used as a powerful nonlinear aggregator for fusion across multiple sensors. However, existing supervised ChI learning algorithms typically require precise training labels for each input data point, which can be difficult or impossible to obtain. Additionally, prior work on ChI fusion is often based only on the normalized fuzzy measures, which bounds the fuzzy measure values between [0, 1]. This can be limiting in cases where the underlying scales of input data sources are bipolar (i.e., between [-1, 1]). To address these challenges, this paper proposes a novel Choquet integral-based fusion framework, named Bi-MIChI (pronounced “bi-mi-kee”), which uses bi-capacities to represent the interactions between pairs of subsets of the input sensor sources on a bi-polar scale. This allows for extended non-linear interactions between the sensor sources and can lead to interesting fusion results. Bi-MIChI also addresses label uncertainty through Multiple Instance Learning, where training labels are applied to “bags’’ (sets) of data instead of per-instance. Our proposed Bi-MIChI framework shows effective classification and detection performance on both synthetic and real-world experiments for sensor fusion with label uncertainty. We also provide detailed analyses on the behavior of the fuzzy measures to demonstrate
our fusion process.
bi-capacity, choquet integral, fuzzy measures, sensor fusion, label uncertainty, classification
§ INTRODUCTION
Sensors are all around us collecting data. Each sensor may provide complementary or reinforcing information that supports tasks such as target detection, classification, or scene understanding. The fuzzy integral (FI), in particular, the Choquet integral (ChI), has been used widely as a powerful non-linear aggregator for sensor fusion <cit.>. The ChI is based on fuzzy measures, or “capacities”, which are a set of real-valued parameters that represent the interactions between input sources and help determine the decision-making process.
Suppose we are fusing m sensor data sources, C={c_1,c_2,⋯,c_m}. The power set of C is denoted as 2^C, which contains all possible crisp subsets of C. The Choquet integral fusion relies on a set of fuzzy measures with one element per subset in 2^C, where each fuzzy measure element, or "capacity", corresponds to a subset of the sensor source combinations. As an example, if the fuzzy measure is denoted as 𝐠, the notation 𝐠_{1,2} indicates the contributions from the intersection of sensor input source 1 and source 2.
Two challenges exist for Choquet integral-based fusion methods in the literature. First, most prior work on ChI fusion adopts the normalized fuzzy measures <cit.>, which bounds the fuzzy measures between 0 and 1. To put it mathematically, the normalized fuzzy measure, 𝐠, is a real-valued function that maps 2^C → [0, 1].
This means that any combination of the input sources is always weighted non-negatively. This can be troublesome, as the normalized fuzzy measure may overlook background information provided by the sensor sources. Here is an example to illustrate this effect. Consider a binary classification application where ChI fusion is performed to detect humans/pedestrians in a scene. We want the human/pedestrian pixels to have target label "1" and everything else (background trees, buildings, and shadows, etc.) to have non-target label "0". Assume one of the input sensors is a shadow sensor that can detect shadows well (therefore, this sensor source has high values on shadow regions and low values on "human" pixels). If the normalized fuzzy measure is used, it will assign value 0 to the measure element associated with the shadow sensor (in other words, ignore the shadow sensor input completely), as it did not contribute to the positive target (human) class. However, we argue that the shadow sensor also carries useful information that should not be ignored. If we know where the shadows are, we can apply a negative weight on the shadows to mark the "non-human" regions, potentially improving the accuracy and removing shadow artifacts from the pedestrian detection results. Thus, it is important to explore extended fuzzy measures to accommodate and incorporate useful and complementary information from all sensor sources.
In this work, we propose the use of bi-capacities <cit.>, which are a generalization of fuzzy measures/capacities that are useful when data follows a bi-polar scale of [-1,1].
The bi-capacities naturally extend and generalize the ChI fusion to produce fusion results on a bipolar scale and can result in improved detection performance (as shown later in Experiments).
Second, existing supervised ChI fusion algorithms often require precise instance-level labels for training. However, accurate labels can often be difficult, if not impossible, to obtain. Let us consider the pedestrian detection application again. It can be tedious and expensive to label each pixel on an image, but it is very simple to draw a bounding box region around where a person may appear. Based on the Multiple Instance Learning (MIL) framework <cit.>, we name such a region a "bag", where each bag contains a set of pixels. The MIL framework has been explored in previous literature for fuzzy fusion with bag-level label uncertainties <cit.>. However, none of these works handles fuzzy measures outside the [0,1] range. This work is the first, to our knowledge, that addresses bi-capacity ChI fusion with bipolar sensor inputs and label uncertainty under the MIL framework.
The remainder of the paper is organized as follows. Section <ref> provides a review on related works and discusses prior work in ChI fusion and MIL formulation. Section <ref> presents the definition and properties of the bi-capacity ChI formulations. Section <ref> describes the proposed Bi-MIChI algorithm, its objective functions, and optimization strategies. Section <ref> presents experimental results of the proposed
algorithm on both simulated and real datasets for multi-sensor fusion. Section <ref> discusses the main findings, conclusions, and future work.
§ RELATED WORK
§.§ Bi-capacity & Choquet Integral Applications
Bi-capacities and bipolar capacities were developed in decision theory where the underlying scales are bipolar <cit.>. They extend the concept of capacities, or fuzzy measures, by representing interactions of pairs of subsets on a bipolar scale, which can model multidimensional and complex problems. Bi-capacities and the bipolar ChI have been explored previously in <cit.>. However, these works mainly use hand-crafted, synthetic examples as an illustration and did not handle data and label uncertainties in real sensing data. To our knowledge, no prior experiments using bi-capacities have been conducted on real data integration applications considering imprecise labels.
Bi-capacities have successfully been applied in multi-criteria decision making under uncertainty, where they can be used to aggregate positive and negative preferences <cit.>. Bi-capacities have also been used to perform complex, dynamic evaluations, such as evaluating regional eco-efficiency in <cit.>. However, they have yet to be applied in the realm of sensor fusion as a way to represent the complex relationship between the input sources to be fused.
The Choquet Integral, on the other hand, has been used as an effective nonlinear aggregation operation for many applications, such as landmine detection <cit.>, soil-type classification <cit.>, gesture recognition <cit.>, and pedestrian detection <cit.>. However, these applications do not utilize bi-capacities for modelling the complementary relationships between sources on a bipolar scale. This work will extend the ChI fusion to a bipolar scale using bi-capacities.
§.§ Multiple Instance Learning
The Multiple Instance Learning (MIL) framework was created to address the problem of label ambiguity and uncertainty in supervised learning <cit.>. Instead of requiring instance-level labels for supervised learning, MIL allows for uncertain labels for “bags” of instances. Bags are labeled positive if it contains at least one positive (“target”) instance and negative if all instances are negative (“non-target”). It has been successfully applied to a variety of applications, such as medical image and video analysis <cit.>, landmine detection <cit.>, scene segmentation <cit.>, and other remote sensing tasks <cit.>. This work aims to extend MIL for bi-capacity ChI sensor fusion to handle label uncertainties, where the input labels are at bag-level, instead of instance-level (per pixel) labels.
§.§ Multiple Instance Choquet Integral
This work extends the work of Multiple Instance Choquet Integral (MICI) for classification and regression <cit.>, which is a supervised ChI fusion framework under the MIL framework. One limitation of MICI is that it utilizes normalized fuzzy measures bounded between [0,1] and typically requires the input sensor data to be normalized between [0,1] as well. While MICI's normalized fuzzy measures can model some interactions between sources, the use of bi-capacities in this paper introduces new functionalities where some sensor source combinations can now be weighed negatively against others (which was not done before). This also leads to improved classification performance and enhanced interpretability. We also developed a new objective function to train the proposed Bi-MIChI algorithm and learn the bi-capacity values.
§ BI-CAPACITY CHI DEFINITIONS
Bi-capacities are a variety of capacities, or fuzzy measures, that are useful when data follows a bipolar scale <cit.>. When applied to fusion, they allow for interesting results where some sources, or combinations of sources, can be weighed negatively against others. Let C = {c_1, c_2, ..., c_m} denote the m sensor sources to be fused. A monotonic bi-capacity, 𝐠, is a real valued function that maps S(C) → [-1, 1], where S(C) = {(A,B): A ⊆ C, B ⊆ C, A ∩ B = ∅}. S(C) represents pairs of disjoint, crisp, subsets of C. It satisfies the following properties <cit.>:
* Boundary Conditions:
𝐠(∅, ∅) = 0, 𝐠(C, ∅) = 1, 𝐠(∅, C) = -1
* Monotonicity:
If A ⊆ E and B ⊇ F, then 𝐠(A, B) ≥𝐠(E, F)
If we compare these properties to those of the normalized fuzzy measure (e.g., see Definition 2.1 of <cit.>), we note the following specific properties for bi-capacities. First, the bi-capacities capture information of ternary importance from multi-source data and have the potential of extracting useful and complementary information on bipolar scales.
The sensor set C can be configured into 3^m disjoint subset pairs, with 3^m-3 non-boundary, learnable elements. Each element of the bi-capacity represents a weighting of the first subset “against” the second subset. This property allows for interesting fusion results, where some sources, or sets of sources, can be weighed negatively against other sources. Additionally, sources that have negative information can still be used to reinforce the final fusion result. In this paper, bi-capacity elements will be denoted with a subscript of its corresponding pair of subsets. For example, 𝐠_1,23 corresponds to subset pair ({c_1}, {c_2, c_3}). In this example, the value of bi-capacity element 𝐠_1,23 weighs c_1 against c_2 and c_3.
Second, to satisfy monotonicity, a bi-capacity increases with inclusion in the first subset and decreases with inclusion in the second subset. Fig. <ref> illustrates the monotonicity property of a bi-capacity for an example with three sources. As shown, bi-capacity elements 𝐠_123,∅ and 𝐠_∅,123 are the uppermost and lowermost bounds of +1 and -1, respectively. There are various paths between the bounds that satisfy monotonicity; one possible path is highlighted in red. As an example, take bi-capacity elements of 𝐠_12,3 and 𝐠_2,3, which correspond to subset pairs ({c_1,c_2}, {c_3}) and ({c_2}, {c_3}), respectively. The monotonicity property shows that 𝐠_12,3 is an upper bound of 𝐠_2,3, as {c_1,c_2}⊆{c_2} and {c_3}⊇{c_3}.
Bi-capacities with three sources can also be represented using a three-dimensional lattice structure, as shown in Fig. 1 in <cit.>. In this paper, the values of the bi-capacities are represented in matrix form, as shown in tables <ref> and <ref> in the experiments section. As seen in the tables, the bi-capacities create a repeating fractal pattern within the matrix <cit.>.
Given a bi-capacity, 𝐠, instance-level fusion is computed using the discrete Choquet Integral, which is well-defined for bi-capacities <cit.>. The bi-capacity Choquet Integral, 𝒞, on instance 𝐱_n can be computed as
𝒞_𝐠(𝐱_n) = ∑_k=1^m[ (|h(c_k; 𝐱_n)| - |h(c_k-1; 𝐱_n)|)
·𝐠(A_k ∩ C^+, A_k ∩ C^-) ],
where h(c_k; 𝐱_n) denotes the output of the k^th sensor source input, c_k, on the n^th data instance, 𝐱_n. C is sorted such that |h(c_1; 𝐱_n)| ≤ |h(c_2; 𝐱_n)| ≤ ... ≤ |h(c_m; 𝐱_n)|. Since there are m classifiers, h(c_0; 𝐱_n) is defined to be 0. When using the bi-capacity 𝐠 with the Choquet Integral, it is defined that A_k = {c_k, ..., c_m}, where C is sorted. Additionally, C^+ = {i ∈ C | h(c_i; 𝐱_n) ≥ 0} and C^- = C ∖ C^+.
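For reference, a direct implementation of this integral could look as follows. The representation of the bi-capacity as a dictionary keyed by pairs of disjoint index sets is our own choice for the sketch, not the authors' data structure.

```python
import numpy as np

def bicapacity_choquet(x, g):
    """Discrete Choquet integral of one instance x with respect to a bi-capacity g.

    x : 1-D array with the m sensor outputs h(c_k; x_n) for this instance.
    g : dict mapping (frozenset A, frozenset B) of disjoint source-index sets
        to values in [-1, 1], respecting the boundary and monotonicity conditions.
    """
    x = np.asarray(x, dtype=float)
    order = np.argsort(np.abs(x))               # sort so |x_(1)| <= ... <= |x_(m)|
    c_plus = frozenset(np.flatnonzero(x >= 0))  # C^+ = sources with nonnegative output
    prev = 0.0
    total = 0.0
    for k in range(len(x)):
        a_k = frozenset(order[k:])              # A_k = {c_(k), ..., c_(m)}
        weight = g[(a_k & c_plus, a_k - c_plus)]
        total += (abs(x[order[k]]) - prev) * weight
        prev = abs(x[order[k]])
    return total
```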
§ THE PROPOSED BI-MICHI
To utilize the Choquet Integral successfully for fusion, the non-boundary elements of the bi-capacity must be learned from a set of training data. In the MIL framework, data is organized in “bags” with uncertain and imprecise labels. Based on the MIL assumptions, bags are labeled positive if at least one instance in the bag is positive and negative if all instances in that bag are negative <cit.>.
Including or excluding the 𝐠_∅, ∅ = 0 bound can provide different results for the bi-capacity ChI. If this bound is removed, elements of 𝐠_∅, · can be positive and 𝐠_·, ∅ can be negative, which allows for an inversion effect on the data. In the following, we present two variations on the objective functions, one without and one with the 𝐠_∅, ∅ term, and describe our optimization strategies to learn the bi-capacities from the training data and bag-level labels.
§.§ Objective Function 1: without 𝐠_∅, ∅=0
The objective function for the negative and positive bags can be written as
J^- = ∑_a=1^B^-max_∀𝐱_ai^- ∈ℬ_a^-(𝒞_𝐠(𝐱_ai^-) + 1 )^2,
J^+ = ∑_b=1^B^+min_∀𝐱_bj^+ ∈ℬ_b^+(𝒞_𝐠(𝐱_bj^+) -1)^2,
where B^+ is the number of positive bags, B^- is the number of negative bags, 𝐱_ai^- is the i^th instance in the a^th negative bag, ℬ_a^-, and 𝐱_bj^+ is the j^th instance in the b^th positive bag, ℬ_b^+. 𝒞_𝐠 is the Choquet Integral output for a bi-capacity, 𝐠.
This objective function is parallel to the min-max model for normalized fuzzy measures <cit.>. However, the bi-capacities add additional ternary information (as defined in Section <ref>), and the computation of the bi-capacity ChI is different compared with prior work, with outputs now ranging over [-1, 1]. The bi-capacity, 𝐠, is obtained by minimizing the objective function in equation <ref>. The J^- term in Eq.(<ref>) encourages the Choquet integral of instances in the negative bags to be "-1", while the J^+ term in Eq.(<ref>) encourages that of instances in the positive bags to be "+1". Therefore, given training data with positively and negatively labeled bags, minimizing J results in a bi-capacity that will produce a high confidence on the target and low confidence on the background. The overall objective can be written as
min_𝐠 J = J^- + J^+ .
§.§ Objective Function 2: with 𝐠_∅, ∅=0
Since the 𝐠_∅, ∅=0 bound enforces 𝐠_·, ∅≥ 0 and 𝐠_∅, ·≤ 0, the ChI is unable to produce an inversion effect on negative data. Therefore, a different objective function must be used to properly represent the negative data in the fused result.
We can define a new set of J^+ and J^- as
J^- = ∑_a=1^B^-max_∀𝐱_ai^- ∈ℬ_a^-(𝒞_𝐠(𝐱_ai^-) - 0 )^2,
J^+ = ∑_b=1^B^+min_∀𝐱_bj^+ ∈ℬ_b^+(1 - |𝒞_𝐠(𝐱_bj^+)| )^2.
Instead of having J^- encourage negative bags to have a ChI output of -1, we now encourage them toward the neutral value of 0. Additionally, J^+ now encourages data in the positive bags to be close to the poles of -1 and +1. The idea is that the sources to be fused may detect the target class negatively or positively. Thus, instead of trying to invert the negative detection, we can represent the target as values that lie at either pole.
Then, after fusion, the absolute value of the result can be taken to reflect high confidence in the target. The overall objective is the sum of these two terms, as defined in Eq. (<ref>).
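Both objective variants can be evaluated with essentially the same code; a sketch is given below, building on the bicapacity_choquet helper above and using +1/-1 bag labels.

```python
import numpy as np

def bimichi_objective(bags, labels, g, use_zero_bound=True):
    """Bag-level fitness J = J^- + J^+ for either objective variant.

    bags   : list of arrays, each of shape (n_instances_in_bag, m_sources).
    labels : list of bag labels, +1 for positive bags and -1 for negative bags.
    use_zero_bound=True corresponds to Objective Function 2, False to Objective Function 1.
    """
    J = 0.0
    for bag, label in zip(bags, labels):
        chi = np.array([bicapacity_choquet(x, g) for x in bag])
        if label < 0:                          # negative bag: every instance is background
            target = 0.0 if use_zero_bound else -1.0
            J += np.max((chi - target) ** 2)
        else:                                  # positive bag: at least one target instance
            J += np.min((1.0 - np.abs(chi)) ** 2) if use_zero_bound \
                 else np.min((chi - 1.0) ** 2)
    return J
```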
§.§ Optimization Algorithm
The optimization is achieved through the use of an evolutionary algorithm following that of MICI <cit.>, with several modifications based on bi-capacities. Algorithm <ref> shows the pseudo-code of the optimization and fusion steps for the proposed Bi-MIChI algorithm, and Table <ref> explains the parameters used. Fig. <ref> shows a flowchart of the algorithm. The algorithm requires training data (sensor inputs from multiple sources) and bag-level labels for the training bags as inputs. The output of the algorithm is the learned bi-capacities, which will be used to generate instance-level fusion results given multi-sensor sources.
First, a population (size P) of bi-capacities is initialized. The experiments in this paper used P=36. Bi-capacities are initialized starting at the 𝐠_∅, ∅ = 0 boundary and moving outward toward the upper and lower bounds. This process is completed similarly to breadth-first-search exploration <cit.>, where the neighbor nodes are the upper and lower bounds of the current node. As an example, when 𝐠_∅, ∅ is included in Objective Function 2, after starting at 𝐠_∅, ∅, the lower bound elements 𝐠_1, ∅, 𝐠_2, ∅, and 𝐠_3, ∅ and upper bound elements 𝐠_∅, 1, 𝐠_∅, 2, and 𝐠_∅, 3 are the "neighbors" that are to be explored and sampled next. The current element is sampled uniformly randomly between the minimum of all the previously sampled upper bounds and the maximum of all the previously sampled lower bounds. The same methodology is used if the 𝐠_∅, ∅=0 boundary is not used (Objective Function 1); the 0 bound is simply not enforced during sampling.
In the main optimization loop, bi-capacities are updated using a combination of small-scale and large-scale mutations. In the small-scale mutation, only one of the bi-capacity elements is re-sampled. The element to be sampled is determined through a multinomial distribution based on the number of times each element of the bi-capacity is used by the Choquet Integral during training. Therefore, the element that is used the most during training will have the highest probability of being resampled. Eq. (<ref>) shows the formula for the resampling probability, where v_A,B is the number of times bi-capacity element 𝐠_A,B is used during training. The selected element is resampled uniformly between the associated upper and lower bounds. In the large-scale mutation, an entirely new bi-capacity is sampled. The probability of small and large mutations is determined by a user-defined parameter η. Small mutations occur with a probability of η, and large mutations occur with a probability of 1-η. The value η=0.8 was found to be effective empirically during experimentation.
P(𝐠_A,B) = v_A, B/∑_𝐠_E,F∈𝐠 v_E,F
After mutation, the fitness of each bi-capacity is computed using the objective function (<ref>). Bi-capacities with improved fitness are saved. This process of mutation and fitness calculation continues for a maximum of I iterations (I=5000 in this paper's experiments) or until a stopping criterion is met. In our experiments, the stopping criterion is that the new best fitness is within a threshold, 𝐉_T, of the previous best fitness (in other words, the fitness does not change by more than the 𝐉_T threshold). We chose a small threshold value 𝐉_T = 0.001 as it yielded effective convergence results in our experiments.
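The overall training loop can then be summarized as in the skeleton below, which builds on the bimichi_objective sketch above. The routines sample_bicapacity (random monotone initialization) and small_mutation (usage-weighted resampling of a single element within its bounds) are assumed to be implemented as described in this section and are not spelled out here.

```python
import numpy as np

def train_bimichi(bags, labels, sample_bicapacity, small_mutation,
                  pop_size=36, n_iter=5000, eta=0.8, tol=1e-3):
    """Evolutionary optimization skeleton following Algorithm 1."""
    rng = np.random.default_rng()
    population = [sample_bicapacity() for _ in range(pop_size)]
    fitness = [bimichi_objective(bags, labels, g) for g in population]
    best_prev = min(fitness)
    for _ in range(n_iter):
        for i in range(pop_size):
            if rng.random() < eta:                      # small-scale mutation
                candidate = small_mutation(population[i])
            else:                                       # large-scale mutation
                candidate = sample_bicapacity()
            f = bimichi_objective(bags, labels, candidate)
            if f < fitness[i]:                          # keep only improvements
                population[i], fitness[i] = candidate, f
        best = min(fitness)
        if abs(best_prev - best) < tol:                 # stopping threshold J_T
            break
        best_prev = best
    return population[int(np.argmin(fitness))]
```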
§ EXPERIMENTS
The proposed Bi-MIChI algorithm is applied to a synthetic and a real-world
classification task. Experimental results are presented to illustrate the fusion performance of the proposed algorithm.
§.§ Synthetic Dataset
The first experiment showcases the fusion abilities on a synthetic dataset. This dataset was constructed to highlight the proposed framework’s ability to utilize negative information using bi-capacities, as well as its ability to handle bag-level imprecise labels. Two sets of tests are run on this dataset, the first without the 𝐠_∅, ∅=0 bound using objective functions (<ref>) and (<ref>) (Section <ref>), and the second uses the 𝐠_∅, ∅=0 bound and objective functions (<ref>) and (<ref>) (Section <ref>).
To generate this simulated example, suppose the data contains a letter “U” and a letter “M” in the scene. We chose these two letter shapes as they are symmetric and easy to visualize. The ground truth is shown in Fig. <ref>. Assume we are fusing three sensor inputs to detect both shapes. The first sensor source only detects the letter shape “U” (as shown in Fig. <ref>). The second sensor source highlights all background except the letter shape “M” (Fig. <ref>). The third sensor source detects all background (non-letter) pixels (Fig. <ref>). This example is inspired by real-world target detection applications, where the “U” and “M”-shaped targets consist of different materials, say, the “U”-shape is metal and “M”-shape is plastic, and the goal is to detect both targets. The source 1 detector may only be able to detect the metal target shape “U”, whereas sensor 2 and 3 can only detect vegetation and background (non-target) pixels in the scene.
To generate bag-level data, the SLIC superpixel algorithm <cit.> is used to group the pixels in the image into bags (superpixels). The generated bags are shown in Fig. <ref>. Bags that contain any part of the "U" or "M" targets are labeled positive (target class, highlighted as green), and the remaining bags are labeled as negative (non-target class, shown as red).
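A possible implementation of this bag construction with scikit-image is sketched below; the segment count, compactness setting, and function names are illustrative choices rather than the exact values used in the experiments.

```python
import numpy as np
from skimage.segmentation import slic

def image_to_bags(image, sources, target_mask, n_segments=200):
    """Group pixels into superpixel bags and assign MIL bag labels.

    image       : (H, W, 3) image used to compute the SLIC superpixels.
    sources     : (H, W, m) array of the m sensor outputs per pixel.
    target_mask : boolean (H, W) map, True on pixels belonging to any target.
    """
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    bags, labels = [], []
    for s in np.unique(segments):
        idx = segments == s
        bags.append(sources[idx])                               # (n_pixels, m)
        labels.append(+1 if target_mask[idx].any() else -1)     # MIL bag label
    return bags, labels
```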
§.§.§ Simulated Experiment 1 with Bipolar Labels
The first test case is shown in Fig. <ref>. The sources for fusion are shown in Fig. <ref>-<ref>. The goal is to learn a bi-capacity that is able to detect both “U” and “M” shapes with high confidence. In this experiment, bipolar-scale labels are used, where the target class (“U” and “M”) are pushed to label +1 and the non-target class (background) is pushed to label -1, as shown in the ground truth figure (Fig. <ref>).
Fig. <ref> shows the proposed Bi-MIChI fusion result using Objective Function 1 (bipolar case). We observed some interesting behaviors for the bi-capacities. As shown in Fig.<ref>, the “U” shape was detected perfectly (has a positive fusion label of +1), but the “M” shape in the final fusion result was forced to the value -1. This is because the “M” shape in all three sources are not detected, i.e., the input sensor source values take the form [-1, -1, -1]. In this case, the Choquet integral pushes all negatives [-1, -1, -1] to the -1 boundary, which resulted in a missed detection for the “M” letter.
Table <ref> shows the learned bi-capacity values. The bi-capacity is notated using the form 𝐠_A,B, which represents a weighting of the sensor source subset A "against" the second subset B (see the bi-capacity definition in Section <ref>). The bold elements mark the measure element values that were actually used. Note that not all measure elements were updated, depending on how the sensor input data is sorted in the ChI computation (see Eq. <ref>). We observed that, in this experiment, the bi-capacity element 𝐠_23, 1 = -0.85, which represents the case where sources 2 & 3 both negatively detect some parts of the target, while source 1 positively detects the target. The bi-capacity measure element 𝐠_23, 1 has a high magnitude, which shows that the negative contribution of the intersection between sources 2 & 3 against the positive contribution of source 1 is weighted highly to help detect the target shapes. This makes sense, as the combination of source 1, the negative of source 2, and the negative of source 3 all contribute to detecting both "U" and "M" shapes.
§.§.§ Simulated Experiment 2 with [0,1] Labels
The second test case is shown in Fig. <ref>, <ref>, and <ref>. We implemented Objective Function 2 with 𝐠_∅, ∅=0 and ran Bi-MIChI for fusion. The input sources and bags are the same as in the previous experiment. In this test case, however, the ground truth non-target bags are now pushed to a neutral value of 0 (instead of -1 in the previous experiment). The target regions (the "U" and "M") are now pushed to either -1 or +1 (in other words, the absolute value of the target class is pushed to 1). The resulting ground truth map is shown in Fig. <ref>, where the "U" and "M" shapes are still highlighted. The ChI fusion result is shown in Fig. <ref>, and the absolute value result is shown in Fig. <ref>. As seen in the figure, the fused result is able to detect the "UM" with high confidence, showing that pushing positive bags to the -1 and +1 poles can be more successful.
Table <ref> shows the learned bi-capacity values for UM experiment 2. In contrast to experiment 1, we observe that the bi-capacity element 𝐠_23, 1 = 0.07. The magnitude of this element is close to 0, which has the effect of zeroing out the negative contribution of the intersection of sources 2 & 3 against the positive contribution of source 1. This makes sense, as it contributes to the effect of pushing background information to 0. Furthermore, as previously stated, the "M" data of the three sources takes the form [-1, -1, -1], which the ChI pushes to -1. However, taking the absolute value allows for a high-confidence detection on the "M" as well.
§.§ Low-Light Pedestrian Detection
We conduct additional experiments on a real-world classification task using the KAIST Multispectral Pedestrian Detection Benchmark <cit.>. The KAIST dataset contains RGB and thermal sensors and the goal is to detect pedestrians in the scene. We selected a low-light setting to illustrate the necessity of sensor fusion, as the thermal camera will be able to highlight pedestrians in low-light whereas the RGB camera will contribute negatively to the detection (RGB cameras will show dark backgrounds with some spots from light sources such as street lamps). Fig. <ref> shows an illustration of an image from the KAIST dataset. As shown, under the low light conditions, pedestrians are very difficult to detect, and it is necessary to fuse information from both RGB and thermal cameras. Additionally, since pedestrians are very difficult to see, it is almost impossible to annotate and produce precise pixel-level training labels. However, it is possible to generate bounding boxes (as shown in green in Fig. <ref>) to indicate possible pedestrian locations. In this experiment, we use the green bounding boxes as the bag-level labels to train our Bi-MIChI algorithm for RGB and thermal fusion.
The low-light conditions also motivate the use of the bi-capacity's negative weighting. While the thermal image is able to produce high confidence on the pedestrians, there are false positive readings on the background buildings as well as lights. Conversely, the RGB image shows low confidence on the pedestrians, but a high confidence on well-lit areas, like the lights, foreground, and street cones. Negatively weighing the RGB image can contribute to a high confidence on the pedestrians, without relying too heavily on the thermal data.
The SLIC algorithm <cit.> is used to generate superpixels (“bags”) for training. Since the dataset provides ground truth in the form of bounding boxes, bags that contain points that intersect with any of the bounding boxes are considered positive, as shown in Fig. <ref>. Fig. <ref> shows the pixel-level ground truth labels. Note that the pixel-level ground truth labels were not used during training, and were only used to evaluate our final fusion result.
Fig. <ref> shows the three sensor sources that are used as fusion inputs, as well as comparison results for fusion. Source 1 (Fig. <ref>) utilizes thresholding on a grayscaled version of the RGB image to negatively detect bright areas of the image in the visual spectrum. Source 2 (Fig. <ref>) is a grayscaled version of the RGB image with histogram equalization. This shows low, negative confidence on pedestrians and trees, and high confidence on the lights, traffic cones, shadows, and other road elements. Source 3 (Fig. <ref>) comes from the thermal data with gamma correction. This source shows high confidence on pedestrians, but there are a few false positives in the background that must be reduced.
The proposed Bi-MIChI was used to perform fusion on this dataset using both Objective Function 1 and Objective Function 2. We compare the fusion results to a variety of comparison methods. First, the min, max, and mean are taken across the three sources as simple fusion baselines. A weighted mean was calculated using weights determined by minimizing the mean squared error between the fusion result and the ground truth. Then, K-nearest neighbors (KNN) with k=100 and Support Vector Machine (SVM) approaches were used to perform pixel-level fusion. We also compare it to a pre-trained Mask R-CNN <cit.>, which is a neural network-based object detector. Mask R-CNN was run on the low-light RGB image and the segmentation mask is of detected elements of the "person" class. While Mask R-CNN has seen great success in computer vision applications, pre-trained models can exhibit poor performance in low-light applications where object details are unclear without significant image processing. We also ran comparisons with other Choquet Integral-based methods. CI-QP <cit.> learns a fuzzy measure for fusion by formulating the problem as a quadratic program with instance-level labels. MICI is a previous MIL method for fusion but uses normalized fuzzy measures with element values in the [0,1] range. MICI-BFM <cit.> is a version of the MICI that uses binary fuzzy measures, which are fuzzy measures whose elements only take values {0,1}. KNN, SVM, and CI-QP all require instance-level labels, while MICI, MICI-BFM, and Bi-MIChI operate on bag-level labels.
Visual results for all methods are shown in Fig. <ref>. It is important to note that all results are the direct output of the fusion algorithm - some output a confidence map on the [-1,1] scale, while others output on a [0,1] scale. Naively taking the min, max, and mean of the sources produces poor qualitative results, as the sources highlight conflicting information (background vs. person). The weighted mean produced a result that is very similar to source 3 - this is because the resulting weights heavily favored the third source. KNN highlighted the pedestrians fairly well, but there is a considerable amount of background noise. The KNN method is also highly sensitive to the parameter choice of k. When we reduced the k value (e.g., when k=5), the fusion performance deteriorated significantly and the output became much noisier. Similarly, the SVM approach was able to highlight some pedestrians, but also showed some false positives that occur in the source images. Mask R-CNN falsely identifies a traffic cone as a pedestrian, showing its limitations in low-light conditions. CI-QP, MICI, and MICI-BFM all produce similar results, all of which captured background information with fairly high confidence. Qualitatively, we can see that the proposed Bi-MIChI with objective function 1 is effective at eliminating background information, while still detecting the pedestrians correctly. Bi-MIChI with objective function 2 produces similar results, but contains more false positives compared to objective function 1.
The Bi-MIChI bi-capacities learned with objective functions 1 and 2 are shown in Tables <ref> and <ref>, respectively. We can observe that many of the elements in the bi-capacity learned with objective function 1 are set to -1. This occurs to encourage the background elements to have an output of -1. The positive detections are the result of the upper boundary element, 𝐠_123, ∅. For the objective function 2 bi-capacities in Table <ref>, we can observe some elements that help push the background to 0 (recall that Objective 2 pushes the non-target class to a neutral label of 0). For example, elements 𝐠_1, ∅, 𝐠_2, ∅, and 𝐠_12, ∅, which are the weights in favor of sources 1 and 2, are closer to 0. This makes sense, as those sources show high confidence in background elements. This is in contrast to the previous bi-capacities, in which there is an inversion effect.
The normalized fuzzy measures learned by CI-QP, MICI, and MICI-BFM are shown in Table <ref>. The measures learned by these methods are similar as they all rely primarily on the element 𝐠_123, while the other measure element values are close to 0. This suggests that, unlike bi-capacities, normal fuzzy measures are unable to properly take advantage of the negative or background information in the sources, particularly from sources 1 and 2 in this experiment.
To quantitatively compare the different fusion methods, the Receiver Operating Characteristic (ROC) curve's Area Under Curve (AUC) and Root Mean Square Error (RMSE) metrics are computed between the result and the ground truth shown in Fig.<ref>. A higher AUC signifies better target detection, while a lower RMSE signifies that the result is closer to the ground truth. Mask R-CNN, CI-QP, MICI, MICI-BFM, and our Bi-MIChI with objective function 2 produce confidence maps that are bounded in [0,1]. These outputs were rescaled to [-1,1] before the metrics were computed to offer a consistent comparison between all methods.
Table <ref> shows the results of these metrics. Interestingly, source 3 from the thermal sensor after gamma correction, along with the weighted mean (which closely matches source 3), obtained a very high AUC for pedestrian detection. This is because the thermal sensor is able to sense the heat signature from humans even under low-light conditions. However, all input sources (including source 3) have a high RMSE value, which means all of them contain a lot of noise and false positives, especially in the non-pedestrian areas. Among the fusion methods, KNN and SVM performed very well on AUC, but both of these methods require pixel-level training labels and are highly sensitive to parameter choices. The Mask R-CNN method produced very low RMSE, but based on the visual result (Fig. <ref>), Mask R-CNN hardly detected any pedestrians and simply classified almost everything as background (thus, a low AUC). Among the Choquet integral-based methods (bottom five rows), our proposed Bi-MIChI achieves superior performance with higher AUC and lower RMSE compared to other Choquet integral-based methods using normalized fuzzy measures. This also shows that the bi-capacities proposed in this work are effective for detecting target classes (humans) while removing background noise, given input sensor sources that carry complementary information. Additionally, our proposed Bi-MIChI is trained with only bag-level labels, which allows the fusion process to take label uncertainty into consideration.
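The two scores can be computed per method as in the following sketch; the rescaling step reflects the normalization described above, and the variable names are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_fusion(confidence_map, ground_truth):
    """AUC and RMSE of a fused confidence map against pixel-level ground truth.

    confidence_map : (H, W) fusion output, on either a [0, 1] or [-1, 1] scale.
    ground_truth   : (H, W) labels, +1 for pedestrian pixels and -1 for background.
    """
    c = confidence_map.astype(float).ravel()
    if c.min() >= 0.0:                  # rescale [0, 1] outputs to [-1, 1]
        c = 2.0 * c - 1.0
    y = ground_truth.ravel()
    auc = roc_auc_score((y > 0).astype(int), c)
    rmse = float(np.sqrt(np.mean((c - y) ** 2)))
    return auc, rmse
```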
§.§ Discussions
This paper tests two objective functions with the proposed Bi-MIChI algorithm. The first does not enforce the g_∅, ∅ = 0 bound when constructing the bi-capacities, while the second does. In our experiments, we find that they are both useful for different applications. Objective function 2 works well if separate sources highlight different parts of the target. For example, in the synthetic “U”“M” experiment that uses objective function 2, the “U” is detected strongly positively in source 1 and the “M” is detected strongly negatively in source 2. Instead of trying to push both to 1 like objective function 1 would do, each letter can be pushed to the closest pole (+1 for “U” and -1 for “M”).
On the other hand, objective function 1 can be useful in situations where multiple sources detect the target, but on different ends of the [-1,1] scale. For example, in the pedestrian detection experiment, pedestrians appear negatively in source 2 and positively in source 3. The "inversion" created by objective function 1 allows the background to be pushed to -1 while still encouraging the pedestrians toward label +1.
The use of bi-capacities and the Choquet Integral offers an explainable representation of the interactions between the sources to be fused. By examining the element values of the bi-capacities, users can interpret what role a sensor source plays in the final fusion result. Other machine learning techniques such as KNN, SVM, and Mask R-CNN, while effective in some cases, could be considered a “black box” in the sense that they provide predictions without a clear, interpretable reason a decision was made. The Choquet Integral, on the other hand, provides a clear, analytical model for complex, nonlinear fusion, and the bi-capacities provide an interpretable and explainable representation of how fusion sources are weighed against each other.
§ CONCLUSION
This paper presents Bi-MIChI, a novel Choquet integral-based method for explainable and interpretable data fusion with uncertain labels using bi-capacities. Through both synthetic and real-world experiments, this framework showed promising results in its ability to utilize negative and complementary information from input sensor sources for effective multi-sensor fusion.
Bi-MIChI aims to provide a general framework of data fusion that can be applied to various modalities and applications. To further explore the effectiveness of this method, additional experiments can be conducted applying Bi-MIChI to other fusion applications and sensor modalities. For example, this method can be used for multi-temporal and multi-view fusion <cit.>, where a sequence of images can be fused for target detection and classification.
|
http://arxiv.org/abs/2409.02351v1 | 20240904004203 | Sobolev Metrics on Spaces of Discrete Regular Curves | [
"Jonathan Cerqueira",
"Emmanuel Hartman",
"Eric Klassen",
"Martin Bauer"
] | math.DG | [
"math.DG"
] |
Sobolev Metrics on Spaces of Discrete Regular Curves
=========================================================================================================
§ ABSTRACT
Reparametrization invariant Sobolev metrics on spaces of regular curves have been shown to be of importance in the field of mathematical shape analysis. For practical applications, one usually discretizes the space of smooth curves and considers the induced Riemannian metric on a finite dimensional approximation space. Surprisingly, the theoretical properties of the corresponding finite dimensional Riemannian manifolds have not yet been studied in detail, which is the content of the present article. Our main theorem concerns metric and geodesic completeness and mirrors the results of the infinite dimensional setting as obtained by Bruveris, Michor and Mumford.
§ INTRODUCTION
Motivation and Background:
Reparametrization invariant Sobolev metrics on spaces of regular curves play a central role in the field of mathematical shape analysis. Due to their reparametrization invariance, these metrics descend to Riemannian metrics on spaces of unparametrized curves, which are of relevance in mathematical shape analysis and data analysis. Examples include applications where one is interested in the shape of planar objects (represented by their boundary curves); see <cit.> and the references therein. Motivated by their appearance in these applications there has been a large interest in studying their mathematical properties. Michor and Mumford <cit.> showed a surprising degeneracy of the simplest such metric; namely they proved that the reparametrization invariant L^2-metric, i.e., the Sobolev metric of order zero, induces a degenerate distance function. This purely infinite dimensional phenomenon renders this metric unsuited for mathematical shape analysis, as it assigns a zero distance between any two curves and thus cannot distinguish between different shapes. For higher order metrics this degeneracy disappears and it has been shown in many applications that they lead to meaningful notions of distance <cit.>. As a result, these metrics can be used to define a mathematical framework for statistical shape analysis on these spaces of curves <cit.>. A natural question that arises in this context concerns the existence of minimizing geodesics, i.e., whether the space of regular curves equipped with these Riemannian metrics is a geodesically complete and/or geodesically convex space. For metrics of order two or higher a positive answer to this question has been found by Bruveris, Michor and Mumford <cit.>; more recently, it has been shown by one of the authors and collaborators <cit.> that 3/2 is actually the critical index for this property, i.e., for a Sobolev metric of order greater than 3/2 the resulting space is geodesically complete, whereas there always exists geodesics that leave the space in finite time if the order is smaller than 3/2. The behavior at the critical value 3/2 is still open.
Main Contributions:
For practical applications, one usually discretizes the space of smooth curves and considers the induced Riemannian metric in a corresponding finite dimensional approximation space. Discretizations that have been considered include approximating curves via piecewise linear functions <cit.>,
B-spline discretizations <cit.> or finite Fourier series approximations <cit.>. In this paper, we will discretize a curve as a finite sequence of points in Euclidean space, so that the space of curves is the space of these sequences.
Using methods of discrete differential geometry, we will define a class of metrics on this finite dimensional space that are motivated by and analogous to the class of reparametrization invariant Sobolev metrics mentioned above for the infinite dimensional space of smooth curves. To our surprise, these rather natural finite dimensional Riemannian manifolds have not yet been studied in much detail, with the only exception being the case of the homogeneous Sobolev metric of order one, where the space of PL curves can be viewed as a totally geodesic submanifold of the infinite dimensional setting <cit.>; note that a similar result for more general Sobolev metrics is not true.
Considering these finite dimensional approximation manifolds leads to a natural question, which is the starting point of the present article:
Which properties of the infinite dimensional geometry are mirrored in these finite dimensional geometries?
As vanishing geodesic distance is a purely infinite dimensional phenomenon — every finite dimensional Riemannian manifold admits a non-degenerate geodesic distance function <cit.> — one cannot hope to observe the analogue of this result in our setting. The main result of the present article shows, however, that some of these discretizations do indeed capture the aforementioned completeness properties, cf. Theorem <ref>. In addition to these theoretical results, we present in Section <ref> selected numerical examples showcasing the effects of the order of the metric on the resulting geodesics. Finally, for the special case of triangles in the plane, we study the Riemannian curvature of the space of triangles and observe that it explodes near the singularities of the space, i.e., where two points of the triangle come together.
Conclusions and future work: In this article we studied discrete Sobolev type metrics on the space of discrete regular curves in Euclidean space (where we identified this space as a space of sequences of points) and showed that these geometries mirror several properties of their infinite dimensional counterparts. In future work we envision several distinct research directions: first, we have restricted ourselves in the present study to integer order Sobolev metrics. In future work it would be interesting to perform a similar analysis also for the class of fractional (real) order Sobolev metrics, such as those studied in <cit.>.
Secondly, we aim to study stochastic completeness of these geometries. For extrinsic metrics on the two-landmark space it has recently been shown by Habermann, Harms and Sommer <cit.> that the resulting space is stochastically complete, assuming certain conditions on the kernel function. We believe that a similar approach could be applied successfully to the geometries of the present article, which would be of interest in several applications where stochastic processes on shape spaces play a central role. Finally, we would like to study similar questions in the context of reparametrization invariant metrics on the space of surfaces: in this case geodesic completeness in the smooth category, i.e., in the infinite dimensional setting, is wide open and we hope to get new insights for this extremely difficult open problem by studying its finite dimensional counterpart.
Acknowledgements: M.B. was partially supported by NSF grants DMS-2324962 and DMS-1953244. M.B. and J.C. were partially supported by the BSF under grant 2022076. E.H. was partially supported by NSF grant DMS-2402555.
§ REPARAMETRIZATION INVARIANT SOBOLEV METRICS ON THE SPACE OF SMOOTH CURVES
In this section we will recall some basic definitions and results regarding the class of reparametrization invariant Sobolev metrics on the space of smooth, regular (immersed) curves.
We begin by defining
the set of smooth immersions of the circle S^1 into the space ^d:
(S^1,^d)={c∈ C^∞(S^1, ^d) : |c'(θ)|≠ 0, ∀θ∈ S^1}.
Here we identify S^1 with the interval [0,1] with its ends identified. The set (S^1,^d) is an open subset of the Fréchet space C^∞(S^1,^d) and thus can be considered as an infinite dimensional Fréchet manifold using a single chart.
We let h,k denote our tangent vectors, which belong to
T_cImm(S^1,^d)≅ C^∞(S^1,^d).
Next, we consider the space of orientation-preserving smooth self-diffeomorphisms of the circle
Diff(S^1)={φ∈ C^∞(S^1,S^1):φ is bijective and φ'(θ)>0, ∀θ∈ S^1},
which is an infinite dimensional Fréchet Lie group and acts on
(S^1,^d) from the right
via the map (c,φ)↦ c∘φ.
The principal goal of the present article is to study Riemannian geometries on (S^1,^d). To introduce our class of Riemannian metrics we will first need to introduce some additional notation: we denote by D_θ the derivative with respect to θ and let D_s=1/|c'(θ)|D_θ and ds=|c'(θ)|dθ be arc-length differentiation and integration respectively. Furthermore, we let ℓ(c):=∫_S^1|c'(θ)|dθ denote the length of a curve c. Using these notations, we are ready to define the m-th order, reparametrization invariant Sobolev metric on (S^1,^d):
Let c∈(S^1,^d), m∈ℤ_≥ 0, and h,k∈ T_c(S^1,^d). The m-th order Sobolev metric on (S^1,^d) is then given by
^m_c(h,k):=∫_S^1⟨ h,k⟩/ℓ(c)^3+ℓ(c)^-3+2m⟨ D_s^m h,D_s^m k⟩ ds.
In the above definition we used length dependent weights which allowed us to define a scale-invariant version of the Sobolev metric of order m. Alternatively we could have considered the constant coefficient Sobolev metric
^m_c(h,k):=∫_S^1⟨ h,k⟩+⟨ D_s^m h,D_s^m k⟩ ds.
and its discrete counterpart.
Almost all of the results of the present article hold also for this class of metrics, albeit with minor adaptions in the proof of the main completeness result, cf. Appendix <ref>.
We start by collecting several useful properties of the above defined family of Riemannian metrics:
Let m≥ 0 and let ^m be the Riemannian metric as defined in (<ref>). Let c∈(S^1,^d) and h,k∈ T_c(S^1,^d). Then we have:
* The metric ^m is invariant with respect to reparametrizations, rescalings, rotations, and translations; i.e. for φ∈Diff(S^1), λ∈^+, R∈SO(^d) and 𝐯∈^d we have
_c^m(h,k) =_c∘φ^m(h∘φ,k∘φ)=_λ c^m(λ h,λ k)=_c+𝐯^m(h,k)=_Rc^m(Rh,Rk).
We note in the above the action of translation (+𝐯) only affects the tangent space in which h,k lie, but not the vectors h and k, since the derivative of translation is the identity map.
* The metric ^m is equivalent to any metric of the form
^m(h,k):=∫_S^1∑_j=0^m a_jℓ(c)^-3+2j⟨ D_s^j h,D_s^j k⟩ ds
for a_0,a_m>0 and a_j≥ 0 for j=1,…,m-1; i.e., there exists a C>0 such that for all c∈(S^1,^d) and all h∈ T_c(S^1,^d)
1/C^m_c(h,h)≤^m_c(h,h)≤ C ^m_c(h,h).
The proof of the invariances follows by direct computation. The second statement is implied by <cit.> in a similar way as in Lemma <ref> below.
For a Riemannian metric g on a smooth manifold ℳ one defines the induced path length, i.e., for γ(t):[0,1]→ℳ we let
L_g(γ)=∫_0^1√(g_γ(t)(D_tγ(t),D_tγ(t)))dt.
Using this one can consider the induced geodesic distance, which is defined via
d_g(p,q)=inf L_g(γ)
where the infimum is calculated over the set of piecewise C^∞ paths [0,1]→ℳ with γ(0)=p and γ(1)=q.
In finite dimensions this always defines a true distance function; for infinite dimensional manifolds, however, one may have a degenerate distance function, i.e., there may be distinct points c_0,c_1∈ℳ such that d_g(c_0,c_1)=0, see for example <cit.>. For the class of Sobolev metrics on spaces of curves this phenomenon has been studied by Michor and Mumford and collaborators and a full characterization of the degeneracy has been obtained:
The metric ^m defines a non-degenerate geodesic distance function on the space of curves (S^1,ℝ^d) if and only if m≥ 1.
The main focus on this article concerns geodesic and metric completeness properties of the corresponding space. Recall that
a Riemannian manifold (ℳ,g) is metrically complete if it is complete as a metric space under the distance function given by the Riemannian metric. Furthermore it is called geodesically complete if the geodesic equation has solutions defined for all time for any initial conditions γ(0)=p∈ℳ and γ'(0)=v∈ T_pℳ and it is called geodesically convex if for any two points p,q∈ℳ there exists a length minimizing path connecting them. In finite dimensions the theorem of Hopf-Rinow implies that metric and geodesic completeness are equivalent and either implies geodesic convexity, but this result famously does not hold in infinite dimensions <cit.>.
The following theorem will characterize the metric and geodesic completeness properties of ((S^1,^d),^m).
To state the theorem, we will first need to introduce the space of regular curves of finite (Sobolev) regularity, i.e., for m≥ 2 we let
ℐ^m(S^1,^d)={c∈ H^m(S^1,^d): |c'(θ)|≠ 0,∀θ}.
Let m≥ 0, c∈(S^1,^d), h,k∈ T_c(S^1,^d) and G^m as defined above. Then
((S^1,^d),^m) is a Riemannian manifold and for m≥ 2 the following hold:
* The manifold ((S^1,^d),G^m) is geodesically complete.
* The metric completion of ((S^1,^d),G^m) is (ℐ^m,G^m).
* The metric completion of ((S^1,^d),G^m) is geodesically convex.
Proof of the above three statements can be found in <cit.>, specifically via the proofs of Theorems 5.1, 5.2, and 5.3.
Note that ((S^1,^d),G^m) is not metrically complete for any m despite being geodesically complete for m≥2.
In the exposition of the present article, we have only focused on integer order Sobolev metrics. More recently, the equivalent of these results have been also shown for fractional (real) order Sobolev metrics, where the critical order for positivity of the geodesic distance is 1/2 and the critical order for completeness is 3/2, see <cit.> for more details.
§ DISCRETE SOBOLEV METRICS ON DISCRETE REGULAR CURVES
In this section we will introduce the main concept of the present article: a discrete version of the reparametrization invariant Sobolev metric.
We start this section by defining a natural discretization of the space of immersed curves and then introducing a discrete analogue of ^m that, under modest assumptions, converges to its smooth counterpart. Let ^d× n denote the set of ordered n-tuples of points in ^d and consider the subset
^d× n_*={(𝐱_1,…, 𝐱_n)∈^d× n: 𝐱_i∈^d and 𝐱_i≠𝐱_i+1, ∀ i∈ℤ/nℤ}.
As ^d× n_* is clearly an open subset of ^d× n, it carries the structure of a dn-dimensional manifold.
Furthermore, ^d× n and ^d× n_* are both acted on from the right by the cyclic group of n elements by cyclically permuting the indices. That is, if j∈ℤ/nℤ then
((𝐱_1,…, 𝐱_n),j)↦ (𝐱_1+j,…, 𝐱_n+j)
where addition is modulo n.
In this remark we will observe that we can identify the above introduced space of discrete regular curves with the set of piecewise linear, regular curves. Therefore we let
PL^n(S^1,^d)={c∈ C(S^1,^d): c|_[i/n,i+1/n] is linear}
be the space of continuous, piece-wise linear curves with n control points from S^1→^d
and
PLImm^n(S^1,^d)={c∈PL^n(S^1,^d): c(i/n) c(i+1/n), ∀ i∈ℤ/nℤ}
the open subset of piecewise linear immersions.
To identify the space of piecewise linear immersion with the previously defined space
, thereby allowing us to visualize as a space of curves, we simply consider the map
c↦(c(0/n),c(1/n),…,c(n-1/n))∈
Likewise we may identify T_cPLImm^n(S^1,^d)≅PL^n(S^1,^d) with T_c≅.
We now wish to define a Riemannian metric ^m on , which can be interpreted as a discretization of ^m. First, we define some notation.
Given a curve c∈ we denote our vertices as c_i and let e_i=c_i+1-c_i denote the edge beginning at c_i and ending at c_i+1. We also denote the average length of the two edges meeting at the vertex c_i as μ_i=|e_i|+|e_i-1|/2. Let h∈ T_c denote a tangent vector to c with h_i denoting the i-th component of h. To define our discrete metric ^m we will also need to define the j-th discrete derivative at the ith vertex D_s^j h_i
using some ideas from discrete differential geometry <cit.>. We define these recursively via
D_s^0 h_i=h_i, D_s^j+1 h_i=(D_s^jh_i-D_s^jh_i-1)/μ_i if j∉2ℕ, and D_s^j+1 h_i=(D_s^jh_i+1-D_s^jh_i)/|e_i| if j∈2ℕ.
A visualization of this construction can be seen in Figure <ref>.
This allows us to define some useful notation for something analogous to the m-th order component of ^m.
For c∈ and h,k∈ we let
_c^m(h,k):=∑_i=1^nℓ(c)^2m-3⟨ D_s^m h_i,D_s^m k_i⟩μ_i,m, where μ_i,m=μ_i if m∈ 2ℤ^+ and μ_i,m=|e_i| if m∉ 2ℤ^+,
and
_c^m(h,k)=_c^0(h,k)+_c^m(h,k).
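The recursion for D_s^j and the weights μ_i,m translate directly into code. The following sketch is our own illustration (in Python with NumPy; the paper does not reproduce its implementation at this point), with all function and variable names ours; vertex indices are treated cyclically via np.roll.

import numpy as np

def discrete_sobolev_metric(c, h, k, m):
    # Illustrative evaluation of the discrete metric g^m_c(h, k).
    # c, h, k are arrays of shape (n, d); indices are cyclic (i in Z/nZ).
    e = np.roll(c, -1, axis=0) - c                  # e_i = c_{i+1} - c_i
    le = np.linalg.norm(e, axis=1)                  # |e_i|
    mu = 0.5 * (le + np.roll(le, 1))                # mu_i = (|e_i| + |e_{i-1}|) / 2
    ell = le.sum()                                  # curve length l(c)

    def Ds(v, order):
        # recursive discrete arc-length derivative D_s^order
        for j in range(order):
            if j % 2 == 0:   # step to an odd-order derivative: forward difference / |e_i|
                v = (np.roll(v, -1, axis=0) - v) / le[:, None]
            else:            # step to an even-order derivative: backward difference / mu_i
                v = (v - np.roll(v, 1, axis=0)) / mu[:, None]
        return v

    def gdot(order):
        # the term gdot^order_c(h, k); weight mu_{i,order}: mu_i for even order (including 0), |e_i| for odd order
        w = le if order % 2 == 1 else mu
        Dh, Dk = Ds(h, order), Ds(k, order)
        return ell ** (2 * order - 3) * np.sum(np.einsum('id,id->i', Dh, Dk) * w)

    return gdot(0) + gdot(m)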
In the following lemma we justify the above notation and show the above defines a Riemannian metric on .
Let n≥ 3 and m≥ 0. Then
^m
is a Riemannian metric on .
We need to check non-degeneracy of the inner product and the smooth dependence on the base point c.
As g_c^m(h,h)≥ġ_c^0(h,h) it follows that if g_c^m(h,h)=0 then we also have ġ_c^0(h,h)=0. As each edge length |e_i| and μ_i are nonzero for any c, this can only happen if ⟨ D_s^0 h_i, D_s^0 h_i⟩=⟨ h_i, h_i⟩=0 for all i. That implies h_i=0 for all i. Thus for any m, g_c^m(h,h)=0 implies h≡ 0.
We next check that ^m_c varies smoothly with respect to the base point c. The only suspect term in the definition is |e_i|. Varying c_i in the u direction yields the following:
∂/∂ u|e_i|=-u· e_i/|e_i| ∂/∂ u|e_i-1|=u· e_i-1/|e_i-1|
which implies the smoothness of the metric as |e_i|>0 for each i of any curve in .
In the next Lemma we will show that higher order metrics dominate lower order metrics, which will be of importance in the next section, where we will establish the main completeness results of the present article.
Let m≥ 1, c∈ and h∈, then
^m_c(h,h)≤1/4^m+1_c(h,h).
The proof of this result is rather technical and we postpone it to the appendix.
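As a quick numerical sanity check (ours, and of course no substitute for the proof in the appendix), one can compare the two sides of the inequality on random discrete curves; the helper discrete_sobolev_metric is the illustrative sketch given after the definition of the discrete metric above, and the term gdot^m is isolated by subtracting the common gdot^0 part.

import numpy as np

rng = np.random.default_rng(0)
g = discrete_sobolev_metric          # assumed helper from the earlier sketch
ratios = []
for _ in range(200):
    n, m = int(rng.integers(4, 15)), int(rng.integers(1, 4))
    c, h = rng.normal(size=(n, 2)), rng.normal(size=(n, 2))
    gdot = lambda order: g(c, h, h, order) - 0.5 * g(c, h, h, 0)   # g^m = gdot^0 + gdot^m
    ratios.append(gdot(m) / gdot(m + 1))
print(max(ratios))                   # the lemma predicts a value of at most 1/4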
With these lemmas in hand, we arrive at a result concerning the main properties of the discrete Riemannian metric g^m, which parallels the statements of Lemma <ref>.
The metric ^m_c(h,k) has the following properties:
* For all n≥ 3, m≥0, ^m is invariant with respect to the action of ℤ/nℤ on , as well as rescaling, rotation, and translation. If j∈ℤ/nℤ, λ∈^+, R∈SO(^d) and 𝐯∈^d, then
_c^m(h,k) =_c∘ j^m(h∘ j,k∘ j)=_λ c^m(λ h,λ k)=_𝐯+c^m(h,k)=_Rc^m(Rh,Rk).
* The metric ^m(h,k) is equivalent to any metric of the form
^m(h,k):=∑_j=0^m ∑_i=1^na_jℓ(c)^2j-3⟨ D_s^j h_i,D_s^j k_i⟩μ_i,m
for a_0,a_m>0 and a_j≥ 0 for j=1,…,m-1 in the following sense: for some C∈, all c∈(S^1,^d) and all h∈ T_c≅
1/C^m_c(h,h)≤^m_c(h,h)≤ C ^m_c(h,h).
The proof of statement <ref> is similar to the smooth case and we will not present the proof for translation, rotation, or scale invariance. The key difference is the action by ℤ/nℤ. Note _c^m(h,k) and _c∘ j^m(h∘ j,k∘ j) sum over the exact same terms. Therefore, since |e_i∘ j|=|e_i+j| and μ_i∘ j=μ_i+j, we see
_c∘ j^m(h∘ j,k∘ j)=∑_i=1+j^n+jℓ(c)^2m-3⟨ D_s^m h_i,D_s^mk_i⟩μ_i,m=∑_i=1^nℓ(c)^2m-3⟨ D_s^m h_i,D_s^mk_i⟩μ_i,m=_c^m(h,k)
where the center equality is due to the fact that i belongs to ℤ/nℤ so that the i=1 term of the sum is identical to the i=n+1 term.
Statement <ref> largely follows via Lemma <ref>. Given our ^m, it allows us to collect all our coefficients into ^0 and ^m terms alone to form a ^m with
a_0^0_c(h,h)+a_m^m_c(h,h)≤^m_c(h,h)≤^m_c(h,h).
Here ^m_c(h,h)=â_0^0_c(h,h)+â_m^m_c(h,h) for some a_0≤â_0 and a_m≤â_m. Note also that
min(â_0,â_m)^m_c(h,h)≤^m_c(h,h)≤max(â_0,â_m)^m_c(h,h)
and similar for a_0^0_c(h,h)+a_m^m_c(h,h). Thus we may write
1/C^m_c(h,h)≤min(a_0,a_m)^m_c(h,h)≤^m_c(h,h)≤max(â_0,â_m)^m_c(h,h)≤ C^m_c(h,h)
for
C=max(1/min(a_0,a_m), max(â_0,â_m)).
Via multiplication by C we find ^m_c(h,h)≤ C^m_c(h,h) and via division we find 1/C^m_c(h,h)≤^m_c(h,h).
Together these give the desired statement
1/C^m_c(h,h)≤^m_c(h,h)≤ C^m_c(h,h).
We have seen that this discrete metric shares several properties with its infinite dimensional counterpart. In the following proposition we show that we are indeed justified in referring to it as a discretization of ^m:
Let n≥ 3 and for i=0,…, n-1 let θ_i=i/n. Let c∈(S^1,^d) and h,k∈ T_c(S^1,^d). Define c̃_n=(c(θ_1),c(θ_2),…,c(θ_n))∈. Similarly define h̃_n and k̃_n using h,k. Then
lim_n→∞^m_c̃_n(h̃_n,k̃_n)=^m_c(h,k)
To prove this statement we will make use of a technical result regarding the approximation of derivatives, which we postponed to the appendix, cf. Lemma <ref>.
We first define several relevant operators and notational items.
Given c∈(S^1,^d) we let μ_c,n,m=|I_n^1(c)| if m is even and μ_c,n,m=(|I_n^1(c)|+|Ī_n^1(c)|)/2 if m is odd, where Ī_n^1 denotes the backward-difference analogue of I_n^1 introduced in the appendix. Now, for a given c∈(S^1,^d) and n∈ℤ_≥ 3 define 𝒟_c,n,m via the following recursive formula
𝒟_c,n,0(f)=I^0_n(f), 𝒟_c,n,m(f)=Ī^1_n(𝒟_c,n,m-1(f))/μ_c,n,m if m∈ 2ℤ^+, and 𝒟_c,n,m(f)=I^1_n(𝒟_c,n,m-1(f))/μ_c,n,m if m∉ 2ℤ^+.
Note that all the n cancel in these definitions once expanded as at each step we introduce a factor of n in both the numerator and denominator.
Next define ℓ_n(c) as ∑_i=1^n|c(θ_i+1)-c(θ_i)|. With these in hand we define:
F_m(h,k,c,n,θ)=ℓ_n(c)^2m-3⟨𝒟_c,n,m(h),𝒟_c,n,m(k)⟩μ_c,n,m.
We prove the proposition in two steps
∫_S^1F_m(h,k,c,n,θ)dθ=^m_c_n(h_n,k_n)
We expand to
∫_S^1F_m(h,k,c,n,θ)dθ=∫_S^1ℓ_n(c)^2m-3⟨𝒟_c,n,m(h),𝒟_c,n,m(k)⟩μ_c,n,mdθ
and note that the integrand on the right is constant on each interval [θ_i,θ_i+1) as ℓ_n is constant with respect to θ and each of 𝒟_c,n,m(h),𝒟_c,n,m(k), and μ_c,n,m are constant on such intervals. Thus for some 𝒟_c,n,m,i(h),𝒟_c,n,m,i(k)∈^d we can write either
∫_S^1F_m(h,k,c,n,θ)dθ=∑_i=1^nℓ_n(c)^2m-31/n⟨𝒟_c,n,m,i(h),𝒟_c,n,m,i(k)⟩ |n(c(θ_i+1)-c(θ_i))|
or
=∑_i=1^nℓ_n(c)^2m-31/n⟨𝒟_c,n,m,i(h),𝒟_c,n,m,i(k)⟩|n(c(θ_i+1)-c(θ_i))|+|n(c(θ_i)-c(θ_i-1))|/2
depending on the parity of m. Note all n cancel and we can rewrite further to either
∑_i=1^nℓ_n(c)^2m-3⟨𝒟_c,n,m,i(h),𝒟_c,n,m,i(k)⟩|e_i|
for odd m or
∑_i=1^nℓ_n(c)^2m-3⟨𝒟_c,n,m,i(h),𝒟_c,n,m,i(k)⟩μ_i
for even m. Thus the only thing that remains to be shown for this first claim is that these 𝒟 terms are our discrete D_s derivatives of h_n and k_n. This is clear for m=0 as I^0_n(h) is simply the step function taking on the value h_n,i=h(i/n) on [θ_i,θ_i+1). For m>0 we see either
𝒟_c,n,m,i(h)=(𝒟_c,n,m-1,i+1(h)-𝒟_c,n,m-1,i(h))/|e_i|
or
𝒟_c,n,m,i(h)=(𝒟_c,n,m-1,i(h)-𝒟_c,n,m-1,i-1(h))/μ_i
so that by induction it follows that these really do correspond to the discrete derivatives as 𝒟_c,n,m,i(h) relates to 𝒟_c,n,m-1,j(h) identically to how D_s^mh_i relates to D_s^m-1 h_j.
The second step is easier by comparison. We have shown that ∫_S^1F_m(…)=_c̃_n^m(…). We next need to show how the limit of F_m is related to ^m_c(h,k)=ℓ^2m-3⟨ D_s^m h,D_s^m k⟩|c'| which will allow us to relate ^m and ^m.
F_m(h,k,c,n,θ)→^m_c(h,k) in L^∞ as n→∞.
By Lemma <ref> it follows that
lim_n→∞𝒟_c,n,m(f)-D_s^m(f)_L^∞→ 0 and lim_n→∞μ̂_c,n,m-|c'|_L^∞→0.
It is also clear that lim_n→∞ℓ_n(c)→ℓ(c) as our curves are rectifiable.
Now
lim_n→∞F_m(h,k,c,n,θ)-^m_c(h,k)_L^∞→ 0.
This implies that lim_n→∞F_m(…) is equal to ^m_c(h,k) almost everywhere, and we may join our claims together via
∫_S^1^m_c(h,k)dθ=∫_S^1lim_n→∞ F_m(…) dθ*=lim_n→∞∫_S^1F_m(…)dθ=lim_n→∞^m_c̃_n(h̃_n,k̃_n),
where * follows by the Lebesgue dominated convergence theorem as we can bound the integrand almost everywhere by the constant function on S^1 equal to ^m_c(h,k)_L^∞. By linearity it follows
^m_c(h,k)=lim_n→∞^m_c̃_n(h̃_n,k̃_n)
which completes the proof.
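The convergence statement can also be observed numerically. The snippet below is our own illustration, reusing the discrete_sobolev_metric helper sketched earlier; it samples a smooth curve and a smooth vector field at n equidistant parameters and prints the discrete metric, whose values stabilize as n grows.

import numpy as np

circle = lambda t: np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
field = lambda t: np.stack([np.sin(4 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)

for n in (10, 40, 160, 640):
    theta = np.arange(n) / n
    c, h = circle(theta), field(theta)
    print(n, discrete_sobolev_metric(c, h, h, 2))
# the printed values approach the smooth value G^2_c(h, h) as n grows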
§ COMPLETENESS RESULTS
In this section we will prove the main theoretical result of the present article, namely we will show that the finite dimensional manifolds (^d× n_*,^m) indeed capture the completeness properties of the space of smooth, regular curves equipped with the class of reparametrization invariant Sobolev metrics ^m, i.e., we will prove the following theorem:
Let m≥ 2, n≥ 3 and d≥ 2. Then the space (^d× n_*,^m) is metrically and geodesically complete. Furthermore, for any two points in ^d× n_* there exists a minimizing geodesic, i.e., (^d× n_*,^m) is geodesically convex.
This theorem can readily be seen as a discrete analogue of Theorem <ref> from the smooth case. As ^d× n_* is finite dimensional, the theorem of Hopf-Rinow implies that we need only show metric completeness; geodesic completeness and geodesic convexity then follow automatically. The requirement that d≥ 2 is due to the fact that ^1× n_* is not connected and thus Hopf-Rinow does not apply. We begin by proving a useful lemma.
Let (ℳ,g) be a possibly infinite dimensional manifold. (ℳ,g) is metrically incomplete if and only if there exists a path γ:[0,1)→ℳ, such that
* γ(t)∈ℳ for 0≤ t<1;
* lim_t→ 1γ(t) does not exist in (ℳ,g);
* the length of γ (w.r.t. g) is finite, i.e., L_g(γ)<∞.
We first assume incompleteness. As ℳ is metrically incomplete there exists some Cauchy sequence x_n that fails to converge in the space. Without loss of generality we can assume ∑_n^∞ d_g(x_n,x_n+1)<∞. This is since we can always find a Cauchy sub-sequence of our Cauchy sequence with this property. To construct such a sub-sequence we perform the following procedure: if, under our Cauchy conditions, for all n,m≥ N(i) we have d_g(x_n,x_m)<1/10^i then we define our sub-sequence as {x_N(i)}_i∈ℕ⊂{x_n}. The length ∑_i=1^∞ d_g( x_N(i),x_N(i+1)) of this sub-sequence is bounded above by 1/9 by construction as ∑_i=1^∞ d_g( x_N(i),x_N(i+1))<∑_i=1^∞1/10^i=1/9.
While we cannot guarantee paths lying in the space between the elements of our sequence that realize their distances, we can choose a collection of paths γ_n going from x_n to x_n+1, lying within the space, each of length <d_g(x_n,x_n+1)+1/2^n, so that the concatenated path γ=γ_1*γ_2*γ_3*… has total length L(γ)< 1+∑_n=1^∞ d_g(x_n,x_n+1). Here * denotes usual path composition. γ is then of finite length with lim_t→ 1γ(t)∉ℳ and γ(t)∈ℳ for 0≤ t< 1.
On the other hand, assuming such a path, we wish to show we have a Cauchy sequence that does not converge with respect to our metric. Without loss of generality assume γ is traversed with constant speed.
For parameters a and b, let d(a,b) be the length of the subsegment of γ with endpoints γ(a),γ(b). As γ is traversed with constant speed we see d(t_i,t_j)= L_g(γ)|t_i-t_j|. The distance of γ(t_i) and γ(t_j) along γ is not necessarily equal to the distance of these points in the space, but if d_g is the metric distance, then d_g(γ(t_i),γ(t_j))≤ d(t_i,t_j)= L_g(γ)|t_i-t_j|. Set t_n=1-1/n, noting that the sequence (t_n) is a Cauchy sequence in the reals. The distance between a pair of these points satisfies d_g(γ(t_i),γ(t_j))≤ L_g(γ)|t_i-t_j|. Now given any ε>0 we have some N so that for all i,j>N we have |t_i-t_j|<ε/L_g(γ) so that d_g(γ(t_i),γ(t_j))<ε. This makes γ(t_n) a Cauchy sequence in ℳ, but (t_n) converges to 1 so this Cauchy sequence does not converge in our space:
lim_n→∞γ(t_n)=lim_t→ 1γ(t)∉ℳ.
Thus our space is not metrically complete.
Let (ℳ,g) and (ℳ,g^*) be two Riemannian manifolds such that for all p∈ℳ, r∈^+, q∈ B_g^*(p,r) and h∈ T_qℳ we have g_q(h,h)≤ Cg^*_q(h,h) for some constant C potentially dependent on our choice of p and r. Then if (ℳ,g) is metrically complete so is (ℳ,g^*).
Suppose (ℳ,g^*) were incomplete while (ℳ,g) were complete. By Lemma <ref> there exists a path γ(t) such that γ(t)∈ℳ for t∈[0,1) and lim_t→ 1γ(t)∉ℳ with finite length under g^* and no similar finite length path exists under g. Let r_* be such that L_g^*(γ)=r_0<r_*<∞. Note g_γ(t)(h,h)≤ Cg^*_γ(t)(h,h) since γ(t)∈ B(γ_0,r_*) for all t∈[0,1). We thus have
L_g(γ(t)) =∫_0^1√(g_γ(t)(D_t γ(t),D_t γ(t)))dt
<√(C)∫_0^1√(g^*_γ(t)( D_t γ(t),D_t γ(t)))dt=√(C)L_g^*(γ(t))=r_0√(C)<∞
This is a contradiction as it implies γ is of finite length in (ℳ,g).
With this in hand, we are now ready to prove Theorem <ref>.
Corollary <ref> and Lemma <ref> together imply we need only consider the metric completeness of equipped with ^2. Via Lemma <ref> it is sufficient to show any path :[0,1)→ with L_^2()<∞ has lim_t→ 1(t)∈ to show metric completeness of (,^2).
There are three cases to consider, as there are three ways for lim_t→ 1(t) to not exist in :
* Some point escapes to infinity, edge lengths stay greater than 0, and curve length does not become infinite:
∃ j such that lim_t→ 1|(t)_j|=∞, ℓ(γ(t))<∞ and |e_i|>0, ∀ i,∀ t∈[0,1].
* Some edges but not all edges shrink to length 0:
∃ i: lim_t→ 1|e_i|=0 and ∃ j: |e_j|>0, ∀ t∈[0,1].
* All edges shrink to length 0 or curve length is unbounded:
lim_t→ 1ℓ((t))=0 or lim_t→ 1ℓ((t))=∞
Below we make use of the following estimates
|D_t|(t)_i||≤ |D_t (t)_i| |D_t|e_i||≤ |D_t e_i| |D_tℓ((t))|≤∑_i=1^n|D_t e_i|.
The first two of these are derived from the same general pattern:
± D_t |v|=±⟨ D_t v,v⟩/|v|=⟨ D_t v,± v⟩/|v|≤ |D_t v|
as ± v/|v| is a unit vector. The third inequality comes from noting |D_tℓ(γ(t))|=|∑_i=1^nD_t|e_i|| and using the second inequality.
Now we proceed by cases:
(Case <ref>): We appeal to ^0. Let (t)_j be the vertex escaping to infinity. Then
L_^2((t)) ≥∫_0^1√(∑_i=1^n1/ℓ((t))^3|D_t(t)_i|^2μ_i)dt≥∫_0^11/ℓ((t))^3/2|D_t (t)_j|√(μ_j)dt≥ M∫_0^1|D_t (t)_j|dt
≥ M∫_0^1|D_t(|(t)_j|)|dt≥ M|∫_0^1D_t(|(t)_j|)dt|=M|lim_t→ 1|(t)_j|-|(0)_j||=∞,
since lim_t→ 1|(t)_j|=∞ while |(0)_j|<∞,
where M=min_i,t√(μ_i/ℓ(γ(t))^3).
(Case <ref>): We appeal to ^2. Let e_k be an edge shrinking to length 0 such that e_k-1 is not shrinking to length 0. Such a pair of consecutive edges must exist: since some edges shrink to length zero and others do not, these two subsets of edges must meet at some vertex γ(t)_k.
L_^2(c)≥ ∫_0^1√(ℓ((t)))|D_t e_k/|e_k|-D_t e_k-1/|e_k-1||1/√(μ_k)dt≥ M|∫_0^1|D_t e_k/|e_k||-|D_t e_k-1/|e_k-1||dt|
≥ M|∫_0^1|D_t e_k/|e_k||dt-∫_0^1|D_t e_k-1/|e_k-1||dt|≥ M|∫_0^1|D_t|e_k|/|e_k||dt-∫_0^1|D_t e_k-1/|e_k-1||dt|
≥ M||∫_0^1D_t|e_k|/|e_k|dt|-∫_0^1|D_t e_k-1/|e_k-1||dt|≥ M||[ln(|e_k|)]_0^1|-∫_0^1|D_t e_k-1/|e_k-1||dt|=∞,
since |[ln(|e_k|)]_0^1|=∞ while ∫_0^1|D_t e_k-1/|e_k-1||dt<∞,
where M=min_i,t√(ℓ((t))/μ_i).
(Case <ref>): We appeal to ^1.
L_^2((t)) ≥ CL_^1((t))≥ C∫_0^1√(∑_i=1^n|D_t e_i|^2/ℓ((t))|e_i|)dt≥ C∫_0^1|D_te_i|/ℓ((t))dt for each i
L_^2((t)) *≥C/n∑_i=1^n∫_0^1|D_t e_i|/ℓ((t))dt=C/n∫_0^1∑_i=1^n|D_t e_i|/ℓ((t))dt≥C/n∫_0^1|D_tℓ((t))|/ℓ((t))dt
≥C/n|∫_0^1D_tℓ((t))/ℓ((t))dt|=C/n|[ln(ℓ((t)))]_0^1|=∞
The ≥ marked * comes from the fact that the average of several underestimates is itself an underestimate.
§ GEODESICS, CURVATURE AND A COMPARISON TO KENDALL'S SHAPE SPACE
In this section we will further demonstrate the behavior of these geometries by presenting selected examples of geodesics for different choices of metrics. Finally, we will consider the special case of planar triangles, where we will compare the discrete Sobolev metrics defined in this paper with Kendall's shape metric; it is well known that under Kendall's metric the space of planar triangles reduces to a round sphere <cit.>.
All numerical examples were obtained using the programming language , where we implemented both the Riemannian exponential map and the Riemannian log map by solving the geodesic initial value problem and the geodesic boundary value problem, respectively.
[Figure: Riemannian exponential map w.r.t. the metric ^2.]
The geodesic initial value problem: To approximate the Riemannian exponential map we calculate the Christoffel symbols using the automatic differentiation capabilities of . This in turn allows us to approximate the exponential map using a simple one-step Euler method. In Figure <ref> one can see an approximated geodesic computed with initial conditions consisting of a right triangle (0,0),(0,1),(1,0) and an initial velocity of (0,1) on the third vertex and zero elsewhere. We note that an initial velocity which is non-zero only on a single vertex immediately puts multiple vertices in motion.
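A minimal version of this procedure can be sketched as follows. We assemble the Gram matrix of g^m by evaluating the bilinear form (the discrete_sobolev_metric helper from the earlier sketch) on coordinate basis vectors, and we replace automatic differentiation by crude finite differences of the metric matrix; this is a simplification on our part and not the paper's implementation.

import numpy as np

def metric_matrix(x, m, n, d):
    # (nd x nd) Gram matrix of g^m at the flattened curve x
    c, I = x.reshape(n, d), np.eye(n * d)
    return np.array([[discrete_sobolev_metric(c, I[a].reshape(n, d), I[b].reshape(n, d), m)
                      for b in range(n * d)] for a in range(n * d)])

def geodesic_ivp(c0, h0, m, steps=200, dt=5e-3, eps=1e-5):
    # explicit Euler integration of the geodesic equation x'' = -Gamma_x(x', x')
    n, d = c0.shape
    x, v = c0.astype(float).ravel(), h0.astype(float).ravel()
    for _ in range(steps):
        G = metric_matrix(x, m, n, d)
        dG = np.array([(metric_matrix(x + eps * np.eye(n * d)[a], m, n, d) - G) / eps
                       for a in range(n * d)])        # dG[a, i, j] approximates d g_ij / d x_a
        # Gamma^k_{ij} = 1/2 g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})
        T = dG + dG.transpose(1, 0, 2) - dG.transpose(1, 2, 0)
        Gamma = 0.5 * np.einsum('kl,ijl->kij', np.linalg.inv(G), T)
        acc = -np.einsum('kij,i,j->k', Gamma, v, v)
        x, v = x + dt * v, v + dt * acc
    return x.reshape(n, d)

For instance, the right triangle experiment described above corresponds to calling geodesic_ivp with c0 = np.array([[0., 0.], [0., 1.], [1., 0.]]) and h0 equal to zero except for the value (0, 1) on the third vertex.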
The geodesic boundary value problem: To approximate the Riemannian log map we employ a path-straightening algorithm; i.e., we minimize the Riemannian energy over paths of curves starting at c_0 and ending at c_1, approximated using a fixed (finite) number of intermediate curves. Thereby, we reduce the approximation of the Riemannian log map to a finite dimensional, unconstrained minimization problem, which we tackle using the implementation of L-BFGS-B, where we use a simple linear interpolation as initialization. In particular, for higher order metrics the algorithm can fail if the initialization leaves the space of PL immersions, as the metric is undefined for curves that are not in . To fix this issue it is possible to add a small amount of noise to the vertices of the initialization. We present two different examples of solutions to the geodesic boundary value problem in Figure <ref>.
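A compact path-straightening sketch, again ours and only illustrative (we assume SciPy's L-BFGS-B routine and reuse the discrete_sobolev_metric helper), reads as follows.

import numpy as np
from scipy.optimize import minimize

def log_map_path(c0, c1, m, T=10, maxiter=500):
    # approximate a minimizing path from c0 to c1 by minimizing the discrete Riemannian energy
    n, d = c0.shape

    def energy(z):
        path = np.concatenate([c0[None], z.reshape(T - 1, n, d), c1[None]])
        vel = np.diff(path, axis=0) * T               # finite-difference velocities, dt = 1/T
        return sum(discrete_sobolev_metric(path[t], vel[t], vel[t], m)
                   for t in range(T)) / T

    # linear interpolation as initialization; as noted above, a small perturbation
    # may be needed if this initialization leaves the space of regular PL curves
    z0 = c0 + np.linspace(0, 1, T + 1)[1:-1, None, None] * (c1 - c0)
    res = minimize(energy, z0.ravel(), method='L-BFGS-B', options={'maxiter': maxiter})
    return np.concatenate([c0[None], res.x.reshape(T - 1, n, d), c1[None]])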
Gaussian Curvature of the Space of Triangles and Kendall's shape metric:
A natural comparison of the discrete metrics described in this work is with the Kendall metrics of discrete shapes, which may be defined on the same space. Comparing the geodesics with respect to these metrics also highlights the nature of our completeness result.
We start by restricting ourselves to the space of triangles ℝ^2× 3_* modulo the actions of rotation, translation, and scale. This space can be identified with the surface of a sphere with three punctures (the punctures correspond to degenerate triangles in which two vertices coincide) and when equipped with the Kendall metric <cit.>, this space is famously isometric to the punctured sphere and has constant Gaussian curvature. In Figure <ref>, we display boundary value geodesics with respect to the Kendall metric as well as for the discrete Sobolev metrics for m=0,1,2. While the geodesic with respect to the Kendall metric passes directly through a discrete curve where adjacent points coincide, each of the geodesics with respect to the metrics proposed in this paper does not pass through this point in the space. Moreover, geodesics with respect to the higher-order metrics pass further from this point.
Finally, we calculated the Gaussian curvature for the space of triangles w.r.t. the same four Riemannian metrics: while the Gaussian curvature for the Kendall metric is constant, this is clearly not the case for the discrete Sobolev metrics of the present article, cf. Figure <ref>.
Indeed all of the ^m metrics exhibit negative curvature near the points corresponding to triangles with double-points (which do not correspond to elements of ), but only for higher order metrics is this negative curvature strong enough to prevent all geodesics from leaving the space in finite time. More generally, an increase in order leads to a more curved space, both in the negatively curved regions and in the positively curved regions. Finally, in Figure <ref>, we visualize the curvature along selected paths to further demonstrate the behavior at key points of interest. One can see again the increase in negative curvature near the punctures of the sphere.
§ CONSTANT COEFFICIENT SOBOLEV METRICS
Our use of the ℓ(c) terms in ^m metrics allowed for scale invariance. These terms may be removed to arrive at the constant coefficient Sobolev metrics
_c^m(h,k):=∫_S^1⟨ h,k⟩+⟨ D_s^m h,D_s^m k⟩ ds.
These lack scale invariance, but the manifolds ((S^1,^d),^m) are otherwise similar to the scale invariant formulation with identical completeness properties <cit.>. We can likewise drop the ℓ(c) terms from our discretized metrics to arrive at constant coefficient discrete metrics ^m. These also converge to the corresponding smooth ^m like the scale invariant formulation, using exactly the same proof as used for Proposition <ref>, but without the ℓ(c) terms in F. Similarly, the parallel of the completeness statement, Theorem <ref> still holds. The removal of the length terms does not impact the proofs of cases <ref> or <ref> beyond changing the relevant M, but the proof for case <ref> no longer works. We make use of the following lemma for an alternative proof, stated here in a much more narrow and simple context than in its original form.
Let (ℳ,g) be a Riemannian manifold with a weakly continuous metric and f:ℳ→ a C^1-function. Assume that for each metric ball B(y,r) in ℳ there exists a constant C, such that
|df/dh|≤ C(1+|f(x)|)h_g
holds for all x∈ B(y,r) and all h∈ T_xℳ. Then the function
f:(M,d)→ (,|·|)
is continuous and Lipschitz continuous on every metric ball. In particular, f is bounded on every metric ball.
With this we can re-prove case <ref> for the constant coefficient case.
Note that it is sufficient to show that 1/ℓ(c) and ℓ(c) are bounded on every metric ball to handle case <ref>. By the above lemma then, it is sufficient to show that
|d(1/ℓ(c))/dh|≤ C(1+|1/ℓ(c)|)h_g^2
We do this directly.
|d/dh1/ℓ(c)|=1/ℓ(c)^2d/dh∑_i=1^n |e_i|=1/ℓ(c)^2∑_i=1^n ⟨ h_i+1-h_i,e_i⟩/|e_i|≤1/ℓ(c)^2∑_i=1^n h_i+1-h_i/√(|e_i|)e_i/√(|e_i|)
≤1/ℓ(c)^2(∑_i=1^n h_i+1-h_i^2/|e_i|)^1/2(∑_i=1^n|e_i|)^1/2= 1/ℓ(c)^3/2(∑_i=1^n h_i+1-h_i^2/|e_i|)^1/2≤1/ℓ(c)^3/2√(^2_c(h,h))
So that at any particular c the constant C=√(ℓ(c)) yields the desired inequality. We can show via similar work that √(ℓ(c)) is globally Lipschitz:
|d/dh√(ℓ(c))|≤1/√(ℓ(c))(∑_i=1^n h_i+1-h_i^2/|e_i|)^1/2(∑_i=1^n|e_i|)^1/2=(∑_i=1^n h_i+1-h_i^2/|e_i|)^1/2≤√(^2_c(h,h)).
Thus we can indeed choose a C for any metric ball as needed for the lemma for 1/ℓ(c). Hence 1/ℓ(c) and ℓ(c) are bounded on all metric balls, and therefore no finite-length path γ(t) can satisfy lim_t→ 1ℓ((t))=0 or lim_t→ 1ℓ((t))=∞.
The item most complicated by the removal of the length terms is the relation between ^m_c(h,h) and ^m+1_c(h,h). As ^m_c=^m_c·ℓ(c)^-3+2m the statement of Lemma <ref> here becomes
^m_c(h,h)≤ℓ(c)^2/4^m+1_c(h,h)
which is a parallel to the equivalent statement for the smooth constant coefficient case as seen in the proof of Lemma 2.13 of <cit.>.
§ APPROXIMATING DERIVATIVES IN L^∞
Let f be a bounded function in C^∞(S^1). Let θ_i=i/n and define
I_n^0:C^∞(S^1)→ L^∞(S^1)
so that I_n^0(f)=g_0 where g_0 is the piecewise constant function where g_0(θ)=f(θ_i) for θ∈[θ_i,θ_i+1). Further define I_n^1(f)=g_1 where g_1(θ)=n(f(θ_i+1)-f(θ_i)) for θ∈[θ_i,θ_i+1). Then
lim_n→∞I_n^0(f)-f_L^∞=0 and lim_n→∞I_n^1(f)-f'_L^∞=0.
Note that S^1 is compact so that our smooth, bounded function f is Lipschitz continuous. Now if K is our Lipschitz constant then 0≤sup_θ∈[θ_i,θ_i+1)|g_0(θ)-f(θ)|≤K/n but this does not depend on i. This means that as n→∞ this supremum goes to 0 and thus also
lim_n→∞g_0(θ)-f(θ)_L^∞=lim_n→∞max_i=1,…,nesssup_θ∈[θ_i,θ_i+1)|g_0(θ)-f(θ)|= 0.
For the second operator we choose n Lipschitz constants, one for each i with K_i=sup_θ∈[θ_i,θ_i+1)f'(θ) being the constant we will use for [θ_i,θ_i+1). This means sup f(θ_i+1)-f(θ_i)≤K_i/n. We note immediately as n→∞ K_i→ f'(θ_i) since f is smooth. Leaning on this fact makes the work swift:
I_n^1(f)-f'_L^∞ =g(θ)-f'(θ)_L^∞=max_i=1,…,nn·(f(θ_i+1)-f(θ_i))-f'_L^∞
≤max_i=1,…,nn·(K_i/n)-f'_L^∞=max_i=1,…,nK_i-f'_L^∞→ 0 as n→∞
Define Ī_n^1(f) analogously to I_n^1(f), but with g_1(θ)=n· (f(θ_i)-f(θ_i-1)) for θ∈[θ_i,θ_i+1), and note that this backward-difference version also satisfies lim_n→∞Ī_n^1(f)-f'_L^∞=0.
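As a small numerical illustration of this lemma (ours, not part of the original argument), the sup-norm convergence of the forward difference operator can be checked directly on a smooth test function:

import numpy as np

f = lambda t: np.sin(2 * np.pi * t)                    # smooth function on S^1
df = lambda t: 2 * np.pi * np.cos(2 * np.pi * t)

for n in (10, 100, 1000, 10000):
    theta = np.arange(n) / n
    forward = n * (f((theta + 1.0 / n) % 1.0) - f(theta))   # values of I_n^1(f)
    print(n, np.max(np.abs(forward - df(theta))))           # error at the grid points, O(1/n)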
§ PROOF OF LEMMA <REF>
Before we are able to present the proof of Lemma <ref> we will need an additional technical Lemma:
Let m≥ 1, let c∈ and h∈ T_c≅, we then have
max_k∈ℤ/nℤ|D_s^m-1(h_k+1)-D_s^m-1(h_k)/|e_k||≤1/2∑_i=1^n|D_s^m-1(h_i+1)-D_s^m-1(h_i)/|e_i|-D_s^m-1(h_i)-D_s^m-1(h_i-1)/|e_i-1||
for odd m and
max_k∈ℤ/nℤ|D_s^m-1(h_k)-D_s^m-1(h_k-1)/μ_k|≤1/2∑_i=1^n|D_s^m-1(h_i+1)-D_s^m-1(h_i)/μ_i+1-D_s^m-1(h_i)-D_s^m-1(h_i-1)/μ_i|
for even m.
This Lemma follows by direct estimation. Therefore, let k∈ℤ/nℤ and note that ∑_j=1^n(D_s(h_j+1)-D_s(h_j))=0 as each term appears in the sum both in positive form and negative. Then, for odd m,
|D_s^m-1(h_k+1)-D_s^m-1(h_k)/|e_k||
= |1/ℓ(c)∑_j=1^n(D_s^m-1(h_j+1)-D_s^m-1(h_j))-D_s^m-1(h_k+1)-D_s^m-1(h_k)/|e_k||
=|1/ℓ(c)∑_j=1^n(D_s^m-1(h_j+1)-D_s^m-1(h_j))-1/ℓ(c)∑_j=1^nD_s^m-1(h_k+1)-D_s^m-1(h_k)/|e_k||e_j||
≤|1/ℓ(c)∑_j=1^n(D_s^m-1(h_j+1)-D_s^m-1(h_j)/|e_j|-D_s^m-1(h_k+1)-D_s^m-1(h_k)/|e_k|)|e_j||
≤|1/ℓ(c)∑_j=1^n1/2(∑_i=k^j-1(D_s^m-1(h_i+1)-D_s^m-1(h_i)/|e_i|-D_s^m-1(h_i)-D_s^m-1(h_i-1)/|e_i-1|)..
.. -∑_i=j^k-1(D_s^m-1(h_i+1)-D_s^m-1(h_i)/|e_i|-D_s^m-1(h_i)-D_s^m-1(h_i-1)/|e_i-1|))|e_j||
≤1/2ℓ(c)∑_j=1^n∑_i=1^n|D_s^m-1(h_i+1)-D_s^m-1(h_i)/|e_i|-D_s^m-1(h_i)-D_s^m-1(h_i-1)/|e_i-1|||e_j|
≤1/2∑_i=1^n|D_s^m-1(h_i+1)-D_s^m-1(h_i)/|e_i|-D_s^m-1(h_i)-D_s^m-1(h_i-1)/|e_i-1||
The case for even m is similar, swapping |e_i| for μ_i and rotating some indices.
|D_s^m-1(h_k)-D_s^m-1(h_k-1)/μ_k|
= |1/ℓ(c)∑_j=1^n(D_s^m-1(h_j)-D_s^m-1(h_j-1))-D_s^m-1(h_k)-D_s^m-1(h_k-1)/μ_k|
=|1/ℓ(c)∑_j=1^n(D_s^m-1(h_j)-D_s^m-1(h_j-1))-1/ℓ(c)∑_j=1^nD_s^m-1(h_k)-D_s^m-1(h_k-1)/μ_kμ_j|
≤|1/ℓ(c)∑_j=1^n(D_s^m-1(h_j)-D_s^m-1(h_j-1)/μ_j-D_s^m-1(h_k)-D_s^m-1(h_k-1)/μ_k)μ_j|
≤|1/ℓ(c)∑_j=1^n1/2(∑_i=k^j-1(D_s^m-1(h_i+1)-D_s^m-1(h_i)/μ_i+1-D_s^m-1(h_i)-D_s^m-1(h_i-1)/μ_i)..
.. -∑_i=j^k-1(D_s^m-1(h_i+1)-D_s^m-1(h_i)/μ_i+1-D_s^m-1(h_i)-D_s^m-1(h_i-1)/μ_i))μ_j|
≤1/2ℓ(c)∑_j=1^n∑_i=1^n|D_s^m-1(h_i+1)-D_s^m-1(h_i)/μ_i+1-D_s^m-1(h_i)-D_s^m-1(h_i-1)/μ_i|μ_j
≤1/2∑_i=1^n|D_s^m-1(h_i+1)-D_s^m-1(h_i)/μ_i+1-D_s^m-1(h_i)-D_s^m-1(h_i-1)/μ_i|
This proves the lemma.
We consider the odd case which is not meaningfully different from the even one. Following from Lemma <ref>, we have
max_k∈ℤ/nℤ|D_s^m-1(h_k+1)-D_s^m-1(h_k)/|e_k||^2
≤1/4(∑_i=1^n|D_s^m-1h_i+1-D_s^m-1(h_i)/|e_i|-D_s^m-1(h_i)-D_s^m-1(h_i-1)/|e_i-1||)^2
≤1/4(∑_i=1^n|D_s^m-1(h_i+1)-D_s^m-1(h_i)/|e_i|-D_s^m-1(h_i)-D_s^m-1(h_i-1)/|e_i-1||^2 1/μ_i)(∑_i=1^nμ_i )
=ℓ(c)/4(∑_i=1^n|D_s^m-1(h_i+1)-D_s^m-1(h_i)/|e_i|-D_s^m-1(h_i)-D_s^m-1(h_i-1)/|e_i-1||^2 1/μ_i).
Then, due to Hölder's inequality and the above, we conclude
_c^m(h,h) =∑_i=1^nℓ(c)^-3+2m|D_s^m(h_i)|^2|e_i|
=∑_i=1^nℓ^-3+2m|D_s^m-1(h_i+1)-D_s^m-1(h_i)|^2/|e_i|
≤ℓ(c)^-3+2m(max_k∈ℤ/nℤ|D_s^m-1(h_k+1)-D_s^m-1(h_k)/|e_k||^2)(∑_k=1^n|e_k|)
≤ℓ(c)^-3+2mℓ(c)^2/4∑_i=1^n|D_s^m-1(h_i+1)-D_s^m-1(h_i)/|e_i|-D_s^m-1(h_i)-D_s^m-1(h_i-1)/|e_i-1||^2 1/μ_i
=ℓ(c)^-3+2(m+1)/4∑_i=1^n|D_s^m+1(h_i)|^2μ_i=1/4_c^m+1(h,h).
The argument for the even case is symmetric, and we omit it.
|
http://arxiv.org/abs/2409.03407v1 | 20240905104507 | Tight minimum degree condition to guarantee $C_{2k+1}$-free graphs to be $r$-partite | [
"Zilong Yan",
"Yuejian Peng",
"Xiaoli Yuan"
] | math.CO | [
"math.CO"
] |
Tight minimum degree condition to guarantee C_2k+1-free graphs to be r-partite
Zilong Yan [School of Mathematics, Hunan University,
Changsha 410082, P. R. China. E-mail: [email protected].] Yuejian Peng [Corresponding author. School of Mathematics, Hunan University, Changsha 410082, P. R. China. E-mail: [email protected]. Supported in part by National Natural Science Foundation of China (No. 11931002)] Xiaoli Yuan [School of Mathematics, Hunan University,
Changsha 410082, P. R. China. E-mail: [email protected].]
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Erdős and Simonovits asked the following question: For an integer c≥ 2 and a family of graphs ℱ, what is the infimum of α such that any ℱ-free n-vertex graph with minimum degree at least α n has chromatic number at most c? Denote the infimum as δ_χ(ℱ, c). A fundamental result of
Erdős, Stone and Simonovits <cit.> implies that for any c≤ r-1,
δ_χ(ℱ, c)=1-1/r, where r+1=χ(ℱ)=min{χ (F): F∈ℱ}. So the remaining challenge is to determine δ_χ(ℱ, c) for c≥χ (ℱ)-1. Most previously known results are under the condition c= χ (ℱ)-1. When c≥χ (ℱ), the only known exact results are δ_χ(K_3, 3) by Häggkvist and Jin, δ_χ(K_3, c) for every c≥4 by Brandt and Thomassé, and δ_χ(K_r, r) and δ_χ(K_r, r+1) by Nikiforov. In this paper, we focus on odd cycles. Combining results of Thomassen and Ma, we have Ω((c+1)^-8(k+1))=δ_χ(C_2k+1, c)=O(k/c) for c≥ 3. In this paper, we determine δ_χ(C_2k+1, c) for all c≥ 2 and k≥ 3c+4 (k≥ 5 if c=2). We also obtain the following corollary. If G is a non-c-partite graph on n vertices with c≥ 3 and δ(G)> n/(2c+2), then C_2k+1⊂ G for all k∈ [3c+4, n/(108(c+1)^c)].
Keywords: minimum degree; odd cycle; chromatic profile, stability, Turán number.
§ INTRODUCTION
For a graph G, V(G) and E(G) denote the vertex set and the edge set of G respectively.
Denote the minimum degree and the chromatic number of a graph G as δ(G) and χ(G) respectively. For a graph F, what is the minimum f(n, F) such that a graph G on n≥| V(F)| vertices with the minimum degree δ(G)≥ f(n, F) must contain a copy of F?
This question has had a long history (for example, well-known Dirac Theorem) and has always attracted a lot of attentions and has stimulated the development of graph theory. For example, the existence of cycles in graphs with sufficiently large minimum degree has been intensively studied (see <cit.>). In an influential paper, Erdős and Simonovits <cit.> asked what happens if we know some information on the chromatic number χ(G). Precisely, for a graph F and a positive integer c, what is the minimum f(n, F, c) such that a graph G on n vertices with minimum degree δ(G)≥ f(n, F, c) and χ(G)>c must contain a copy of F?
This question has been stimulating the general study of the so-called chromatic profile. Let us define it below.
Let ℱ be a family of graphs, a graph G is ℱ-free if G does not contain any member of ℱ as a subgraph. Call ℱ non-degenerate if χ(F)>2 for each F∈ℱ.
For a positive integer c≥ 2, the chromatic profile of a non-degenerate family ℱ as a function in c is defined to be
δ_χ(ℱ, c)=inf{α| Any ℱ-free graph G on n vertices with δ(G)≥α n must satisfy χ(G)≤ c.}.
If ℱ consists of a single graph F, we simply write δ_χ({F}, c) as δ_χ(F, c), and write {F}-free as F-free.
The study of the chromatic profile is also related to the stability of Turán problems. For a family ℱ of k-uniform graphs and a positive integer n, the Turán number ex(n, ℱ) is the maximum number of edges an ℱ-free k-uniform graph on n vertices can have. An averaging argument of Katona, Nemetz and Simonovits <cit.> shows that the sequence ex(n,ℱ)/n k, where n k denotes the binomial coefficient, is non-increasing. Hence lim_n→∞ex(n, ℱ)/n k exists. The Turán density of ℱ is defined as π(ℱ)=lim_n→∞ex(n, ℱ)/n k.
If ℱ consists of a single k-uniform graph F, we simply write ex(n, {F}) and π({F}) as ex(n, F) and π(F). Let K_r+1 be the complete graph on r+1 vertices.
Let T_r(n) denote the complete r-partite graph on n vertices where
its part sizes are as equal as possible.
Turán <cit.> obtained that if G is a K_r+1-free graph on n vertices,
then e(G)≤ e(T_r(n)), equality holds if and only if G=T_r(n).
A fundamental result of
Erdős, Stone and Simonovits <cit.> gives the asymptotical value of ex(n, ℱ) for all non-degenerate families of graphs. For a family ℱ of graphs, let χ(ℱ)=min{χ (F): F∈ℱ}.
If ℱ is a family of graphs with
χ (ℱ)=r+1, then
ex(n,ℱ) =e(T_r(n)) + o(n^2)= ( 1-1/r + o(1)) n^2/2.
Erdős <cit.> and Simonovits <cit.> also obtained
a stronger structural theorem of Theorem <ref> and
discovered a certain stability phenomenon.
Let ℱ be a family of graphs with
χ (ℱ)=r+1≥ 3.
For every ε >0,
there exist δ >0 and n_0 such that
if G is a graph on n≥ n_0 vertices,
and G is ℱ-free such that
e(G)≥ (1- 1/r - δ) n^2/2,
then G can be obtained from T_r(n)
by adding or deleting a total number of at most ε n^2 edges.
Liu, Mubayi and Reiher <cit.> also defined the degree-stability of Turán problems for k-uniform graphs which implies the edge-stability (as in Theorem <ref>).
Let ℱ be a non-degenerate family of k-uniform graphs (i.e. π(ℱ)>0). Let 𝒦 be a class of ℱ-free k-uniform graphs. If there exists ϵ >0 and n_0 such that every ℱ-free k-uniform graph G on n≥ n_0 vertices with δ(G)≥ (π(ℱ)-ϵ)n k-1 is a subgraph of some member of 𝒦, then we say that ℱ is degree-stable with respect to 𝒦. It is not hard to see that the degree-stability implies the edge-stability (as in Theorem <ref>) since we can delete vertices with `small' degrees by losing o(n^k-1) edges.
If ℱ is a family of graphs with
χ (ℱ)=r+1≥ 3, then by Theorem <ref>, any ℱ-free graph with n vertices has minimum degree at most (1- 1/r +o(1))n. On the other hand, the Turán graph T_r(n) is ℱ-free with chromatic number r and δ(T_r(n))=(1- 1/r +o(1))n. Therefore, for any c≤ r-1,
δ_χ(ℱ, c)=1-1 r. So the remaining challenge is to determine δ_χ(ℱ, c) for c≥χ (ℱ)-1.
We can define the chromatic profile for a family k-uniform graphs similarly (one can also modify the minimum degree condition to minimum co-degree condition). Following from the definitions, we can see that for a non-degenerate family ℱ of k-uniform graphs (π(ℱ)>0), ℱ is degree-stable with respect to the family of all r-colorable k-uniform graphs if and only if δ_χ(ℱ, r)≤π(ℱ).
The first study of the chromatic profile is due to Andrásfai, Erdős and Sós <cit.>, who showed that if G is a K_r+1-free graph on n vertices with δ(G)>3r-4/3r-1n, then χ(G)≤ r. Furthermore they gave an example G with δ(G)=3r-4/3r-1n and χ(G)=r+1. In other words, they showed that δ_χ(K_r+1, r)=3r-4/3r-1. Let C_k denote a cycle with k vertices. In the same paper, Andrásfai, Erdős and Sós <cit.> showed that δ_χ({C_3,C_5,⋯, C_2k+1}, 2)=2/2k+3. Häggkvist <cit.> showed that δ_χ(C_2k+1, 2)=2/2k+3 for k∈{1,2,3,4}. Recently, Yuan and Peng <cit.> obtained the chromatic profile for any family consisting of some odd cycles; they showed that for a family 𝒞 of odd cycles in which C_2p+1 is the shortest odd cycle not in 𝒞, and C_2k+1 is the longest odd cycle in 𝒞, we have δ_χ(𝒞, 2)=max{1/2(2p+1), 2/2k+3}. More precisely, the authors showed the following result.
<cit.>
Let n≥ 1000k^8 be positive integers. Let 𝒞 be a family of some odd cycles in which C_2p+1 is the shortest odd cycle not in 𝒞 and C_2k+1 is the longest odd cycle in 𝒞. Let BC_2p+1(n) denote the graph obtained by taking 2p+1 vertex-disjoint copies of K_n/2(2p+1),n/2(2p+1) and selecting a vertex in each of them such that these vertices form a cycle of length 2p+1. Let C_2k+3(n 2k+3) denote the balanced blow up of C_2k+3 with n vertices. If G is an n-vertex 𝒞-free graph with δ(G)>max{n/2(2p+1), 2/2k+3n}, then G is bipartite. Graphs BC_2p+1(n) and C_2k+3(n 2k+3) indicate that the bound is tight. Furthermore, the only n-vertex 𝒞-free non-bipartite graph with minimum degree max{n/2(2p+1), 2/2k+3n}=n/2(2p+1) is BC_2p+1(n), and the only n-vertex 𝒞-free non-bipartite graph with minimum degree max{n/2(2p+1), 2/2k+3n}=2/2k+3n is C_2k+3(n 2k+3).
When c≥χ (ℱ), we know very few results on δ_χ(ℱ, c). The only known exact results are δ_χ(K_3, 3)=10/29 by Häggkvist <cit.> and Jin <cit.>, and δ_χ(K_3, c)=1/3 for every c≥4 by Brandt and Thomassé <cit.>, δ_χ(K_r+1, r+1)=1-19/19r-9 and δ_χ(K_r+1, r+2)=1-2/2r-1 by Nikiforov <cit.>. Thomassen <cit.> showed that δ_χ(C_5, c)≤6/c and gave an upper bound for δ_χ(C_2k+1, c), together with the recent result of Ma <cit.> (it is not the main focus in <cit.>), we have
Ω((c+1)^-8(k+1))=δ_χ(C_2k+1, c)=O(k/c).
Recently, Böttcher, Frankl, Cecchelli, Parzcyk and Skokan <cit.> showed that δ_χ({C_3,C_5,⋯, C_2k-1},3)≤1/2k+t for k≥ 45t+5490. Van Ngoc and Tuza <cit.> showed that δ_χ({C_3,C_5,⋯, C_2k-1},3)≥3/2k^2+k+1. In <cit.>, they gave a very detailed introduction about the development of this question.
In this paper we come up with a different method to determine δ_χ(𝒞, r) for all r≥ 3, 2p≥ r+1 and k≥ 3r+4, where 𝒞 is a family of some odd cycles, C_2p+1 is the shortest odd cycle in 𝒞, and C_2k+1 is the longest odd cycle in 𝒞. In particular, we can determine δ_χ(C_2k+1, r) for all r≥ 3 and k≥ 3r+4.
We also improve the requirement on n in Theorem <ref> when k≥ 4p+1.
Let us state our main results precisely.
Let G_r+1 denote the graph obtained by taking r+1 vertex-disjoint copies of K_n/2(r+1),n/2(r+1) and selecting a vertex in each of them such that these vertices form a K_r+1. See the figure below.
Recall that Häggkvist <cit.> showed that δ_χ(C_2k+1, 2)=2/2k+3 for k∈{1,2,3,4}. In the same paper, Häggkvist pointed out that this result cannot be extended to k≥ 5 by giving a counterexample, namely G_3.
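For concreteness, the graph G_r+1 is easy to assemble and check by computer. The sketch below is our own illustration (Python with networkx, illustrative sizes only); it verifies that the minimum degree equals n/(2r+2), that the graph contains K_r+1 and hence is not r-partite, and that deleting the r+1 selected vertices leaves a bipartite graph, so every odd cycle of G_r+1 lives in the clique and has length at most r+1.

import networkx as nx

def build_G(r, a):
    # G_{r+1}: r+1 disjoint copies of K_{a,a}, one selected vertex per copy,
    # and the selected vertices joined into a clique K_{r+1}; here n = 2a(r+1)
    G, hubs = nx.Graph(), []
    for i in range(r + 1):
        left = [(i, 'L', j) for j in range(a)]
        right = [(i, 'R', j) for j in range(a)]
        G.add_edges_from((u, v) for u in left for v in right)   # copy of K_{a,a}
        hubs.append(left[0])                                    # selected vertex
    G.add_edges_from((hubs[i], hubs[j])
                     for i in range(r + 1) for j in range(i + 1, r + 1))
    return G, hubs

r, a = 3, 6                                   # illustrative parameters only
G, hubs = build_G(r, a)
n = G.number_of_nodes()                       # n = 2a(r+1)
assert min(d for _, d in G.degree()) == a == n // (2 * r + 2)
assert max(len(c) for c in nx.find_cliques(G)) == r + 1
assert nx.is_bipartite(G.subgraph(set(G) - set(hubs)))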
Our main result is the following.
Let r, k and n be integers with r≥ 3, k≥ 3r+4 and n≥ 108(r+1)^rk. Let G be an n-vertex C_2k+1-free graph. If
δ(G)≥n/2r+2,
then G is r-partite, or G=G_r+1.
Theorem <ref> implies the following.
δ_χ(C_2k+1, r)=1/2r+2 for r≥ 3 and k≥ 3r+4.
Let r≥ 3 be an integer. If G≠ G_r+1 is a non r-partite n-vertex graph with
δ(G)≥n/2r+2,
then C_2k+1⊂ G for all k∈ [3r+4, 1/108(r+1)^rn].
Let r≥ 3 and 𝒞 be a family consisting of some odd cycles in which C_2p+1 is the shortest cycle in 𝒞 and C_2k+1 is the longest cycle in 𝒞 satisfying 2p≥ r+1 and k≥ 3r+4. Clearly δ_χ(𝒞, r)≤δ_χ(C_2k+1, r)=1/2r+2. On the other hand, G_r+1 does not contain any odd cycle longer than r+1, and the shortest cycle in 𝒞 has length 2p+1> r+1, so G_r+1 is 𝒞-free. Note that G_r+1 has the minimum degree n/2r+2 and χ(G_r+1)=r+1, thus, δ_χ(𝒞, r)≥1/2r+2. To summarize, we have the following corollary.
Let r≥ 3 and 𝒞 be a family of odd cycles in which C_2p+1 is the shortest cycle in 𝒞 and C_2k+1 is the longest cycle in 𝒞 satisfying 2p≥ r+1 and k≥ 3r+4. Then
δ_χ(𝒞, r)=1/2r+2.
For r=2 and any family consisting of some odd cycles, we obtain an improvement on the condition of n in Theorem <ref> when k is large relative to p.
Let k≥ 4p+1 and n≥ 108(2p+1)^2pk be positive integers. Let 𝒞 be a family of odd cycles in which C_2p+1 is the shortest odd cycle not in 𝒞 and C_2k+1 is the longest odd cycle in 𝒞. Let BC_2p+1(n) denote the graph obtained by taking 2p+1 vertex-disjoint copies of K_n/2(2p+1),n/2(2p+1) and selecting a vertex in each of them such that these vertices form a cycle of length 2p+1. If G is an n-vertex 𝒞-free graph with δ(G)≥n/2(2p+1), then G is bipartite or G=BC_2p+1(n).
Note that max{n/2(2p+1), 2/2k+3n}=n/2(2p+1) if and only if k≥ 4p+1, thus
the condition k≥ 4p+1 in Theorem <ref> is tight.
We follow standard notations through. For S⊆ V(G), G[S] denotes the subgraph of G induced by S. Let N_S(v) denote the set of all neighbors of v in G[S], and d_S(v)=| N_S(v)|. We use e(S) to denote the number of edges in G[S].
For T⊆ V(G), let G-T denote the subgraph induced by V(G)-T, N_G(T) denotes the union of the neighborhoods of all vertices of T in G. For disjoint X,Y⊆ V(G), G[X,Y] denotes the bipartite subgraph of G induced by X and Y, i.e. G[X,Y] consists of all edges incident to one vertex in X and one vertex in Y. Use e(X,Y) to denote the number of edges in G[X,Y].
Throughout the paper, P_xy denotes a path with end points x and y. We call a path P an even (odd) path if |V(P)| is even (odd).
Let H be a subgraph of G. We call H a 2k-core of G if for each pair of vertices x, y∈ V(H) there exists an even path P_xy in H of order (the number of vertices) at most 2k. Furthermore, we call H a strong-2k-core of G if for each pair of vertices x, y∈ V(H), there exists an even path P_xy in H of order at most 2k and there exists an odd path P'_xy in H of order at most 2k.
The concept strong-2k-core introduced in this paper is crucial to the proof of both Theorems <ref> and <ref>. Methods used in <cit.> and other related papers cannot be applied to obtain Theorems <ref> and <ref>.
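On small instances the defining condition is easy to test by brute force; the sketch below (our illustration, feasible only for small graphs since it enumerates simple paths with networkx) checks whether a vertex set induces a strong-2k-core.

import networkx as nx

def is_strong_2k_core(G, H_nodes, k):
    # every pair x, y must admit both an even-order and an odd-order simple path
    # inside G[H] with at most 2k vertices (cutoff counts edges, so 2k - 1)
    H = G.subgraph(H_nodes)
    for x in H:
        for y in H:
            if x == y:
                continue
            orders = {len(p) for p in nx.all_simple_paths(H, x, y, cutoff=2 * k - 1)}
            if not any(o % 2 == 0 for o in orders) or not any(o % 2 == 1 for o in orders):
                return False
    return True

C = nx.cycle_graph(7)                       # an odd cycle of length at most 2k - 1 ...
print(is_strong_2k_core(C, C.nodes, k=5))   # ... is a strong-2k-core, as used below: True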
§ PROOFS OF THEOREMS <REF> AND <REF>
We first prove some crucial lemmas.
Let r≥ 2, k and n be integers with n≥108(r+1)^rk and k≥ f(r), where f(r)=2r+1 if r=2 and f(r)=3r+4 if r≥ 3. Let G be a C_2k+1-free graph on n vertices with δ(G)≥n/2r+2, then for any even path P_xy of order at most 2k, we have |(N(x)∩ N(y))∖ V(P_xy)|≤ 15r.
Proof of Lemma <ref>. We show that if there exists an even path P_xy of order at most 2k satisfying |(N(x)∩ N(y))∖ V(P_xy)|>15r, then there is a C_2k+1 in G. We prove it by induction on 2k-|V(P_xy)|. If |V(P_xy)|=2k, then it is clear that there is a C_2k+1 in G if |(N(x)∩ N(y))∖ V(P_xy)|≥1. Suppose that it holds for 2k≥ |V(P_xy)|≥ 2l+2. Next, we will show that it holds for |V(P_xy)|=2l. Assume that there exist an even path P_xy of order 2l such that |(N(x)∩ N(y))∖ V(P_xy)|> 15r. Let U⊂ (N(x)∩ N(y))∖ V(P_xy) with |U|=15r. If there exists u, v∈ U such that |(N(u)∩ N(v))∖ V(P_xy)|> 15r, then there is an even path P_uv=uP_xyv of order 2l+2 such that |(N(u)∩ N(v))∖ V(P_uv)|> 15r. By induction hypothesis there is a C_2k+1 in G. Therefore we may assume that |(N(u)∩ N(v))∖ V(P_xy)|≤ 15r for any u, v∈ U. Combining with δ(G)≥n/2r+2, we have
| V(G)| ≥ (n/2r+2-|V(P_xy)|)|U|-15r·|U|(|U|-1)/2
≥ (n/2r+2-2k)· 15r-(15r)^3/2
> n
by k≥ f(r), n≥ 108(r+1)^rk and direct calculation.
This is a contradiction.
□
Let r≥ 2, k and n be integers with n≥108(r+1)^rk and k≥ f(r), where f(r)=2r+1 if r=2 and f(r)=3r+4 if r≥ 3.
Let G be a C_2k+1-free graph on n vertices with δ(G)≥n/2r+2. If H is a 2k-core of G, then |V(H)|≤ 2r+2.
Proof of Lemma <ref>. Suppose that |V(H)|≥ 2r+3, then take U⊂ V(H) such that |U|=2r+3. Since H is a 2k-core of G, by Lemma <ref>, |(N(x)∩ N(y))∖ V(P_xy)|≤ 15r for x, y∈ U, where P_xy is an even path of order at most 2k. Therefore, for r≥ 3, |N(x)∩ N(y)|≤ 15r+2k-2≤ 7k since k≥ 3r+4. Combining with δ(G)≥n/2r+2, we have
| V(G)| ≥ n/2r+2|U|-7k·|U|(|U|-1)/2
= (2r+3)n/2r+2-7k(r+1)(2r+3)=n+n/2r+2-7k(r+1)(2r+3)
> n
by n≥ 108(r+1)^rk and direct calculation.
This is a contradiction.
For r=2, we have |N(x)∩ N(y)|≤ 15r+2k-2=2k+28. Combining with δ(G)≥n/2r+2=n/6, we have
| V(G)| ≥ n/6|U|-(2k+28)·|U|(|U|-1)/2
= 7n/6-21(2k+28)=n+n/6-(42k+588)
> n
by n≥ 108(r+1)^rk, k≥ 5 and direct calculation.
This is a contradiction. □
The following observation is easy to see, but it is very important to our proof.
Let H be a strong-2k-core of G with |V(H)|=l≤ 2k-2. If there exists an even path P_uv⊂ V(G)∖ V(H) such that xu∈ E(G) and xv∈ E(G) for some x∈ V(H) with |V(P_uv)|≤ 2k-l, then V(H)∪ V(P_uv) is a strong-2k-core of G. If there exists a path P_uv⊂ V(G)∖ V(H) (it is possible that u=v) such that xu∈ E(G) and yv∈ E(G) for some x, y∈ V(H) with |V(P_uv)|≤ 2k-l, then V(H)∪ V(P_uv) is a strong-2k-core of G.
Let r≥ 2, k and n be integers with n≥108(r+1)^rk and k≥ f(r), where f(r)=2r+1 if r=2 and f(r)=3r+4 if r≥ 3.
Let G be a C_2k+1-free graph on n vertices with δ(G)≥n/2r+2. If H is a strong-2k-core of G, then |V(H)|≤ r+1.
Proof of Lemma <ref>. Let H be a maximum strong-2k-core of G, i.e., |V(H)| has the maximum cardinality among all strong-2k-cores of G. Suppose on the contrary that V(H)={x_1, x_2,…, x_l} with l≥ r+2. By Lemma <ref>, we have that r+2≤ l≤ 2r+2. Let
N_i=N(x_i)∖ V(H) for 1≤ i≤ l.
By the maximality of |V(H)| and Fact <ref>, we have that
N_i is an independent set for 1≤ i≤ l and N_i∩ N_j=∅ for 1≤ i<j≤ l.
Fix y_i∈ N_i for 1≤ i≤ l and let
N_i^*=N(y_i)∖ V(H).
Claim <ref> says that N_i is an independent set, so we have that
N^*_i∩ N_i =∅ for 1≤ i≤ l.
By the maximality of |V(H)| and Fact <ref>, we have that
N^*_i∩ N_j=∅ and N^*_i∩ N^*_j=∅ for 1≤ i<j≤ l.
Since δ(G)≥n/2r+2, we have that |N_i|≥n/2r+2-l. Claim <ref> says that N_i∩ N_j=∅ for 1≤ i<j≤ l, this implies that |N(y_i)∩ H|=1, so |N^*_i|≥n/2r+2-1 for 1≤ i≤ l. Combining with Claims <ref>, <ref>, <ref>, we have
|V(G)|≥∑_i=1^l(|N_i|+|N^*_i|)≥ (r+2)(n/r+1-(2r+3))=n+n/r+1-(r+2)(2r+3)>n
since n≥ 108(r+1)^rk and k≥ f(r).
A contradiction. □
Note that an odd cycle with length less than 2k+1 in a graph G is a strong-2k-core of G. Next we show that there exists an odd cycle with length less than 2k+1 in a graph G satisfying the conditions in Theorem <ref> or Theorem <ref>.
Let r≥ 2, k and n be positive integers.
If G is a C_2k+1-free non-bipartite graph on n vertices with δ(G)≥n/2r+2, then G contains an odd cycle with length no more than 2(2r+1)+1.
Proof of Lemma <ref>.
Let C_2m+1=v_1v_2⋯ v_2m+1v_1 be a shortest odd cycle of G. Let
G'=G-V(C_2m+1).
For any vertex v∈ V(G'), we have d_C_2m+1(v)≤ 2 provided m≥ 2.
Proof of Claim <ref>. Let m≥ 2. Suppose on the contrary that there exists a vertex x∈ V(G') such that d_C_2m+1(x)≥ 3, and let {v_i,v_j,v_q}⊆ N_C_2m+1(x), where 1≤ i<j<q≤ 2m+1. We claim that no two vertices of {v_i,v_j,v_q} are adjacent. Otherwise, without loss of generality, assume that v_iv_j∈ E(G); then xv_iv_jx is a copy of C_3, a contradiction to C_2m+1 being a shortest odd cycle. Moreover, C_2m+1 is divided into three paths by {v_i,v_j,v_q}; since C_2m+1 is an odd cycle of G, at least one of them is an even path (so the length is odd). Without loss of generality, assume that v_qv_q+1⋯ v_2m+1v_1⋯ v_i is an even path of C_2m+1. We have shown that no two vertices of {v_i,v_j,v_q} are adjacent, so v_iv_i+1⋯ v_jv_j+1⋯ v_q is an odd path with at least 5 vertices; then we use the odd path v_ixv_q to replace the odd path v_iv_i+1⋯ v_jv_j+1⋯ v_q of C_2m+1 to get a shorter odd cycle v_ixv_qv_q+1⋯ v_2m+1v_1⋯ v_i, a contradiction. This completes the proof of Claim <ref>. □
Since C_2m+1 is a shortest odd cycle of G, it does not contain any chord. By Claim <ref> and δ(G)≥n/2r+2, for m≥ 2,
(2m+1)· (n/2r+2-2)≤ e(C_2m+1,G')≤ 2· (n-2m-1).
This implies that m< 2r+2. Since m is an integer, m≤ 2r+1. This completes the proof of Lemma <ref>. □
If G is a C_2k+1-free non-bipartite graph on n vertices with δ(G)≥n/2r+2 and k≥ 2r+1, then by Lemma <ref>, G contains an odd cycle with length no more than 2k-1. Therefore, G contains a strong-2k-core with at least 3 vertices. Combining with Lemma <ref>, we have the following corollary.
Let r≥ 2, k and n be positive integers with k≥ 2r+1.
If G is a C_2k+1-free non-bipartite graph on n vertices with δ(G)≥n/2r+2, then G contains a strong-2k-core H with 3≤| V(H)|≤ r+1.
Now we are ready to give structural information on graphs satisfying the conditions in Theorem <ref> or Theorem <ref>.
Let r≥ 2, k and n be integers with n≥108(r+1)^rk and k≥ f(r), where f(r)=2r+1 if r=2 and f(r)=3r+4 if r≥ 3.
Let G be a non-bipartite C_2k+1-free graph on n vertices with δ(G)≥n/2r+2. Let H be a maximum strong-2k-core of G with V(H)={x_1, x_2,…, x_l} and 3≤ l≤ r+1. For each i, 1≤ i≤ l, let
N_i=N(x_i)∖ V(H), N^1_i=N(N_i) and N^2_i=N(N^1_i)∖ V(H).
Then the following holds.
(i) If l=r+1, then for each 1≤ i≤ l, N^1_i and N^2_i are independent sets, |N^1_i|=n/2r+2, |N^2_i|=n/2r+2 and G[N^1_i, N^2_i] forms a complete bipartite graph.
(ii) If l≤ r, then x_i is a cut vertex for each 1≤ i≤ l.
Proof of Lemma <ref>.
By the maximality of |V(H)| and Fact <ref>, we have that
N_i is an independent set and N_i∩ N_j=∅ for 1≤ i<j≤ l.
Since N_i is an independent set, we have that
N^1_i∩ N_i=∅ for 1≤ i≤ l.
By the maximality of |V(H)| and Fact <ref>, we have that
N^1_i∩ N_j=∅, N^1_i∩ N^1_j=∅, N_i∩ N^2_j=∅, and N^1_i∩ N^2_j=∅ for 1≤ i≠ j≤ l.
Since δ(G)≥n/2r+2, we have
|N^1_i|≥n/2r+2 for 1≤ i≤ l.
By the maximality of |V(H)| and Fact <ref>, for any vertex y ∈ N^1_i and y≠ x_i, we have that N(y)∩ H=∅. Combining with δ(G)≥n/2r+2, we have
|N^2_i|≥n/2r+2 for 1≤ i≤ l.
Case (i). l=r+1.
N^1_i is an independent set for 1≤ i≤ l.
Proof of Claim <ref>. Without loss of generality, assume that N^1_1 is not an independent set, and u_1u_2∈ E(G) for some u_1, u_2∈ N^1_1. Let u_3, u_4∈ N_1 such that u_1u_3, u_2u_4∈ E(G). By Fact <ref>, we have u_3=u_4. So {u_1, u_2, u_3} forms a K_3. By Lemma <ref>, |N(u_i)∩ N(u_i+1)|≤ 15r for i∈ [3]. Hence |N_1^1∪ N_1^2|≥ |N(u_1)∪ N(u_2)∪ N(u_3)| ≥3n/2r+2-45r. Combining with (<ref>), (<ref>), and (<ref>), we have
| V(G)| ≥ ∑_i=2^l(|N_i|+|N^1_i|)+|N_1^1∪ N_1^2|
≥ r(n/r+1-r)+3n/2r+2-45r
> n-n/r+1-r^2+3n/2r+2-45r
> n
since k≥ f(r) and n≥ 108(r+1)^rk.
A contradiction.
□
By Claim <ref>, we have
N^1_i∩ N^2_i=∅ for 1≤ i≤ l.
Combining (<ref>), (<ref>), (<ref>), and (<ref>),
we have
n=|V(G)|≥∑_i=1^l(|N^1_i|+|N^2_i|)≥(r+1)2n/2r+2=n.
Therefore,
|N^1_i|=n/2r+2, |N^2_i|=n/2r+2 and G[N^1_i, N^2_i] forms a complete bipartite graph for 1≤ i≤ l. Since G is C_2k+1-free, N^2_i is an independent set.
Case (ii). l≤ r.
Since l≥ 3, in this case, r≥ 3 and k≥ 3r+4.
We show that x_i is a cut vertex for each 1≤ i≤ l. Otherwise,
without loss of generality, suppose that there exists a path P_uv⊆ V(G)∖ V(H) such that x_1u∈ E(G) and x_2v∈ E(G) for x_1, x_2∈ V(H), and assume that |V(P_uv)| has the minimum cardinality. Let A=H∪ V(P_uv). By the maximality of |V(H)| and Fact <ref>, we have |A|≥ 2k+1.
By the minimality of |V(P_uv)| and |H|≤ r, then d_A(x)≤ r for x∈ A. By the minimality of |V(P_uv)| and the maximality of |V(H)| and Fact <ref>, we have that d_A(x)≤ 3 for x∈ V(G)∖ A. Hence
|A|· (n/2r+2-r)≤ e(A,V(G)∖ A)≤ 3(n-|A|),
then |A|≤(6r+6)n/n-2r(r+1)+3. Recall that |A|≥ 2k+1, k≥ 3r+4 and
n≥ 108r^r-1k, then we get n<0, a contradiction. Therefore x_i is a cut vertex for 1≤ i≤ l.
□
Now we are ready to prove Theorems <ref> and <ref>.
Proof of Theorem <ref>. Let k≥ 4p+1 and n≥ 108(2p+1)^2pk be positive integers.
Let 𝒞 be a family of odd cycles in which C_2p+1 is the shortest odd cycle not in 𝒞 and C_2k+1 is the longest odd cycle in 𝒞. Let BC_2p+1(n) denote the graph obtained by taking 2p+1 vertex-disjoint copies of K_n/2(2p+1),n/2(2l+1) and selecting a vertex in each of them such that these vertices form a cycle of length 2p+1. Let G be an n-vertex 𝒞-free non-bipartite graph with δ(G)≥n/2(2p+1). We are going to show that G=BC_2p+1(n).
Take r=2p in Lemma <ref>. Notation used in the proof follows Lemma <ref>. For example, H is a maximum strong-2k-core of G with V(H)={x_1, x_2,…, x_l} and 3≤ l≤ 2p+1. We claim that l=2p+1. Otherwise l<2p+1; since H is a maximum strong-2k-core of G, there must be an odd cycle in H, and the length of this cycle is at most l<2p+1. Recall that C_2p+1 is the shortest odd cycle not in 𝒞, in other words, 𝒞 contains all odd cycles with length less than 2p+1. Thus, H is not 𝒞-free, a contradiction. So we have l=2p+1.
Applying Lemma <ref> (i), we obtain that for each 1≤ i≤ l, N^1_i and N^2_i are independent sets, |N^1_i|=n/2r+2, |N^2_i|=n/2r+2 and G[N^1_i, N^2_i] forms a complete bipartite graph. Since H is a strong-2k-core of G with 2p+1 vertices and H does not contain any odd cycle shorter than 2p+1, H must be C_2p+1. Therefore G=BC_2p+1(n). This completes the proof of Theorem <ref>. □
Proof of Theorem <ref>. Let r≥ 2, k and n be integers with n≥108(r+1)^rk and k≥ f(r), where f(r)=2r+1 if r=2 and f(r)=3r+4 if r≥ 3. Let G be a C_2k+1-free graph on n vertices with δ(G)≥n/2r+2, we show that G is r-partite, or G=G_r+1.
We apply induction on r. When r=2, applying Theorem <ref> with 𝒞={C_2k+1} and 2p=2, the conclusion follows. We will show that the conclusion holds for r≥ 3. We will apply Lemma <ref>, and notation used in the proof follows Lemma <ref>. For example, H is a maximum strong-2k-core of G with V(H)={x_1, x_2,…, x_l} and 3≤ l≤ r+1.
Case (i). l=r+1.
By Lemma <ref>, for each 1≤ i≤ l, N^1_i and N^2_i are independent sets, |N^1_i|=n/2r+2, |N^2_i|=n/2r+2 and G[N^1_i, N^2_i] forms a complete bipartite graph. If H=K_r+1, then G=G_r+1. If H≠ K_r+1, then χ(G)≤ r.
Case (ii). l≤ r.
By Lemma <ref>, x_i is a cut vertex for 1≤ i≤ l. Let
H_i={u∈ V(G)∖ V(H)| there exists a path P_ux_i∖{x_i}⊆ V(G)∖ V(H)} for 1≤ i≤ l.
Clearly, V(G) is partitioned into V(G)=(∪_i=1^l H_i)∪ H. Therefore it is sufficient to show that G[H_i] is (r-1)-partite. Note that N_i∪ N_i^1⊂ V(H_i)∪{x_i}, by (<ref>), |H_i|≥2n/2r+2-l≥ 108r^r-1k. Note that l≥ 3, then |H_i|≤ n-4n/2r+2+l=(r-1)n/r+1+l. Note that N(x)⊂ H_i∪{x_i} for any x∈ H_i, hence δ (G[H_i])≥δ(G)-1=n/2r+2-1> |V(H_i)|/2r. By induction hypothesis, G[H_i] is (r-1)-partite.
□
Remarks. Theorems <ref> and <ref> show that for any family 𝒞 consisting of some odd cycles, δ_χ(𝒞, 2) is determined by the length of the shortest odd cycle not in 𝒞 and the length of the longest odd cycle in 𝒞. In other words, when we study the equivalent form of the question of determining the maximum minimum degree a 𝒞-free graph G with χ(G)≥ r+1 can have, there are two possible extremal graphs: one is related to the length of the shortest odd cycle not in 𝒞, and the other is related to the length of the longest odd cycle in 𝒞. For the case r=2, we have found both extremal graphs, BC_2p+1(n) (see Fig. 1) and C_2k+3(n 2k+3) (see Fig. 2), in Theorems <ref> and <ref>. For r≥ 3 and a family 𝒞 of odd cycles in which C_2p+1 is the shortest cycle in 𝒞 and C_2k+1 is the longest cycle in 𝒞, is it true that δ_χ(𝒞, r)=max{g(p, r), h(k, r)}, where g(p, r)n is the minimum degree of an extremal graph related to the shortest odd cycle C_2p+1 in 𝒞 and h(k, r)n is the minimum degree of an extremal graph related to the longest odd cycle C_2k+1 in 𝒞? We think that G_r+1 in Theorem <ref> is the extremal graph related to the shortest odd cycle C_2p+1 in 𝒞 if 2p+1>r+1, and the reason why we can determine δ_χ(𝒞, r) for 2p≥ r+1 and k≥ 3r+4 might be that the extremal graph related to the shortest odd cycle `dominates', even though extremal graphs related to the longest odd cycle are mysterious to us. We think that the remaining challenge for the other cases might be to determine possible extremal graphs related to the longest odd cycle C_2k+1 in 𝒞. For the case r=2, it is natural to take C_2k+3(n 2k+3). It seems to be challenging but very interesting to construct a good candidate when r≥ 3.
|
http://arxiv.org/abs/2409.02623v1 | 20240904113251 | Chebyshev polynomials related to Jacobi weights | [
"Jacob S. Christiansen",
"Olof Rubin"
] | math.CA | [
"math.CA",
"math.CV",
"41A50, 30C10, 33C45"
] |
AdvSecureNet: A Python Toolkit for Adversarial Machine Learning
Melih Catal [email protected]
Software Evolution and Architecture Lab
University of Zurich, Switzerland
Manuel Günther [email protected]
Artificial Intelligence and Machine Learning Group
University of Zurich, Switzerland
September 9, 2024
=================================================================================================================================================================================================================================================================================
§ ABSTRACT
We investigate Chebyshev polynomials corresponding to Jacobi weights and determine monotonicity properties of their related Widom factors. This complements work by Bernstein from 1930-31 where the asymptotical behavior of the related Chebyshev norms was established. As a part of the proof, we analyze a Bernstein-type inequality for Jacobi polynomials due to Chow et al. Our findings shed new light on the asymptotical uniform bounds of Jacobi polynomials. We also show a relation between weighted Chebyshev polynomials on the unit circle and Jacobi weighted Chebyshev polynomials on [-1,1]. This generalizes work by Lachance et al. In order to complete the picture we provide numerical experiments on the remaining cases that our proof does not cover.
Keywords Chebyshev polynomial, Widom factor, Jacobi weight, Jacobi polynomial, Bernstein-type inequality
Mathematics Subject Classification 41A50. 30C10. 33C45.
§ INTRODUCTION
In an extensive two part analysis of extremal polynomials on the unit interval, found in <cit.>, S. N. Bernstein investigates the asymptotic behavior of the quantity
inf_a_1,…,a_nsup_x∈ [-1,1]w(x)|∏_k=1^n(x-a_k)|
as n→∞ for a variety of different conditions on the weight function w:[-1,1]→ [0,∞). If w is bounded and strictly positive on a set consisting of at least n points on [-1,1], there exists a unique set of nodes {a_1^∗,…,a_n^∗}, all situated in [-1,1], such that
sup_x∈ [-1,1]w(x)|∏_k=1^n(x-a_k^∗)| =inf_a_1,…,a_nsup_x∈ [-1,1]w(x)|∏_k=1^n(x-a_k)|,
see for instance <cit.>. In other words, the infimum of (<ref>) can be replaced by a minimum.
The polynomial
T_n^w(x) := ∏_k=1^n(x-a_k^∗)
is the weighted Chebyshev polynomial of degree n corresponding to the weight function w and it is the unique monic polynomial of degree n minimizing (<ref>). Under the assumption that the weight function is continuous, the minimizer of (<ref>) can be characterized by the so-called alternation property. That is to say, a monic polynomial P of degree n is equal to T_n^w (and thus minimizes (<ref>)) if and only if there exists n+1 points -1≤ x_0<…<x_n≤ 1 such that
P(x_j)w(x_j) = (-1)^{n-j}max_x∈ [-1,1]w(x)|P(x)|.
This result – which is central to the study of best approximations on the real line – can be found in many works on approximation theory, see for instance <cit.>.
The study of (<ref>) dates back to the work of P. L. Chebyshev
<cit.> who considered weight functions of the form w(x) = 1/P(x) where P is a polynomial which is strictly positive on the interval [-1,1]. Later A. A. Markov <cit.> extended these results to allow for weight functions of the form w(x) = 1/√(P(x)), where again P is a polynomial which is assumed to be strictly positive on [-1,1]. It should be stressed that both Chebyshev and Markov provided explicit formulas for the minimizer T_n^w in these cases in terms of the polynomial P. These formulas can be conveniently found in Achiezer's monograph <cit.>.
The considerations in <cit.> were different from those of Chebyshev and Markov as Bernstein considered the asymptotic behavior of (<ref>) as n→∞ for a broad family of weights, not restricting himself to continuous ones. In particular, writing f(n)∼ g(n) as n→∞ if f(n)/g(n)→ 1 as n→∞, he obtained the following result.
Let b_k∈ [-1,1] and s_k∈ for k=1,…,m, and let w_0:[-1,1]→ (0,∞) be a Riemann integrable function such that there exists a value M>1 for which 1/M<w_0(x)<M holds for every x∈ [-1,1]. Consider the weight function given by
w(x) = w_0(x)∏_k=1^m|x-b_k|^s_k.
Under the above conditions, we have that
min_a_1,…,a_nsup_x∈ [-1,1]w(x)|∏_k=1^n(x-a_k)|∼ 2^1-nexp{1/π∫_-1^1log w(x)/√(1-x^2)dx}
as n→∞.
This result is shown in two steps. Bernstein initially considers weight functions w_0 as in Theorem <ref> and finds that if c_1^∗,…,c_n^∗ are the unique nodes satisfying
∫_-1^1|∏_k=1^n(x-c_k^∗)|^2w_0(x)^2/√(1-x^2)dx = min_c_1,…,c_n∫_-1^1|∏_k=1^n(x-c_k)|^2w_0(x)^2/√(1-x^2)dx,
then the weighted expression
w_0(x)∏_k=1^n(x-c_k^∗)
is asymptotically alternating on [-1,1], see <cit.>. As it turns out, this information suffices to apply a result due to de la Vallée-Poussin found in <cit.> and ensure that
max_x∈ [-1,1]|w_0(x)∏_k=1^n(x-c_k^∗)|∼min_a_1,…,a_nmax_x∈ [-1,1]w_0(x)|∏_k=1^n(x-a_k)|
as n→∞. Note that
∏_k=1^n(x-c_k^∗)
is nothing but the monic orthogonal polynomial of degree n corresponding to the weight function w_0(x)^2/√(1-x^2) on [-1,1]. By showing that
max_x∈ [-1,1]|w_0(x)∏_k=1^n(x-c_k^∗)|∼ 2^1-nexp{1/π∫_-1^1log w_0(x)/√(1-x^2)dx}
as n→∞, Bernstein obtains (<ref>) from (<ref>). Consequently, Theorem <ref> is verified in this case. Proceeding from this, Bernstein extends the analysis by allowing for vanishing factors of the form |x-b_k|^s_k to be introduced to the weight where b_k∈ [-1,1] and s_k∈ℝ, see <cit.>. To show this, he uses a clever technique of bounding the factors |x-b_k|^s_k from above and below. However, the connection to the maximal deviation of the corresponding orthogonal polynomials as in (<ref>) gets lost.
To illustrate why this connection becomes more delicate when zeros are added to the weight function, he provides the explicit example of the monic Jacobi polynomials. Following <cit.>, we let P_n^(α,β) denote the classical Jacobi polynomials with parameters α,β >-1. These are uniquely defined by the property that P_n^(α,β) is a polynomial of exact degree n and
∫_-1^1(1-x)^α(1+x)^β P_m^(α,β)(x)P_n^(α,β)(x)dx = 2^{α+β+1}/(2n+α+β+1) · Γ(n+α+1)Γ(n+β+1)/(Γ(n+α+β+1) n!) δ_{nm},
where δ_nm is the Kronecker delta.
Since all zeros of the Jacobi polynomials reside in [-1,1] and since
P_n^(α,β)(x) = 2^{-n}\binom{2n+α+β}{n}x^n+⋯,
we find that
2^nP_n^(α,β)(x)/\binom{2n+α+β}{n} = ∏_k=1^n(x-cosψ_k^∗)
for some values ψ_k^∗∈ [0,π]. Bernstein's analysis provides the following result.
Let ρ_α = α/2+1/4 and ρ_β = β/2+1/4, and let ψ_k^∗ be associated with P_n^(α,β) as in (<ref>).
* If
0≤ρ_α≤ 1/2 and 0≤ρ_β≤ 1/2
both hold, then
max_x∈ [-1,1](1-x)^ρ_α(1+x)^ρ_β|∏_k=1^n(x-cosψ_k^∗)|∼ 2^1-n-ρ_α-ρ_β
as n→∞.
* If one of the conditions in (<ref>) fails to hold, then so does (<ref>).
Note that by replacing w_0 with (1-x)^ρ_α(1+x)^ρ_β in (<ref>), the corresponding weight function, to which the orthogonal polynomials are associated, is nothing but
[(1-x)^ρ_α(1+x)^ρ_β]^2/√(1-x^2) = (1-x)^α(1+x)^β.
It is a simple matter to verify that
2^1-nexp{1/π∫_-1^1log [(1-x)^ρ_α(1+x)^ρ_β]/√(1-x^2)dx} = 2^1-n-ρ_α-ρ_β,
see for instance <cit.> where also a translation of the proof of Theorem <ref> appears. As such, we see that the monic Jacobi polynomials defined in (<ref>) are close to minimal with respect to
min_a_1,…,a_nmax_x∈ [-1,1](1-x)^ρ_α(1+x)^ρ_β|∏_k=1^n(x-a_k)|
precisely when (<ref>) holds.
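The logarithmic-integral identity underlying (<ref>) is also easy to confirm numerically. The following Python sketch (an illustration we add here, not part of Bernstein's argument; the parameter values are an arbitrary sample) evaluates the integral and recovers the factor 2^{-ρ_α-ρ_β}.

```python
import numpy as np
from scipy.integrate import quad

rho_a, rho_b = 0.35, 0.15        # sample parameters; any non-negative values work

# substitute x = cos(theta), so that dx/sqrt(1-x^2) becomes dtheta
def integrand(theta):
    x = np.cos(theta)
    return rho_a * np.log(1 - x) + rho_b * np.log(1 + x)

val, _ = quad(integrand, 0.0, np.pi, limit=200)
print(np.exp(val / np.pi))       # ≈ 0.7071, i.e. 2^{-(rho_a+rho_b)}
print(2.0 ** (-(rho_a + rho_b)))
```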
As a partial result, we wish to carry out a detailed analysis of the quantity
max_x∈ [-1,1](1-x)^ρ_α(1+x)^ρ_β|∏_k=1^n(x-cosψ_k^∗)|
for the parameters satisfying (<ref>). By carefully manipulating a Bernstein-type inequality due to Chow et al. <cit.>, we will establish the following result.
Let ρ_α = α/2+1/4 and ρ_β = β/2+1/4, and let ψ_k^∗ be as in (<ref>). If (<ref>) holds, then
max_x∈ [-1,1](1-x)^ρ_α(1+x)^ρ_β|∏_k=1^n(x-cosψ_k^∗)|≤ 2^1-n-ρ_α-ρ_β
for every n with equality if and only if ρ_α,ρ_β∈{0,1/2}.
By combining Theorem <ref> with Theorem <ref>, we see that
lim_n→∞2^nmax_x∈ [-1,1](1-x)^ρ_α(1+x)^ρ_β|∏_k=1^n(x-cosψ_k^∗)|
= sup_n2^nmax_x∈ [-1,1](1-x)^ρ_α(1+x)^ρ_β|∏_k=1^n(x-cosψ_k^∗)|= 2^1-ρ_α-ρ_β
which shows that the convergence is from below.
The question of obtaining upper bounds for the quantity
max_x∈ [-1,1](1-x)^a(1+x)^b|P_n^(α,β)(x)|
for different values of a,b,α,β is well studied, see for instance <cit.> and has applications to e.g. representation theory and the study of Schrödinger operators. Our main interest is the applications that such a result has to the corresponding Chebyshev problem (<ref>). To state our main result, we introduce the Widom factor _n(ρ_α,ρ_β) defined by
_n(ρ_α,ρ_β):= 2^nmin_a_1,…,a_nmax_x∈ [-1,1](1-x)^ρ_α(1+x)^ρ_β|∏_k=1^n(x-a_k)|.
The choice of naming in honour of H. Widom – who in <cit.> considered Chebyshev polynomials corresponding to a plethora of compact sets in the complex plane – stems from <cit.>. The considerations of Widom factors in the literature are plentiful, see for instance <cit.>. In this article we wish to carry out a detailed study of the behavior of _n(ρ_α,ρ_β) for different parameter values.
For any value of the parameters ρ_α,ρ_β≥ 0, we have that
_n(ρ_α,ρ_β)∼ 2^1-ρ_α-ρ_β
as n→∞. Furthermore:
* If ρ_α,ρ_β∈{0,1/2}, then the quantity _n(ρ_α,ρ_β) is constant.
* If ρ_α,ρ_β∈ [0,1/2], then
sup_n_n(ρ_α,ρ_β)= 2^1-ρ_α-ρ_β.
* If ρ_α,ρ_β∈{0}∪ [1/2,∞), then
inf_n_n(ρ_α,ρ_β) = 2^1-ρ_α-ρ_β,
sup_n _n(ρ_α,ρ_β)≤(2ρ_α/ρ_α+ρ_β)^ρ_α(2ρ_β/ρ_α+ρ_β)^ρ_β,
and _n(ρ_α,ρ_β) decreases monotonically as n increases.
The case of ρ_α,ρ_β∈{0,1/2} is classical, and if ρ_β∈{0,1/2} and ρ_α≥ 1/2 (or vice versa), the result was settled in <cit.>. We will in fact apply a similar technique to what is used there for the general setting of ρ_α,ρ_β≥1/2. This method originates with <cit.>. As such, the proofs of the different cases of Theorem <ref> use essentially different techniques and will be split into three separate sections.
We should also note that while negative parameters ρ_α and ρ_β are allowed in Theorem <ref>, they simply force a zero of the minimizer to be placed so as to cancel the effect of the pole, and hence the setting reduces to that of non-negative parameters once the degree of the associated Chebyshev polynomial is sufficiently large.
Asymptotic results for the Chebyshev polynomials corresponding to (<ref>) exist for smooth enough weight functions w which are strictly positive on [-1,1]. See for instance <cit.>. If the weight function w vanishes somewhere on [-1,1], there do not seem to be many general results for the corresponding Chebyshev polynomials apart from Theorem <ref> found in <cit.>.
The extension of Chebyshev polynomials to the complex plane was first carried out by Faber in <cit.>. Apart from the proof of Theorem <ref>, our considerations exclusively concern Chebyshev polynomials corresponding to Jacobi weights and we therefore refrain from a detailed presentation of the general complex case. We note, however, that if K⊂ℂ is a compact set and w:K→ [0,∞) is a weight function which is non-zero on a set of at least n points, then there exists a unique monic polynomial T_n^w of degree n minimizing
max_z∈ K w(z)|∏_k=1^n(z-a_k)|.
We invite the reader to consult <cit.> for a detailed presentation of complex Chebyshev polynomials.
Our motivation in providing a detailed study of _n(ρ_α,ρ_β) originates from an inquiry of complex Chebyshev polynomials corresponding to star graphs of the form {z:z^m∈ [-2,2]}. It can be shown that these complex Chebyshev polynomials can be directly related to Chebyshev polynomials on [-1,1] with Jacobi weights, see <cit.> for details. In the study of the Widom factors for star graphs, certain monotonicity properties appear which can be explained by Theorem <ref>. We trust that Theorem <ref> will also be useful in many other connections.
§ THE CASE OF Ρ_Α,Ρ_Β∈{0,1/2}
We proceed by delving into the proof of Theorem <ref>, split into three cases. The first case is very simple. When ρ_α,ρ_β∈{0,1/2}, the minimizers of (<ref>) are simply the Chebyshev polynomials of the 1^st to 4^th kind, see e.g. <cit.>. Using basic trigonometry, it follows that if θ∈ [0,π] then
cosθ/2 = √(1+cosθ/2),
sinθ/2 = √(1-cosθ/2).
Through the change of variables x = cosθ, the trigonometric functions
T_n(x) = 2^1-ncos nθ, U_n(x) = 2^-nsin(n+1)θ/sinθ,
V_n(x) = 2^-ncos(n+1/2)θ/cosθ/2, W_n(x) = 2^-nsin(n+1/2)θ/sinθ/2,
all define monic polynomials in x of degree n. These are precisely, in order, the monic Chebyshev polynomials of the 1^st, 2^nd, 3^rd and 4^th kind. It is a straightforward matter to verify that they all satisfy the alternating property from (<ref>) upon multiplication with the weight functions
1, sinθ=√(1-x^2),
√(2)cosθ/2=√(1+x), √(2)sinθ/2=√(1-x),
respectively. As a result, T_n, U_n, V_n, and W_n
are the minimal configurations sought after in (<ref>) for the weight parameters ρ_α,ρ_β∈{0,1/2}. It is evident that _n(ρ_α,ρ_β) = 2^1-ρ_α-ρ_β in all these cases and this shows the first part of Theorem <ref>.
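These four classical cases are also convenient for a numerical check. The short Python sketch below (our own illustration; the degree n=7 and the grid are arbitrary choices) evaluates the weighted trigonometric expressions above and confirms that 2^n times the maximal deviation equals 2^{1-ρ_α-ρ_β} in each case.

```python
import numpy as np

n = 7                                           # arbitrary test degree
theta = np.linspace(1e-6, np.pi - 1e-6, 200_001)
x = np.cos(theta)

# monic polynomial (as a function of theta), weight, and (rho_alpha, rho_beta)
cases = {
    "T_n": (2.0**(1 - n) * np.cos(n * theta), np.ones_like(x), (0.0, 0.0)),
    "U_n": (2.0**(-n) * np.sin((n + 1) * theta) / np.sin(theta), np.sqrt(1 - x**2), (0.5, 0.5)),
    "V_n": (2.0**(-n) * np.cos((n + 0.5) * theta) / np.cos(theta / 2), np.sqrt(1 + x), (0.0, 0.5)),
    "W_n": (2.0**(-n) * np.sin((n + 0.5) * theta) / np.sin(theta / 2), np.sqrt(1 - x), (0.5, 0.0)),
}
for name, (poly, weight, (ra, rb)) in cases.items():
    print(name, 2.0**n * np.max(weight * np.abs(poly)), 2.0**(1 - ra - rb))
    # the two printed numbers agree (up to the grid resolution) in every case
```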
§ THE CASE OF 0≤Ρ_Α,Ρ_Β≤ 1/2
Having settled the first case of Theorem <ref>, we turn toward the verification of the second case. For parameter values ρ_α,ρ_β∈ [0,1/2], we want to show that
sup_n_n(ρ_α,ρ_β) = 2^1-ρ_α-ρ_β.
These parameter values correspond to the lower left square in Figure <ref>. Observe that if ρ_α,ρ_β∈{0,1/2} then this follows from the first case of Theorem <ref> since then the sequence {_n(ρ_α,ρ_β)} is constant. Our approach will consist of first verifying Theorem <ref>. For this reason we remind the reader of our definition of ψ_k^∗∈ [0,π] through the formula
∏_k=1^n(x-cosψ_k^∗)=2^nP_n^(α,β)(x)/\binom{2n+α+β}{n},
where P_n^(α,β) denotes the Jacobi polynomial with parameters α,β and the angles ψ_k^∗ are those for which cosψ_1^∗,…,cosψ_n^∗ enumerate its zeros. We also recall the relation between the L^2 and L^∞ parameters
ρ_α=α/2+1/4,
ρ_β =β/2+1/4.
If we can show the validity of (<ref>), then we trivially obtain
_n(ρ_α,ρ_β)≤ 2^nmax_x∈ [-1,1](1-x)^ρ_α(1+x)^ρ_β|∏_k=1^n(x-cosψ_k^∗)|≤ 2^1-ρ_α-ρ_β
from which (<ref>), the second case of Theorem <ref>, follows using the fact that _n(ρ_α,ρ_β)∼ 2^1-ρ_α-ρ_β as n→∞. The key to verifying (<ref>) is the following result.
If -1/2≤α,β≤ 1/2 and θ∈[0,π], then
(sinθ/2)^{α+1/2}(cosθ/2)^{β+1/2}|P_n^(α,β)(cosθ)|≤ (Γ(q+1)/Γ(1/2))\binom{n+q}{n}(n+(α+β+1)/2)^{-q-1/2}
where q = max(α,β).
This result provides a Bernstein-type inequality for Jacobi polynomials and sharpens a previous result of Baratella <cit.> who had shown (<ref>) with the larger factor 2.821 in place of Γ(q+1)/Γ(1/2). In the case where |α| = |β| = 1/2, the inequality in (<ref>) is saturated. For arbitrary -1/2≤α,β≤ 1/2 the constant Γ(q+1)/Γ(1/2) is asymptotically sharp as can be seen from an argument using Stirling's formula which we will provide in the proof of Lemma <ref>. The type of inequality exemplified in (<ref>) originates with Bernstein who showed in <cit.> that the Legendre polynomials P_n := P_n^(0,0) satisfy
(sinθ)^1/2|P_n(cosθ)|≤(2/π)^1/2n^-1/2, θ∈ [0,π].
Antonov and Holšhevnikov <cit.> sharpened (<ref>) and later Lorch <cit.> extended such an inequality to Gegenbauer polynomials. In terms of Jacobi polynomials, this provides a Bernstein-type inequality for P_n^(λ,λ) with -1/2≤λ≤ 1/2. Theorem <ref> contains all these inequalities as special cases. In a different direction, the Erdélyi–Magnus–Nevai Conjecture <cit.> proposes that if P̂_n^(α,β) denotes the orthonormal Jacobi polynomial then for any parameter values α,β≥-1/2,
max_x∈ [-1,1](1-x)^α+1/2(1+x)^β+1/2(P̂_n^(α,β)(x))^2 = O(max[1,(α^2+β^2)^1/4]).
Theorem <ref> can be used to verify this conjecture in the case where |α|,|β|≤ 1/2. This is shown in <cit.>, where the sharpness of Theorem <ref> is also studied numerically for different parameter values.
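Theorem <ref> is also straightforward to test numerically. The sketch below (our own illustration; the parameters α=0.4, β=-0.3, the degrees and the grid are arbitrary choices) evaluates both sides of (<ref>) with SciPy's Jacobi polynomial evaluator, with q=max(α,β) and the factor n+(α+β+1)/2 as in the statement of Theorem <ref>.

```python
import numpy as np
from scipy.special import eval_jacobi, binom, gamma

def both_sides(n, a, b, grid=100_001):
    theta = np.linspace(1e-9, np.pi - 1e-9, grid)
    lhs = (np.sin(theta / 2)**(a + 0.5) * np.cos(theta / 2)**(b + 0.5)
           * np.abs(eval_jacobi(n, a, b, np.cos(theta))))
    q = max(a, b)
    rhs = gamma(q + 1) / gamma(0.5) * binom(n + q, n) * (n + (a + b + 1) / 2)**(-(q + 0.5))
    return lhs.max(), rhs

for n in (1, 5, 25):
    print(both_sides(n, 0.4, -0.3))   # the first entry never exceeds the second
```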
We consider transferring the trigonometric weights to algebraic weights as in the statement of Theorem <ref>. Letting x=cosθ, we find that
(sinθ/2)^α+1/2 =(1-x/2)^α/2+1/4 = 2^-ρ_α(1-x)^ρ_α,
(cosθ/2)^β+1/2 =(1+x/2)^β/2+1/4=2^-ρ_β(1+x)^ρ_β,
with the parameter relation stated in (<ref>).
Together with (<ref>), (<ref>) and (<ref>), the change of variables transforms (<ref>) into the statement that for x∈ [-1,1],
(1-x)^{ρ_α}(1+x)^{ρ_β}|∏_k=1^n(x-cosψ_k^∗)|≤ Γ(q+1)2^{n+(α+β+1)/2}\binom{n+q}{n}/(√(π)(n+(α+β+1)/2)^{q+1/2}\binom{2n+α+β}{n}),
where q = max(α,β). From (<ref>) we see that this implies that
_n(ρ_α,ρ_β)≤ 2^n·Γ(q+1)2^{n+(α+β+1)/2}\binom{n+q}{n}/(√(π)(n+(α+β+1)/2)^{q+1/2}\binom{2n+α+β}{n})
for ρ_α,ρ_β∈[0,1/2]. In order to determine the upper bound of (<ref>) and (<ref>), we are ultimately led to consider the quantity
M_n(α,β):=Γ(q+1)2^{2n+(α+β+1)/2}\binom{n+q}{n}/(√(π)(n+(α+β+1)/2)^{q+1/2}\binom{2n+α+β}{n}).
Suppose that α,β∈ [-1/2,1/2]. Then M_n(α,β), defined in (<ref>), increases monotonically to 2^1-ρ_α-ρ_β as n→∞.
The proof we provide constitutes a lengthy computation. We have therefore chosen to illustrate the simpler case where α = β = q in the proof below while postponing most of the remaining computations to the appendix. The asymptotic behavior M_n(α,β)∼ 2^1-ρ_α-ρ_β as n→∞ merits attention since this shows that the constant Γ(q+1)/Γ(1/2) is sharp in (<ref>). To see this, note that
_n(ρ_α,ρ_β)≤ 2^nmax_x∈ [-1,1](1-x)^ρ_α(1+x)^ρ_β|∏_k=1^n(x-cosψ_k^∗)|≤ M_n(α,β).
The left-hand side converges to 2^1-ρ_α-ρ_β as n→∞ and consequently
lim inf_n→∞M_n(α,β)≥ 2^1-ρ_α-ρ_β.
On the other hand, as Lemma <ref> shows, M_n(α,β)∼ 2^1-ρ_α-ρ_β as n→∞ and therefore the constant factor in (<ref>) cannot be improved.
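Before turning to the proof, here is a small numerical illustration of Lemma <ref> (a sketch we add; the parameter values are arbitrary). It evaluates M_n(α,β) in the Gamma-function form obtained in the proof below and displays the monotone increase towards 2^{1-ρ_α-ρ_β}.

```python
import numpy as np
from scipy.special import gammaln

def M(n, a, b):
    q, big_n = max(a, b), n + (a + b + 1) / 2
    log_m = ((2 * n + (a + b + 1) / 2) * np.log(2)
             + gammaln(n + q + 1) + gammaln(n + a + b + 1)
             - 0.5 * np.log(np.pi) - (q + 0.5) * np.log(big_n)
             - gammaln(2 * n + a + b + 1))
    return np.exp(log_m)

a, b = 0.3, -0.2                                   # any -1/2 <= beta <= alpha <= 1/2
limit = 2.0 ** (1 - (a / 2 + 0.25) - (b / 2 + 0.25))
print([round(M(n, a, b), 6) for n in (1, 2, 5, 20, 200)], limit)
# the values increase with n and approach the limit 2^{1-rho_alpha-rho_beta} from below
```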
We rewrite the binomial coefficients using Gamma functions so that
\binom{n+q}{n}=Γ(n+q+1)/(Γ(n+1)Γ(q+1)),
\binom{2n+α+β}{n}=Γ(2n+α+β+1)/(Γ(n+1)Γ(n+α+β+1)).
The Legendre duplication formula implies that
Γ(2n+α+β+1)=(2^{2n+α+β}/√(π))Γ(n+(α+β+1)/2)Γ(n+(α+β)/2+1).
The combination of (<ref>), (<ref>), (<ref>) and (<ref>) yields
M_n(α,β) =2^{2n+(α+β+1)/2}Γ(n+q+1)Γ(n+α+β+1)/(√(π)(n+(α+β+1)/2)^{q+1/2}Γ(2n+α+β+1))
=2^{(1-α-β)/2}Γ(n+q+1)Γ(n+α+β+1)/((n+(α+β+1)/2)^{q+1/2}Γ(n+(α+β+1)/2)Γ(n+(α+β)/2+1)).
The Gamma function satisfies Γ(n+a)/Γ(n+b) ∼ n^{a-b} as n→∞, and we may conclude that
M_n(α,β)∼2^{(1-α-β)/2}·n^{q+1}n^{α+β+1}/(n^{q+1/2}n^{(α+β+1)/2}n^{(α+β)/2+1})=2^{(1-α-β)/2}=2^{1-ρ_α-ρ_β}
as n→∞.
We are left to determine the monotonicity of M_n(α,β). For this reason we consider the quotient
M_n+1(α,β)/M_n(α,β) =(n+q+1)(n+α+β+1)(n+α+β+1/2)^q+1/2/(n+α+β+1/2+1)^q+1/2(n+α+β+1/2)(n+α+β/2+1)
=(n+q+1)(n+α+β+1)(n+α+β+1/2)^q-1/2/(n+α+β+1/2+1)^q+1/2(n+α+β/2+1).
Whether this quotient is bigger than or smaller than 1 determines the monotonicity of the sequence {M_n(α,β)}. To simplify, we begin by considering the case α=β = q. Writing M_n(q):=M_n(q,q), the quotient (<ref>) becomes
M_n+1(q)/M_n(q) = (n+1+q)(n+2q+1)/(n+q+1/2)(n+q+1)(n+q+1/2/n+q+3/2)^q+1/2
= (n+2q+1)(n+q+1/2)^q-1/2/(n+q+3/2)^q+1/2.
We introduce the function
f(x) = (x+2q+1)(x+q+1/2)^q-1/2/(x+q+3/2)^q+1/2,
and claim that f'(x)<0 for x>0. From the fact that f(x)→ 1 as x→∞, this will imply that f(x)≥ 1 for every x>0. As a consequence we see that M_n+1(q)≥ f(n)M_n(q)≥ M_n(q).
Taking the logarithmic derivative of f generates
f'(x)/f(x) = q^2-1/4/(x+2q+1)(x+q+1/2)(x+q+3/2).
If -1/2<q<1/2, then it is clear that
f'(x)/f(x)<0
and as a consequence f'(x)<0. We conclude that M_n+1(q)≥ M_n(q) and this settles the case where α= β = q.
If α≠β, then the problem is more delicate. We again need to show that the quotient (<ref>) is strictly greater than 1. In order to show this, we reintroduce the function f by defining it as
f(x)=(x+q+1)(x+α+β+1)/(x+α+β/2+1)(x+α+β+1/2)^q-1/2/(x+α+β+1/2+1)^q+1/2.
In a completely analogous manner, we only need to show that f'(x)<0 for every x∈ (0,∞) in order to conclude that f(x)≥ 1 and consequently M_n+1(α,β)≥ f(n)M_n(α,β)≥ M_n(α,β). Since M_n is symmetric with respect to its variables and q = max(α,β), it is enough to consider the parameter values -1/2≤β≤α≤ 1/2.
The logarithmic derivative of f is given by
f'(x)/f(x)=1/x+q+1+1/x+α+β+1+q-1/2/x+α+β+1/2-1/x+α+β/2+1-q+1/2/x+α+β+1/2+1.
Written over common denominator and utilizing that α = q, this expression becomes
f'(x)/f(x)=c_2(α,β)x^2+c_1(α,β)x+c_0(α,β)/(x+α+1)(x+α+β+1)(x+α+β/2)(x+α+β/2+1)(x+α+β/2+3/2),
where
c_2(α,β) =α^2/2+β^2/2-1/4,
c_1(α,β) =3α^3/4+(4β+8)α^2/8+(β^2-1)α/4+β^3/2+β^2-β/4-1/2,
c_0(α,β) =α^4/4+(3β+6)α^3/8+(β+2)^2α^2/8+(β^2-1)(β+2)α/8+β^4/8+β^3/2+3β^2/8-β/4-1/4.
Now it is immediately clear that c_2(α,β)<0 unless |α|=|β|=1/2. We claim that the same is true for the remaining coefficients c_1(α,β) and c_0(α,β). Namely, c_k(α,β)<0 for k=0,1,2 unless |α|=|β| = 1/2. A consequence of these inequalities is that f'(x)<0 for x>0 which is precisely the sought after inequality. The proof of this is provided in Lemma <ref> in the appendix. As a result, we obtain that f(n)≥ 1 and therefore
M_n+1(α,β)=f(n)M_n(α,β)≥ M_n(α,β).
In conclusion, M_n(α,β) converges monotonically to 2^1-ρ_α-ρ_β from below.
An immediate consequence of Lemma <ref> together with (<ref>) is that
_n(ρ_α,ρ_β)≤ 2^nmax_x∈ [-1,1](1-x)^ρ_α(1+x)^ρ_β|∏_k=1^n(x-cosψ_k^∗)|≤ M_n(α,β)≤ 2^1-ρ_α-ρ_β.
This simultaneously shows that Theorem <ref> holds and that
_n(ρ_α,ρ_β)≤ 2^1-ρ_α-ρ_β
is valid for every n. On the other hand, (<ref>) implies that these quantities are all asymptotically equal as n→∞. This is possible only if
sup_n_n(ρ_α,ρ_β)= 2^1-ρ_α-ρ_β
and hence the second case of Theorem <ref> is shown. From the fact that _n(ρ_α,ρ_β)≤ M_n(α,β), we gather the following corollary to Lemma <ref>.
If 0≤ρ_α, ρ_β≤ 1/2 and r=2max(ρ_α, ρ_β), then
_n(ρ_α,ρ_β)≤Γ(r+1/2)2^{2n+ρ_α+ρ_β}\binom{n+r-1/2}{n}/(√(π)(n+ρ_α+ρ_β)^r\binom{2n+2ρ_α+2ρ_β-1}{n})≤ 2^{1-ρ_α-ρ_β}
and the rightmost inequality is strict unless ρ_α, ρ_β∈{0,1/2}.
§ THE CASE OF Ρ_Α,Ρ_Β≥ 1/2
We are left to prove the third and final case of Theorem <ref>. Our approach to proving this revolves around relating Jacobi weighted Chebyshev polynomials to weighted Chebyshev polynomials on the unit circle. These ideas have previously been investigated in <cit.> for certain choices of the parameters. We will provide a general argument which works for any choice of parameters ρ_α,ρ_β≥ 1/2.
The approach first studied in <cit.> hinges upon the Erdős–Lax inequality, see <cit.>. If P is a polynomial of degree n which does not vanish anywhere in the open unit disk, then
max_|z| = 1|P'(z)| ≤n/2max_|z| = 1|P(z)|.
This result relates the maximum modulus of a polynomial with its derivative. Furthermore, equality occurs in (<ref>) if and only if all zeros of P lie on the unit circle. A recent extension of (<ref>) to the case of generalized polynomials having all zeros situated on the unit circle is the following.
Let s_k≥ 1 and θ_k∈ [0,2π) for k = 1,…,n. Then
max_|z| = 1|d/dz{∏_k=1^n(z-e^iθ_k)^s_k}| = ∑_k=1^ns_k/2max_|z| = 1|∏_k=1^n(z-e^iθ_k)^s_k|.
Given a polynomial with zeros in the unit disk, there is a procedure of constructing a related polynomial whose zeros all lie on the unit circle.
Let |a_k|≤ 1 for k = 1,…,n and suppose that m∈ℤ_+ and θ∈ℝ. Then
z^m∏_k=1^n(z-a_k)+e^iθ∏_k=1^n(1-a_k z)
does not vanish away from |z| = 1.
Utilizing these two results, we obtain the following theorem which “opens up” the interval to the circle.
Given ρ_α,ρ_β≥ 1/2, let θ_1^∗,…,θ_n^∗∈ [0,2π) be the unique minimizing data of the expression
I_n:=min_θ_1,…,θ_nmax_x∈ [-1,1]|(1-x)^ρ_α(1+x)^ρ_β∏_k=1^n(x-cosθ_k)|.
In that case,
1/2ρ_α+2ρ_β+2nd/dz{(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^n(z-e^iθ_k^∗)(z-e^-iθ_k^∗)}
is the unique minimizer of
C_n:=min_a_1,…,a_2n+1max_|z| = 1|(z-1)^2ρ_α-1(z+1)^2ρ_β-1∏_k=1^2n+1(z-a_k)|.
Furthermore, we have that
C_n = 2^n+ρ_α+ρ_β-1 I_n.
We will prove this result in the opposite direction to how it is stated, by first analyzing the minimizer of (<ref>). Assume that the points a_k^∗ are uniquely chosen to satisfy
max_|z| = 1|(z-1)^2ρ_α-1(z+1)^2ρ_β-1∏_k=1^2n+1(z-a_k^∗)| = C_n.
A result of Fejér <cit.> says that the zeros of a complex Chebyshev polynomial corresponding to a compact set are always situated in the convex hull of that set. In our setting, a similar reasoning ensures that |a_k^∗|≤ 1 holds for k = 1,…,2n+1. It is actually possible to show that this inequality is strict, see <cit.> for details. We form the combination
P(z) = z∏_k=1^2n+1(z-a_k^∗)-∏_k=1^2n+1(1-a_k^∗ z).
It is immediate that P is a monic polynomial of degree 2n+2 and by Lemma <ref> we conclude that P has all its zeros on the unit circle. Since the minimizer of (<ref>) is unique, all a_k^∗ must come in conjugate pairs with one possible exception. Due to symmetry of the weight function, this exceptional zero must always be real.
Therefore,
P(z) = z∏_k=1^2n+1(z-a_k^∗)-∏_k=1^2n+1(1-a_k^∗ z)
and as a consequence,
P(1) = ∏_k=1^2n+1(1-a_k^∗)-∏_k=1^2n+1(1-a_k^∗) = 0,
P(-1) = -∏_k=1^2n+1(-1-a_k^∗)-∏_k=1^2n+1(1+a_k^∗) = 0.
We conclude that
P(z) = (z-1)(z+1)∏_k=1^2n(z-e^iθ_k^∗)
for some values θ_k^∗∈ [0,2π).
Now let φ_k∈ [0,2π) be chosen such that
max_|z| = 1|(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iφ_k)| = min_θ_1,…,θ_2nmax_|z| = 1|(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iθ_k)|.
From the fact that
∏_k=1^2n+1|1-a_k^∗ z| = ∏_k=1^2n+1|z-a_k^∗|
for |z| = 1 together with the representation in (<ref>), we conclude from the triangle inequality that
max_|z| = 1|(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iφ_k)| ≤max_|z| = 1|(z-1)^2ρ_α-1(z+1)^2ρ_β-1P(z)|
≤ 2 max_|z| = 1|(z-1)^2ρ_α-1(z+1)^2ρ_β-1∏_k=1^2n+1(z-a_k^∗)| = 2 C_n.
On the other hand, the expression
1/2ρ_α+2ρ_β+2nd/dz(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iφ_k)
is of the form
(z-1)^2ρ_α-1(z+1)^2ρ_β-1∏_k=1^2n+1(z-b_k)
for some values of b_k∈ℂ. From the Gauss–Lucas Theorem we can actually conclude that |b_k|≤ 1. As it turns out, this derivative is a candidate for a minimizer of (<ref>). Therefore,
max_|z|=1|1/2ρ_α+2ρ_β+2nd/dz{(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iφ_k)}|
≥max_|z| = 1|(z-1)^2ρ_α-1(z+1)^2ρ_β-1∏_k=1^2n+1(z-a_k^∗)|
and Theorem <ref> immediately gives us that
2max_|z|=1|1/2ρ_α+2ρ_β+2nd/dz{(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iφ_k)}|
= max_|z| = 1|(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iφ_k)|.
By combining (<ref>), (<ref>) and (<ref>), we conclude that equality must hold in all the inequalities.
In particular,
max_|z|=1|1/2ρ_α+2ρ_β+2nd/dz{(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iφ_k)}|
= max_|z| = 1|(z-1)^2ρ_α-1(z+1)^2ρ_β-1∏_k=1^2n+1(z-a_k^∗)|
and
max_|z| = 1|(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iφ_k)| = max_|z| = 1|(z-1)^2ρ_α-1(z+1)^2ρ_β-1P(z)|.
As a consequence of (<ref>) together with uniqueness of the minimizer, we have that
1/2ρ_α+2ρ_β+2nd/dz{(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iφ_k)} =(z-1)^2ρ_α-1(z+1)^2ρ_β-1∏_k=1^2n+1(z-a_k^∗)
and so the minimizing points φ_k, which were chosen to satisfy (<ref>), are determined from the point set {a_k^∗}. Incidentally we have established that the solution to (<ref>) is unique and as a consequence all nodes e^iφ_k come in conjugate pairs. Furthermore, (<ref>) implies that after a possible rearrangement, it holds that θ_k^∗ = φ_k for k = 1,…,2n.
We now let x= (z+z^-1)/2 and recall that x∈ [-1,1] when |z| = 1. Under this transformation,
(z-e^iθ_k)(z-e^-iθ_k) = 2z(x-cosθ_k)
so if we arrange the angles θ_k^∗ in such a way that θ_k+n^∗ = θ_k^∗+π for k=1,…,n, then
(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iθ_k∗) = (2z)^ρ_α+ρ_β+n(x-1)^ρ_α(x+1)^ρ_β∏_k=1^n(x-cosθ_k^∗).
In conclusion, to each expression of the form
(x-1)^ρ_α(x+1)^ρ_β∏_k=1^n(x-cosθ_k),
there corresponds a function
(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^n(z-e^iθ_k)(z-e^-iθ_k)
and their absolute values are related through multiplication with the constant 2^ρ_α+ρ_β+n. Since the left-hand side of (<ref>) is minimal in terms of maximal modulus on the unit circle, we conclude that
max_x∈ [-1,1]|(x-1)^ρ_α(x+1)^ρ_β∏_k=1^n(x-cosθ_k^∗)| = I_n.
The result now follows from the uniqueness of the minimizer of (<ref>).
We end this section by listing some consequences of Theorem <ref>.
When ρ_α,ρ_β≥ 1/2, there is a unique configuration θ_1^∗,…,θ_2n^∗∈ [0,2π) such that
max_|z| = 1|(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iθ_k^∗)| = min_θ_1,…,θ_2nmax_|z| = 1|(z-1)^2ρ_α(z+1)^2ρ_β∏_k=1^2n(z-e^iθ_k)|.
Our main conclusion is the following monotonicity result which constitutes the final puzzle piece to proving the third case in Theorem <ref>.
If ρ_α,ρ_β≥ 1/2, then _n(ρ_α,ρ_β) decays monotonically to 2^1-ρ_α-ρ_β as n→∞ and for every n, we have that
_n(ρ_α,ρ_β)≤(2ρ_α/ρ_α+ρ_β)^ρ_α(2ρ_β/ρ_α+ρ_β)^ρ_β.
We begin by verifying the monotonicity of the sequence of Widom factors. From (<ref>) and Theorem <ref> we know that
_n(ρ_α,ρ_β) =
2^nmin_θ_1,…,θ_nmax_x∈ [-1,1](1-x)^ρ_α(1+x)^ρ_β|∏_k=1^n(x-cosθ_k)|
=
2^1-ρ_α-ρ_βmin_a_1,…,a_2n+1max_|z| = 1|(z-1)^2ρ_α-1(z+1)^2ρ_β-1∏_k=1^2n+1(z-a_k)|.
Clearly the right-hand side decreases monotonically in n since multiplying by z^2 does not change the absolute value and
max_|z|=1|z^2(z-1)^2ρ_α-1(z+1)^2ρ_β-1∏_k=1^2n+1(z-a_k^∗)|
≥min_a_1,…,a_2(n+1)+1max_|z| = 1|(z-1)^2ρ_α-1(z+1)^2ρ_β-1∏_k=1^2(n+1)+1(z-a_k)|.
Hence monotonicity of _n(ρ_α,ρ_β) follows.
To show the upper bound, we note that Theorem <ref> is still valid if n = 0 meaning that the expression
1/2ρ_α+2ρ_βd/dz{(z-1)^2ρ_α(z+1)^2ρ_β} = (z-1)^2ρ_α-1(z+1)^2ρ_β-1(z-a^∗)
minimizes (<ref>) in the case where n=0. Consequently,
2^1-ρ_α-ρ_βmin_amax_|z|=1|(z-1)^2ρ_α-1(z+1)^2ρ_β-1(z-a)| = max_x∈ [-1,1]|(1-x)^ρ_α(1+x)^ρ_β|
= (1+ρ_α-ρ_β/ρ_α+ρ_β)^ρ_α(1-ρ_α-ρ_β/ρ_α+ρ_β)^ρ_β = (2ρ_α/ρ_α+ρ_β)^ρ_α(2ρ_β/ρ_α+ρ_β)^ρ_β
and the upper bound now follows from the monotonicity of _n(ρ_α,ρ_β).
§ WHAT ABOUT THE REMAINING PARAMETERS Ρ_Α,Ρ_Β?
To conclude our investigation we question what can be said concerning the monotonicity properties of _n(ρ_α,ρ_β) for the remaining values of the parameters. These are precisely those which satisfy
0<ρ_α<1/2 and ρ_β≥ 1/2 or 0<ρ_β<1/2 and ρ_α≥ 1/2,
as illustrated by the blank strips in Figure <ref>.
First of all, we note the following continuity of the Widom factors. We mention in passing that a general result for the continuity of L^p Widom factors with p<∞ is provided in <cit.>.
For fixed n≥ 0, the Widom factor _n(ρ_α,ρ_β) is continuous in ρ_α,ρ_β≥ 0.
Let θ̃_k^∗ and θ_k^∗ denote the minimizing nodes associated with the parameters (ρ̃_α,ρ̃_β) and (ρ_α,ρ_β), respectively. For any choice of parameters we have
max_x∈[-1,1]|(1-x)^ρ̃_α(1+x)^ρ̃_β∏_k=1^n(x-cosθ̃_k^∗)| ≤max_x∈[-1,1]|(1-x)^ρ̃_α(1+x)^ρ̃_β∏_k=1^n(x-cosθ_k^∗)|
and vice versa when the roles
of the parameters (ρ_α,ρ_β) and (ρ̃_α,ρ̃_β) are interchanged.
Given ε>0 it is always possible to choose (ρ̃_α, ρ̃_β) close enough to (ρ_α,ρ_β) in a manner that, independently of the choice of a_k∈ [-1,1], it holds that
|max_x∈[-1,1]|(1-x)^ρ_α(1+x)^ρ_β∏_k=1^n(x-a_k)|-max_x∈[-1,1]|(1-x)^ρ̃_α(1+x)^ρ̃_β∏_k=1^n(x-a_k)||≤ε.
We thus find that
max_x∈[-1,1]|(1-x)^ρ̃_α(1+x)^ρ̃_β∏_k=1^n(x-cosθ̃_k^∗)| ≤max_x∈[-1,1]|(1-x)^ρ̃_α(1+x)^ρ̃_β∏_k=1^n(x-cosθ_k^∗)|
≤max_x∈[-1,1]|(1-x)^ρ_α(1+x)^ρ_β∏_k=1^n(x-cosθ_k^∗)|+ε
and therefore
max_x∈[-1,1]|(1-x)^ρ̃_α(1+x)^ρ̃_β∏_k=1^n(x-cosθ̃_k^∗)|-max_x∈[-1,1]|(1-x)^ρ_α(1+x)^ρ_β∏_k=1^n(x-cosθ_k^∗)|≤ε.
By symmetry, we may also deduce that the negative of the left-hand side is ≤ε
when (ρ̃_α, ρ̃_β) is sufficiently close to (ρ_α,ρ_β). This implies the desired continuity of _n(ρ_α,ρ_β) with respect to the parameters ρ_α,ρ_β.
When ρ_α,ρ_β∈ [0,1/2], we know from Theorem <ref> (part 2) that
_n(ρ_α,ρ_β)≤ 2^1-ρ_α-ρ_β.
The same theorem (part 3) shows that
_n(ρ_α,ρ_β)≥ 2^1-ρ_α-ρ_β
for ρ_α,ρ_β≥ 1/2. In particular, for a fixed n, Proposition <ref> implies the existence of parameters where the equality
_n(ρ_α,ρ_β) = 2^1-ρ_α-ρ_β
is attained. In fact, such parameters exist on any arc (in the first quadrant) connecting a point of [0,1/2]× [0,1/2] with a point in [1/2,∞)×[1/2,∞). However, these parameters may very well be n dependent. A natural question is for which parameters such an equality occurs.
If we allow ourselves to assume that (<ref>) can be extended slightly outside the parameter domain (ρ_α, ρ_β)∈ [0, 1/2] × [0, 1/2], it is natural to consider parameters for which (<ref>) vanishes. This occurs precisely on the circular segment
{(ρ_α,ρ_β) = (1/4+cosθ/√(8),1/4 +sinθ/√(8)): θ∈[-π/4,3π/4] }
which is illustrated in Figure <ref>. To be specific, we recall that M_n(α,β) defined in (<ref>) provides an upper bound of _n(ρ_α,ρ_β) and that
M_n+1(α,β)/M_n(α,β) = (n+q+1)(n+α+β+1)(n+α+β+1/2)^q-1/2/(n+α+β+1/2+1)^q+1/2(n+α+β/2+1).
By replacing all occurrences of the natural number n with the real variable x on the right-hand side, we obtain the function f(x) from (<ref>). We previously saw in (<ref>) that
f'(x)/f(x)=c_2(α,β)x^2+c_1(α,β)x+c_0(α,β)/(x+α+1)(x+α+β+1)(x+α+β/2)(x+α+β/2+1)(x+α+β/2+3/2)
and that the leading term c_2(α,β)x^2 vanishes precisely when |α| = |β| = 1/2. This indicates that the convergence of f'(x)/f(x)→ 0 as x→∞ is “rapid” in this case. If α^2+β^2>1/4, then f'(x)/f(x) will eventually be positive if x is large enough. We therefore believe that the monotonicity properties of M_n(α,β) change somewhere in the vicinity of the circle
{(α,β):α,β≥ 0, α^2+β^2 = 1/4}.
Replacing the variables (α,β) by (ρ_α,ρ_β) transfers the circular segment (<ref>) precisely to (<ref>).
Since we do not know how far the inequality
_n(ρ_α,ρ_β)≤ M_n(α,β)
can be extended outside of -1/2≤α,β≤ 1/2, our theoretical approach cannot be applied to study this case. We resort to a numerical investigation of the behavior of the corresponding Widom factors using a generalization of the Remez algorithm due to Tang <cit.>. See also <cit.> for an overview on how this algorithm can be applied to the numerical study of Chebyshev polynomials. The numerical results are illustrated in Figure <ref>. It is clearly suggested by these plots that in the vicinity of the set specified in (<ref>), the monotonicity properties of _n(ρ_α,ρ_β) seem to shift from monotonically increasing to monotonically decreasing as we move from the inside of the disk to its exterior. Since these considerations are based on numerical experiments, we are left to speculate if this is indeed the true picture. We formulate the following conjecture.
Suppose that ρ_α, ρ_β≥ 0.
* If
(ρ_α-1/4)^2+(ρ_β-1/4)^2< 1/8,
then _n(ρ_α,ρ_β) is monotonically increasing to 2^1-ρ_α-ρ_β as n→∞.
* If
(ρ_α-1/4)^2+(ρ_β-1/4)^2> 1.184/8,
then _n(ρ_α,ρ_β) is monotonically decreasing to 2^1-ρ_α-ρ_β as n→∞.
§ APPENDIX
For the benefit of the reader we restate the definitions of the terms c_0(α,β) and c_1(α,β) as defined in (<ref>) and (<ref>).
c_1(α,β) =3α^3/4+(4β+8)α^2/8+(β^2-1)α/4+β^3/2+β^2-β/4-1/2,
c_0(α,β) =α^4/4+(3β+6)α^3/8+(β+2)^2α^2/8+(β^2-1)(β+2)α/8+β^4/8+β^3/2+3β^2/8-β/4-1/4.
We aim to show that these are upper bounded by 0 in the parameter domain {-1/2≤β≤α≤ 1/2}. This is a necessary part of proving Lemma <ref>.
If -1/2≤β≤α≤ 1/2, then c_k(α,β)≤ 0 for k = 0,1 with equality if and only if |α| = |β| = 1/2.
To show that this is indeed true, we perform a detailed analysis of the coefficients c_0(α,β), c_1(α,β) on the set {-1/2≤β≤α≤1/2} illustrated in Figure <ref>.
The boundary of {(α,β):-1/2≤β≤α≤ 1/2} is parametrized with t∈ [0,1] through
c_0(1/2,t-1/2) =1/8(t+5/2)(t+1)t(t-1),
c_0(t-1/2,-1/2) =t(t-1)(1/4t^2+5/16t+1/8),
c_0(t-1/2,t-1/2) =(t+1/2)^2t(t-1),
c_1(1/2,t-1/2) =t(t-1)(t/2+7/8),
c_1(t-1/2,-1/2) =3/4t(t-1)(t+1/2),
c_1(t-1/2,t-1/2) =2(t+1/2)t(t-1).
Based on these factorizations it becomes apparent that c_0<0 and c_1<0 hold for t∈ (0,1); these values of t are precisely those which correspond to the boundary of {(α,β):-1/2≤β≤α≤ 1/2} with the vertices removed.
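These six boundary factorizations can also be checked symbolically. The following SymPy sketch (our own verification, transcribing c_0 and c_1 from (<ref>) and (<ref>)) confirms all of them.

```python
import sympy as sp

a, b, t = sp.symbols('alpha beta t')
half = sp.Rational(1, 2)

c1 = (sp.Rational(3, 4) * a**3 + (4 * b + 8) / 8 * a**2 + (b**2 - 1) / 4 * a
      + b**3 / 2 + b**2 - b / 4 - half)
c0 = (a**4 / 4 + (3 * b + 6) / 8 * a**3 + (b + 2)**2 / 8 * a**2
      + (b**2 - 1) * (b + 2) / 8 * a
      + b**4 / 8 + b**3 / 2 + 3 * b**2 / 8 - b / 4 - sp.Rational(1, 4))

checks = [
    (c0, {a: half, b: t - half}, sp.Rational(1, 8) * (t + sp.Rational(5, 2)) * (t + 1) * t * (t - 1)),
    (c0, {a: t - half, b: -half}, t * (t - 1) * (t**2 / 4 + sp.Rational(5, 16) * t + sp.Rational(1, 8))),
    (c0, {a: t - half, b: t - half}, (t + half)**2 * t * (t - 1)),
    (c1, {a: half, b: t - half}, t * (t - 1) * (t / 2 + sp.Rational(7, 8))),
    (c1, {a: t - half, b: -half}, sp.Rational(3, 4) * t * (t - 1) * (t + half)),
    (c1, {a: t - half, b: t - half}, 2 * (t + half) * t * (t - 1)),
]
print(all(sp.expand(expr.subs(sub) - target) == 0 for expr, sub, target in checks))  # True
```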
Differentiating c_k(α,β) for k = 0,1 with respect to α yields
∂/∂αc_0(α,β) = α^3+9(β+2)/8α^2+(β+2)^2/4α-(1-β^2)(β+2)/8,
∂^2/∂α^2c_0(α,β) = 3α^2+9/4(β+2)α+(β+2)^2/4,
∂^3/∂α^3c_0(α,β) = 6α+9/4(β+2),
∂/∂αc_1(α,β) = 9/4α^2+(β+2)α-1-β^2/4.
We begin by showing that c_0(α,β)<0 unless the parameters belong to the vertices of {(α,β):-1/2≤β≤α≤ 1/2}. Assuming that α,β∈ [-1/2,1/2] we obtain from (<ref>) that
∂^3/∂α^3c_0(α,β)=6α+9/4(β+2)≥-3-9/8+9/2=3/8.
In particular, this implies that
α↦∂/∂αc_0(α,β)
is a convex function on [-1/2,1/2] for any fixed β∈ [-1/2,1/2]. Consequently, if β≤ 0 then
max_α∈ [β,0]∂/∂αc_0(α,β)≤max{∂/∂αc_0(β,β),-(1-β^2)(β+2)/8}.
Clearly, the quantity
-(1-β^2)(β+2)/8
is negative if -1/2≤β≤ 1/2. We claim that the same is true for ∂/∂αc_0(β,β). To see this we introduce the function h(β) = ∂/∂αc_0(β,β) which is given by
h(β) = β^3+9/8(β+2)β^2+(β+2)^2/4β-(1-β^2)(β+2)/8 .
Differentiating two times we obtain
h”(β) = 15β+7.
It is evident that h(β) is convex if β≥-7/15. On the other hand, if -1/2≤β≤ -7/15 then
h'(β) = 15/2β^2+7β+7/8≤15/8-49/15+7/8<0
which shows that h is decreasing on [-1/2,-7/15]. Consequently, if -1/2≤β≤ 0 we have
∂/∂αc_0(β,β) = h(β)≤max{h(-1/2),h(0)} =max{-1/8,-(1-β^2)(β+2)/8}<0.
By refering to (<ref>) we obtain
max_α∈ [β,0]∂/∂αc_0(α,β)<0
and therefore c_0(α,β) is strictly decreasing on [β,0]. From (<ref>) we gather that if β∈ [-1/2,1/2] then for any α≥0,
∂^2/∂α^2c_0(α,β) = 3α^2+9/4(β+2)α+(β+2)^2/4≥ 0.
This implies that α↦ c_0(α,β) defines a convex function on [0,1/2] and since we already concluded that α↦ c_0(α,β) is strictly decreasing on [β,0], we find that for any fixed β∈ [-1/2,1/2]
max_α∈ [β,1/2]c_0(α,β) = max{c_0(β,β),c_0(1/2,β)}≤ 0
and c_0(α,β) =0 if and only if α,β∈{-1/2,1/2}. This in particular shows that c_0(α,β)≤ 0 with equality if and only if α = β = ± 1/2 or α = 1/2 and β = -1/2.
We now proceed to study the coefficient c_1(α,β) and show that c_1(α,β)≤ 0 holds on the parameter domain {(α,β):-1/2≤β≤α≤ 1/2}. This turns out to be straightforward. From (<ref>) we find that
∂/∂αc_1(α,β) = 9/4α^2+(4β+8)/4α+(β^2-1)/4
= 9/4(α+2β+4/9+√((2β+4/9)^2+1-β^2/9))(α-2β+4/9+√((2β+4/9)^2+1-β^2/9)).
Since -1/2≤β≤α≤ 1/2, we have
(α+2β+4/9+√((2β+4/9)^2+1-β^2/9))≥ -1/2+2β+4/9+2β+4/9≥ -1/2+2·3/9 = 2/3-1/2>0
and consequently the sign of ∂/∂αc_1(α,β) coincides with that of
(α-2β+4/9+√((2β+4/9)^2+1-β^2/9)).
This is sufficient to conclude that the maximum of c_1(α,β) on [β,1/2] must be attained at an endpoint of [β,1/2] and we gather that
max_α∈ [β,1/2]c_1(α,β) = max{c_1(β,β),c_1(1/2,β)}≤ 0
with equality attained if and only if α = β = ± 1/2.
§ ACKNOWLEDGEMENT
This study originated from discussions held during a SQuaRE meeting at the American Institute of Mathematics (AIM) facility in Pasadena, CA. We extend our heartfelt gratitude to AIM for their generous hospitality and for creating such an inspiring scientific environment.
|
http://arxiv.org/abs/2409.03739v1 | 20240905175352 | Better bounds on Grothendieck constants of finite orders | [
"Sébastien Designolle",
"Tamás Vértesi",
"Sebastian Pokutta"
] | math.OC | [
"math.OC",
"quant-ph"
] |
[email protected]
Zuse-Institut Berlin, 14195 Berlin, Germany
HUN-REN Institute for Nuclear Research, 4001 Debrecen, Hungary
Zuse-Institut Berlin, 14195 Berlin, Germany
Institute of Mathematics, Berlin Institute of Technology, 10587 Berlin, Germany
§ ABSTRACT
Grothendieck constants K_G(d) bound the advantage of d-dimensional strategies over 1-dimensional ones in a specific optimisation task.
They have applications ranging from approximation algorithms to quantum nonlocality.
However, apart from d=2, their values are unknown.
Here, we exploit a recent Frank-Wolfe approach to provide good candidates for lower bounding some of these constants.
The complete proof relies on solving difficult binary quadratic optimisation problems.
For d∈{3,4,5}, we construct specific rectangular instances that we can solve to certify better bounds than those previously known; by monotonicity, our lower bounds improve on the state of the art for d≤9.
For d∈{4,7,8}, we exploit elegant structures to build highly symmetric instances achieving even greater bounds; however, we can only solve them heuristically.
We also recall the standard relation with violations of Bell inequalities and elaborate on it to interpret generalised Grothendieck constants d2 as the advantage of complex quantum mechanics over real quantum mechanics.
Motivated by this connection, we also improve the bounds on d2.
Better bounds on Grothendieck constants of finite orders
Sebastian Pokutta
5th September 2024
========================================================
§ INTRODUCTION
Published in French in a Brazilian journal, Grothendieck's pioneering work on Banach spaces from 1953 <cit.>, now informally known as his Résumé, has long remained unnoticed.
In 1968, Lindenstrauss and Pełczyński <cit.> discovered it and rephrased the main result, the Grothendieck inequality, which proves a relationship between three fundamental tensor norms through the so-called Grothendieck constant, denoted K_G.
Since then, this far-reaching theorem has found numerous applications <cit.>, in particular in combinatorial optimisation where it is at the heart of an algorithm to approximate the cut-norm of a matrix <cit.>.
Quantum information is another field where this result is influential: following early observations by Tsirelson <cit.>, an explicit connection has been established with the noise robustness in Bell experiments <cit.>.
These experiments aim at exhibiting a fascinating property of quantum mechanics in correlation scenarios: nonlocality <cit.>.
The link with Grothendieck's theorem has since raised a surge of interest in the value of K_G(3), the Grothendieck constant of order three.
Many works have thus demonstrated increasingly precise lower bounds <cit.> and upper bounds <cit.> on its value.
More recently, a numerical method has also been developed to come up with an even more precise (but not provable) estimate of K_G(3) <cit.>.
For Grothendieck constants of higher orders, the link with quantum nonlocality remains <cit.>.
However, the bounds on their values have been less studied and are less tight <cit.>.
There are quite a few difficulties that explain this relative scarcity of results, many of them being manifestations of the curse of dimensionality.
Finding suitable high-dimensional ansätze indeed becomes increasingly hard and resulting instances involve sizes that rapidly become intractable.
In this article, we combine the recent projection technique from <cit.> with the powerful solver developed in <cit.> to obtain better lower bounds on K_G(d) for 3≤ d≤9.
Following <cit.>, we also consider symmetric structures in high dimensions emerging from highly symmetric line packings <cit.> to suggest even better bounds on K_G(4), K_G(7), and K_G(8), which we unfortunately cannot prove, as they involve optimisation problems that we only solve heuristically.
We also consider the generalised Grothendieck constants K_G(d,2) and interpret them as the advantage of d-dimensional quantum mechanics over real quantum mechanics, an interpretation that was already studied in <cit.> but recently received more attention through the powerful results of <cit.>.
Our bounds on K_G(d) are analytical, while those on K_G(d,2) rely strongly on numerical methods: we post-process the upper bound on K_G(3,2) to convert it into an exact result, but the lower bounds remain inaccurate.
We first formally define the constants we want to bound in <ref> before presenting in <ref> our method and main results on lower bounds on K_G(d), summarised in <ref>.
Then we turn to generalised constants in <ref>, where we review existing bounds before deriving ours, which are particularly tight on the value of K_G(3,2).
Finally, we recall the connection with quantum mechanics in <ref> and conclude in <ref>.
§ PRELIMINARIES
Given a real matrix M of size m_1× m_2, we define
dM=max{∑_x=1^m_1∑_y=1^m_2 M_xya_xb_y | ∀ x∈[m_1], a_x∈ S^d-1, ∀ y∈[m_2], b_y∈ S^d-1},
where [m]={1,…,m} and where S^d-1 is the d-dimensional unit sphere, which reduces to S^0={-1,1} for d=1.
By introducing the set
dm_1,m_2={X | X_xy=a_xb_y, ∀ x∈[m_1], a_x∈ S^d-1, ∀ y∈[m_2], b_y∈ S^d-1},
we get the equivalent definition
dM=max{MX | X∈dm_1,m_2}.
The reason for this name comes from the interpretation of the Grothendieck constant that we are about to define in the context of rank-constrained semidefinite programming <cit.>.
In the following, when m_1=m_2, we simply write dm,m=dm; also, when the size m_1,m_2 is either clear from the context or irrelevant, we will use the shorthand notation dm_1,m_2=d.
In essence, the Grothendieck inequality states that there exists a (finite) constant independent of the size of the matrix M that bounds the ratio between the quantities dM for various d.
More formally, given n≤ d, for all real matrices M, we have <cit.>
dM≤ K_G(d,n)·nM,
so that the exact definition of these generalised Grothendieck constants is
K_G(d,n)=sup{dM/nM | M∈ℝ^m_1× m_2, m_1,m_2∈ℕ}.
The standard Grothendieck constant of order d is obtained when n=1, in which case we use the shorthand notation K_G(d,1)=K_G(d).
The Grothendieck constant of infinite order K_G initially studied in <cit.> corresponds to the limit lim_d→∞K_G(d), but it is outside the scope of our work, although we briefly mention it in <ref>.
We refer the reader interested in general aspects and further generalisations of these constants to <cit.>.
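To make the definitions above concrete, consider the 2×2 matrix M with rows (1,1) and (1,-1), a worked example we add for illustration. A brute-force search over signs gives the value 2 for d=1, while optimising over unit vectors in the plane reaches 2√2, so this single matrix already certifies the well-known bound K_G(2)≥√2. A minimal Python sketch:

```python
import itertools
import numpy as np

M = np.array([[1.0, 1.0], [1.0, -1.0]])

# d = 1: brute force over all sign assignments a_x, b_y in {-1, +1}
signs = [np.array(s) for s in itertools.product([-1, 1], repeat=2)]
val_1 = max(a @ M @ b for a in signs for b in signs)

# d = 2: scan unit vectors b_y = (cos p_y, sin p_y); the optimal a_x points
# along sum_y M_xy b_y, so the value is a sum of Euclidean norms
angles = np.linspace(0.0, 2 * np.pi, 361)
val_2 = max(
    sum(np.linalg.norm(M[x, 0] * b1 + M[x, 1] * b2) for x in range(2))
    for p1 in angles for p2 in angles
    for b1, b2 in [(np.array([np.cos(p1), np.sin(p1)]),
                    np.array([np.cos(p2), np.sin(p2)]))]
)
print(val_1, val_2, val_2 / val_1)   # 2.0, about 2.828, ratio about sqrt(2)
```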
§ LOWER BOUNDS ON KG(D)
Given the definition of K_G(d) in <ref>, any matrix M automatically provides a valid lower bound.
However, there are two main difficulties when looking for good lower bounds.
Given m_1 and m_2, how to find a matrix M such that the inequality in <ref> is as tight as possible?
Given such a matrix M, how to compute the resulting 1M or at least a close upper bound?
<ref> is the most limiting one, as the computation of 1M is equivalent to MaxCut and therefore NP-hard, so that the size of the matrices we can consider is limited by the methods and resources available to compute this number.
In general, solving <ref> exactly is out of reach, but good candidates can be found and this suffices to derive bounds.
In the rest of this section, we first recall the method from <cit.> for both problems above, in particular for <ref>.
To illustrate the idea of the method, we give up on solving <ref> for the sake of the elegance of the solution to <ref>, providing instances of remarkable symmetry for which future work may solve <ref> to improve on the bounds on K_G(4), K_G(7), and K_G(8).
We then focus on <ref> and consider rectangular matrices allowing us to obtain certified bounds on K_G(3), K_G(4), and K_G(5) that beat the literature up to d=9 by monotonicity.
§.§ Obtaining facets of the symmetrised correlation polytope
When n=1, the set 1 is a polytope, called the correlation polytope <cit.>.
In <cit.> the method used to solve <ref> is to start from a point P∈d and to derive M as a hyperplane separating P from 1.
This hyperplane is obtained by solving the projection of P onto the correlation polytope 1 via Frank-Wolfe algorithms.
We refer to <cit.> for a gentle introduction to FW algorithms, to <cit.> for a complete review, and to <cit.> for the details of our implementation.
Note that some accelerations developed in <cit.> and used in this work are now part of the package <cit.>.
In particular, the symmetrisation described in <cit.> is crucial, but as it depends on the underlying group and its action on the matrices we consider, exposing the full structure of the symmetrised correlation polytope for each group would be tedious.
Instead, we give a brief general definition of this polytope and refer to <cit.> for a detailed example.
Given a group G acting on [m_1] and [m_2] (by means of signed permutations), the symmetrised correlation polytope G(1m_1,m_2) is the convex hull of the averages of all orbits of the vertices of 1m_1,m_2 under the action of G.
Note that this polytope lives in a subspace of the space invariant under the action of G on matrices of size m_1× m_2.
In <ref> we illustrate the various geometrical cases that we encounter in this work.
Their understanding justifies the procedure that we describe in <ref> to derive facets of the symmetrised correlation polytope, that is, separating hyperplanes touching the polytope on a space of codimension one.
Importantly, for symmetric instances, the dimension of the ambient space is strictly smaller than m_1m_2.
In the following, we consistently denote these facets by A, while M will indicate separating hyperplanes without this extra property.
Putative facets will also be denoted by A, that is, in cases where we rely on heuristic methods to compute ⟨A⟩_1.
These heuristic methods play an important role throughout the FW algorithm as they quickly give a good direction to make primal progress.
Here we present them in a generalised framework that will prove useful when generalising our algorithm to K_G(d→2) in <ref>.
At a given step of the FW algorithm minimising the squared distance to the set 𝒬_n, finding the best direction with respect to the current gradient M is exactly the problem in <ref>.
This subroutine is called the Linear Minimisation Oracle (LMO) and, in the course of the algorithm, we usually use an alternating minimisation to obtain heuristic solutions that are enough to make progress <cit.>.
More formally, for all x∈[m_1], we pick a random a^h_x∈ S^n-1 and we compute, for all y∈[m_2],
b^h_y = argmax_{b_y∈ S^{n-1}}∑_{y=1}^{m_2} b_y·(∑_{x=1}^{m_1} M_{xy}a^h_x), that is, b^h_y = (∑_{x=1}^{m_1} M_{xy}a^h_x)/‖∑_{x=1}^{m_1} M_{xy}a^h_x‖.
In the unlikely case where the denominator happens to be zero, we set b^h_y to be a predetermined vector, for instance, (1,0,…,0)∈ S^n-1.
Similarly, we then use these b^h_y to compute
a^h_x = argmax_{a_x∈ S^{n-1}}∑_{x=1}^{m_1} a_x·(∑_{y=1}^{m_2} M_{xy}b^h_y), that is, a^h_x = (∑_{y=1}^{m_2} M_{xy}b^h_y)/‖∑_{y=1}^{m_2} M_{xy}b^h_y‖,
and we repeat <ref> until the objective value in <ref> stops increasing, up to numerical precision when n>1.
Depending on the initial choice of a^h_x, the value attained will vary.
Therefore we restart the procedure a large number of times: from a few hundred or thousand within the FW algorithm, to 10^5 in <ref> and 10^8 in <ref>.
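For concreteness, the following Python sketch implements this alternating heuristic with restarts; it is a simplified stand-in for the LMO used in our actual implementation (the function and variable names are ours), and it returns a heuristic lower bound on ⟨M⟩_n.

    import numpy as np

    def _normalise_rows(X):
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        fallback = np.zeros_like(X)
        fallback[:, 0] = 1.0                      # predetermined vector (1, 0, ..., 0)
        return np.where(norms > 0, X / np.where(norms > 0, norms, 1.0), fallback)

    def heuristic_value(M, n, restarts=1000, tol=1e-12, seed=0):
        """Heuristic lower bound on <M>_n via restarted alternating maximisation."""
        rng = np.random.default_rng(seed)
        m1, m2 = M.shape
        best = -np.inf
        for _ in range(restarts):
            a = _normalise_rows(rng.normal(size=(m1, n)))
            value = -np.inf
            while True:
                b = _normalise_rows(M.T @ a)      # optimal b_y for fixed a_x
                a = _normalise_rows(M @ b)        # optimal a_x for fixed b_y
                new_value = float(np.sum(a * (M @ b)))
                if new_value <= value + tol:
                    break
                value = new_value
            best = max(best, value)
        return best

For n=1 the normalised rows reduce to ±1, so the same routine also provides heuristic solutions for the binary problem.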
In the following, we clearly mention whether the last part of our method — the computation of ⟨M⟩_1 (or ⟨M⟩_2 in <ref>), where M is the last gradient returned by our FW algorithm — is done with the heuristic procedure just described, which comes with no theoretical guarantee, or with a more elaborate solver for which the exact value can be certified or rigorously upper bounded.
§.§ Heuristic results with highly symmetric line packings
Starting from a good point P∈𝒬_d in the procedure described above is key to obtaining good bounds on K_G(d).
Following <cit.>, we observed that spanning the d-dimensional projective unit sphere in the most uniform way seems to be favourable, which served as our guiding principle to come up with structures in higher dimensions.
In this <ref>, we use the same distribution of points on both sides in <ref>, that is, m_1=m_2=m and a_1=b_1,… a_m=b_m; the matrix P is therefore a Gram matrix.
The intuitive but vague notion of uniform spreading on the sphere can be formally seen as the problem of finding good line packings <cit.>.
In <ref>, for various symmetric d-dimensional line packings, we give the values of ⟨P⟩_d/⟨P⟩_1 and ⟨A⟩_d/⟨A⟩_1.
Since the Gram matrix P is positive semidefinite, the ratio ⟨P⟩_d/⟨P⟩_1 is upper bounded by K_G^≽(d), the positive semidefinite Grothendieck constant of order d <cit.>, whose value is known to be γ(d)π/2 <cit.>, where γ(d) is defined in <ref> below.
Note that, for d≥9, this bound is the current best known lower bound on K_G(d), which is naturally greater than K_G^≽(d).
Among the structures that we present in <ref>, the d-dimensional configurations reaching the best known kissing number in dimension d are of particular interest.
In particular, in dimension d=3, different realisations of the kissing number give rise to different values of ⟨A⟩_3/⟨A⟩_1.
As the icosahedron is an optimal line packing, this indicates that giving good bounds on K_G(d) does not, in general, boil down to finding good line packings.
Note that the uniqueness of the configuration in dimension d=4 was recently proven <cit.>.
Upon inspection of <ref> it is apparent that, in the small dimensions that we consider in this work (d≤8), root systems play an important role in obtaining good kissing configurations.
These structures are inherently symmetric and play an important role in the theory of Lie groups and Lie algebras.
We refer to <cit.> for definitions and elementary properties.
The family D_d has been studied in <cit.>; there, starting from the Gram matrix P of size d(d-1), the exact value of ⟨P⟩_1 is computed by invoking symmetry arguments.
Moreover, for d∈{3, 4, 5}, the optimal diagonal modification of λ=2/3 is derived for D_d; in other words, <cit.> shows that, among all matrices of the form P-λ𝕀, the matrix P-2/3𝕀 gives the best lower bound on K_G(d).
Here we go a bit further by extending this result to d∈{6, 7, 8}, a fact that was suspected in <cit.>, but we also observe that the corresponding matrices A=P-2/3𝕀 are actually facets of the symmetrised polytope D_d(𝒬_1^{d(d-1)}).
Moreover, we computationally establish optimal diagonal modifications for other configurations, see <ref>.
About the root system E_8, we note that the value 45/31 reached without diagonal modification is already provided in <cit.>, where it is attributed to Reeds and Sloane.
It indeed follows from <cit.> and from the transitivity of the Weyl group, see the acknowledgements in <ref>.
However, these arguments do not directly apply to solve the diagonally modified case.
Note that in all cases involving irrational numbers (except the icosahedron), the facet A cannot be obtained via a diagonal modification of P.
This is because the off-diagonal elements of P feature rational and irrational numbers.
The exception of the icosahedron is due to its extra property of being an equiangular tight frame (ETF), all off-diagonal elements being ±1/√(5), which can then lead to a facet by taking an irrational diagonal modification.
The 600-cell and the 120-cell have already been studied in <cit.>, where symmetry arguments are exploited to compute the value of ⟨P⟩_1, even for the 300×300 matrix arising from the 120-cell.
Interestingly, although the difference in value for ⟨P⟩_4/⟨P⟩_1 is almost negligible, the facets that we obtain here “activate” the advantage of the 120-cell.
Also, we emphasise the advantage of our geometric approach for these structures: in <cit.>, the optimal diagonal modification of 23/9 was indeed derived for the 600-cell, giving rise to the value of 35(-37+27√(5))/569≈1.4378, which is significantly smaller than our result of about 1.4740, see <ref>.
This is because we are not restricted to diagonal modifications in our algorithm, which gives it a significant advantage for these configurations.
For some instances of ⟨P⟩_1 and ⟨A⟩_1, the size of the matrix is too large for numerical solvers to handle.
The values presented with an asterisk in <ref> and summarised in <ref> are then obtained heuristically and cannot be considered final results until these values are confirmed to be optimal.
For this, the high symmetry of these instances, detrimental for our branch-and-bound algorithm and not exploited in available QUBO/MaxCut solvers, could play a crucial role.
We leave this open for further research.
§.§ Exact results in asymmetric scenarios
Now, we discuss how to address <ref>, namely, the computation of
⟨M⟩_1=max_{a_x=±1, b_y=±1}∑_{x=1}^{m_1}∑_{y=1}^{m_2} M_{xy}a_x b_y.
In <cit.> this problem is reformulated into a Quadratic Unconstrained Binary Optimisation (QUBO) instance, which is then given to the solver QuBowl from <cit.>.
The instance solved there involves m_1=m_2=97 and is bigger than the ones from <cit.> (of maximal size m_1=m_2=92).
However, it is worth noticing that the solution of the exact instance solved in <cit.> stands out in the landscape of all binary variables.
More precisely, the exact solution is likely to be unique and its value is far above the other feasible points, so that the solver could efficiently exclude vast portions of the search space.
This empirical observation is supported by two facts: instances from later stages of convergence of the Frank-Wolfe method cannot be solved by QuBowl (even within ten times as much time), and the symmetric instance A of size 63×63 obtained from the E_7 root system (see <ref>) cannot be solved by QuBowl either, although it is way smaller.
Given these difficulties, we consider other formulations exploiting the specificities of the problem.
In particular, the branch-and-bound algorithm from <cit.> works by breaking the symmetry between the binary variables a_x and b_y in <ref>, eliminating the latter to obtain:
⟨M⟩_1=max_{a_x=±1}∑_{y=1}^{m_2}|∑_{x=1}^{m_1} M_{xy}a_x|,
which has half as many variables.
Importantly, the parameter m_2 plays a less critical role in the complexity of the resulting algorithm, which is indeed still exponential in m_1, but now linear in m_2.
This suggests using rectangular matrices M in our procedure, with a large m_2 to increase the achievable lower bound on K_G(d) and a relatively small m_1 to maintain the possibility to solve <ref>, that is, to compute ⟨M⟩_1.
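A direct, purely illustrative Python transcription of this reformulation is given below; it enumerates the 2^{m_1-1} sign patterns (fixing a_1=+1 by symmetry) and is only meant to show the scaling in m_1 and m_2, not to replace the branch-and-bound solver.

    import itertools
    import numpy as np

    def rank_one_value(M):
        """Exact <M>_1 = max over a in {+/-1}^{m1} of sum_y |sum_x M_xy a_x|."""
        m1, _ = M.shape
        best = -np.inf
        # Fixing a_1 = +1 is allowed since flipping all signs leaves the value unchanged.
        for tail in itertools.product((1.0, -1.0), repeat=m1 - 1):
            a = np.array((1.0,) + tail)
            best = max(best, float(np.abs(a @ M).sum()))
        return best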
Such rectangular matrices can be obtained by following the procedure described in <ref> starting from rectangular matrices P∈𝒬_d^{m_1,m_2}.
Good starting points are still obtained by using well-spread distributions on the sphere, but these distributions a_1,…,a_{m_1} and b_1,…,b_{m_2} are now different, so that the matrix P with entries a_x·b_y is not a Gram matrix any more.
We give our best lower bounds on K_G(d) in <ref> and present relevant details below.
All matrices can be found in the supplementary files [Supplementary files can be found on Zenodo: <https://doi.org/10.5281/zenodo.13693164>.].
In dimension d=3, despite our efforts, we could not find a rectangular instance beating the 97×97 from <cit.>.
However, the quantum point provided therein, namely, the polyhedron on the Bloch sphere consisting of 97 pairs of antipodal points, is not the best quantum strategy to violate the inequality.
By using the Lasserre hierarchy at its first level, we can indeed obtain a slightly better violation, hence a tighter lower bound on K_G(3).
Summarising, with the matrix M from <cit.> that had been constructed starting from a Gram matrix P∈𝒬_3^{97}, we have:
* ⟨M⟩_3 ≥ ⟨M,P⟩ ≈ 2.0000×10^{22},
* ⟨M⟩_3 ≥ ⟨M,P'⟩ ≈ 2.0001×10^{22} where P'∈𝒬_3^{97} is obtained at the first level of the Lasserre hierarchy (rational expression in the supplementary file <cit.>),
* ⟨M⟩_1 = 13921227005628453160441 (solved with QuBowl <cit.>).
In dimension d=4, we start with the 600-cell on one side and use the compound formed by the same 600-cell together with its dual (the 120-cell) on the other side.
This creates a matrix P∈𝒬_4^{60,360} we can run our separation procedure on.
Similarly to <cit.>, we exploit the symmetry of P throughout the algorithm to reduce the dimension of the space and accelerate the convergence.
The resulting matrix A already has integer coefficients and satisfies:
* ⟨A⟩_4 ≥ ⟨A,P⟩ = 30(227668+322725√(2)+170064√(5)+182375√(10)),
* the first level of the Lasserre hierarchy numerically confirms this value as optimal,
* ⟨A⟩_1 = 33135128 (solved with the branch-and-bound algorithm from <cit.>).
In dimension d=5, we make use of structures in Sloane's database <http://neilsloane.com/grass/> <cit.>.
On one side, we directly use their five-dimensional structure with 65 lines.
On the other side, we take their five-dimensional structure with 37 lines and augment it by adding the center of all edges, properly renormalised to lie on the sphere, which creates a five-dimensional structure with 385 lines.
The resulting matrix P∈𝒬_5^{65,385} goes through our separation algorithm until a satisfactory precision is attained.
After rounding we obtain a matrix M satisfying:
* ⟨M⟩_5 ≥ ⟨M,P⟩ ≈ 2.1061×10^8,
* ⟨M⟩_5 ≥ ⟨M,P'⟩ ≈ 2.1068×10^8 where P'∈𝒬_5^{65,385} is obtained at the first level of the Lasserre hierarchy (rational expression in the supplementary files <cit.>),
* ⟨M⟩_1 = 141074623 (solved with the branch-and-bound algorithm from <cit.>).
§ BOUNDS ON KG(D->2)
We now turn to the generalised Grothendieck constants K_G(d→2), which have not received a lot of attention so far.
Motivated by the quantum interpretation of these constants (see <ref>), we extend the techniques presented above and in previous works to refine the bounds known on their values.
§.§ Bounds arising from the literature
There is an explicit lower bound on K_G(d→n) for any d>n that uses a particular one-parameter family of linear functionals of infinite size.
This bound, initially proven in <cit.> (see also <cit.> for the proof), reads
K_G(d→n) ≥ γ(d)/γ(n), with γ(d)=(2/d)(Γ((d+1)/2)/Γ(d/2))^2,
where Γ(z)=∫_0^∞ t^{z-1}e^{-t} dt is the gamma function.
When n=2, γ(n)=π/4 and <ref> can be rewritten:
K_G(d→2) ≥ d\binom{d-1}{k}^2/2^{2d-3} when d=2k and K_G(d→2) ≥ 2^{2d+1}/(d\binom{d-1}{k}^2 π^2) when d=2k+1.
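For reference, the following short Python snippet (ours, not part of the released code) evaluates γ(d) and the resulting lower bound γ(d)/γ(2) on K_G(d→2); for d=3 and d=4 it reproduces 32/(3π^2)≈1.0808 and 9/8=1.125.

    import math

    def gamma_d(d):
        # gamma(d) = (2/d) * (Gamma((d+1)/2) / Gamma(d/2))**2
        return (2.0 / d) * math.exp(2.0 * (math.lgamma((d + 1) / 2) - math.lgamma(d / 2)))

    for d in range(3, 10):
        print(d, gamma_d(d) / gamma_d(2))   # lower bound on K_G(d -> 2); gamma(2) = pi/4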
The bounds in <ref> are better than indirect bounds derived from natural combinations of bounds on the Grothendieck constants K_G(d), even when considering the improved bounds proven in this article.
Starting with the Grothendieck inequality (<ref>), dividing both sides by ⟨M⟩_1 and lower and upper bounding the left and right sides, we indeed obtain the following relation
K_G(d) ≤ K_G(d→n) K_G(n).
Then, inserting K_G(2)=√(2) from <cit.> and using our lower bounds exposed in <ref> yields K_G(3→2) ≥ 1.0159, K_G(4→2) ≥ 1.0506, and K_G(5→2) ≥ 1.0560, which does not improve on the values from <ref>.
As can be seen in <ref>, <ref> also outperforms all known bounds obtained with finite matrices.
Therefore, to the best of our knowledge, <ref> represents the state of the art.
As far as the authors know, there are no nontrivial upper bounds on K_G(d→n) readily available in the literature.
The higher complexity of finding such bounds is partly explained by the natural direction imposed by the definition of K_G(d→n) as a supremum, see <ref>.
In <ref> below, we give a general algorithmic method to overcome this difficulty.
§.§ Numerical lower bounds on KG(d->2)
Similarly to <ref>, providing a matrix M together with an upper bound on ⟨M⟩_2 and a lower bound on ⟨M⟩_d suffices to obtain a lower bound on K_G(d→2).
However, solving the optimisation problem ⟨M⟩_2 is even harder than ⟨M⟩_1: it is also nonlinear and nonconvex, but with continuous variables instead of binary ones.
In this section we explain how we obtain numerical upper bounds on ⟨M⟩_2 and summarise our results in <ref>.
We first rewrite <ref> by introducing the (real) coordinates of the d-dimensional vectors a_x and b_y:
⟨M⟩_d=max{∑_{x=1}^{m_1}∑_{y=1}^{m_2} M_{xy}∑_{i=1}^{d} a_{x,i}b_{y,i} | ∀ x∈[m_1], ∑_{i=1}^{d}a_{x,i}^2=1, ∀ y∈[m_2], ∑_{i=1}^{d}b_{y,i}^2=1}.
With this reformulation, we see that obtaining upper bounds on ⟨M⟩_d amounts to solving the Lasserre hierarchy of this polynomial system with d(m_1+m_2) variables and m_1+m_2 constraints at a certain level <cit.>.
At the first level, this method is used in <ref> to numerically confirm the value of ⟨A⟩_d, as the upper bound computed matches, up to numerical accuracy, our analytical lower bound.
In the following, we use it at the second level and for n=2 to numerically obtain upper bounds on ⟨M⟩_2.
Similarly to <ref>, the different values that we reach are given in <ref>.
The solution to the Lasserre hierarchy was obtained with the implementation in <cit.>.
Importantly, these results suffer from numerical imprecision.
This is all the more relevant here because of the huge size of most instances, making the use of a first-order solver, COSMO <cit.> in our case, almost mandatory, which is detrimental to the precision of the optimum returned.
This explains the deviations observed in <ref>: the fourth digit of the bound computed via the Lasserre hierarchy is not significant.
Interestingly, the icosahedron now provides us with a better bound than the cuboctahedron, whereas this was the opposite in <ref>.
The dodecahedron also gives no advantage over the icosahedron, up to numerical precision at least.
Similarly, the 600-cell and the 120-cell give very close bounds, which is in strong contrast with <ref>.
Note that the bounds presented in <ref> allow us to prove that <ref> is strict for n=2 and d=3, a fact that could not be verified with <ref>.
In this case, we indeed have that K_G(3) ≤ 1.455 from <cit.> and that K_G(2)K_G(3→2) ≥ √(2)×1.103 ≈ 1.560 from <cit.> and with the bound given in <ref>.
§.§ Exact upper bound on KG(3->2)
Here we reformulate the procedure first introduced in <cit.> and later used to obtain better upper bounds on K_G(3) in <cit.> in a way that makes the generalisation to other Grothendieck constants more transparent.
Let P∈𝒬_d^{m_1,m_2} with underlying vectors a_1,…,a_{m_1},b_1,…,b_{m_2}∈ S^{d-1} such that there exist η_A,η_B∈[0,1] for which η_A S^{d-1}⊂ conv{± a_x}_{x=1}^{m_1} and η_B S^{d-1}⊂ conv{± b_y}_{y=1}^{m_2}.
If α is such that α P∈𝒬_n^{m_1,m_2}, then
K_G(d→n) ≤ 1/(α η_A η_B).
Consider a matrix N∈𝒬_d^{m_1,m_2} and observe that the assumptions on P imply that α η_A η_B N∈𝒬_n^{m_1,m_2}.
Then, for any matrix M,
α η_A η_B ⟨M⟩_d = max_{N∈𝒬_d^{m_1,m_2}}⟨M, α η_A η_B N⟩ ≤ ⟨M⟩_n,
so that <ref> follows from the definition of K_G(d→n) in <ref>.
To obtain our bound on K_G(3→2), we start from the Gram matrix P obtained from a centrosymmetric polyhedron with 912 vertices, that is, 406 lines.
Note that this polyhedron is different from the one used in <cit.> and features a slightly higher shrinking factor through the following construction: starting from an icosahedron, apply four times the transformation adding the normalised centers of each (triangular) facet.
Taking v_0=0.8962 and running our Frank-Wolfe algorithm, we obtain a decomposition using 7886 extreme points of 𝒬_2^{406} and recovering v_0 P up to a Euclidean distance of ε≈2.7×10^{-4}.
This algorithm is strictly identical to the one in <cit.> except for the LMO, which corresponds to the case described in <ref> when n=2.
Following <cit.> we can make this decomposition purely analytical by noting that the unit ball for the Euclidean norm is contained in 𝒬_1 and therefore in 𝒬_2.
Note that, contrary to <cit.> where the extremal points of 𝒬_1 were automatically analytical as they only have ±1 elements, an extra step is needed here to make numerical extremal points of 𝒬_2 analytical.
Here we follow a rationalisation procedure very similar to the one described in <cit.> and we obtain a value α=v_0/(1+ε) that can be used in <ref> to derive the bound
K_G(3→2) ≤ 1.1233…
whose analytical expression is provided in the supplementary files <cit.>.
This upper bound is indeed entirely rigorous, contrary to the lower bounds presented above in <ref>, which rely on numerical methods that we did not convert into exact results.
§.§ Implications in Bell nonlocality
The connection between K_G(d) and Bell nonlocality was first established by Tsirelson <cit.> and was later investigated more thoroughly in <cit.>.
We refer to the appendix of <cit.> for a detailed construction of the quantum realisation of the Bell inequalities constructed in our work.
Note that the new lower bound mentioned in <ref> slightly improves on the upper bound on v_c^{Wer}=1/K_G(3), namely, it reduces it from about 0.69606 to 0.69604; see also <cit.> for the definition of this number and its interpretation as a noise robustness.
Works anticipating the connection with K_G(d→2) can be traced back to 2008 <cit.>, when Vértesi and Pál studied the realisation of Bell inequalities with real quantum mechanics and the advantage of d-dimensional quantum systems over real ones.
This continued with the work of Briët et al. <cit.> but gained traction when the powerful result by Renou et al. <cit.> showed that complex numbers are necessary in the standard quantum formalism.
We refer to <cit.> for the explicit connection; for this, the examination of <ref> should be of some help as it gives the bridge between their notations and ours.
§ DISCUSSION
We generalised the method used in <cit.> to obtain lower bounds on K_G(3) to Grothendieck constants of higher order d.
We obtained certified lower bounds on K_G(3), K_G(4), and K_G(5) beating previous ones, and setting, by monotonicity, the best known lower bounds on K_G(d) for 3≤ d≤9.
For these constants of higher orders, we also showed how our Frank-Wolfe algorithm can be used to derive putative facets, but solving the corresponding quadratic binary problems remains open.
The instances given in this article would allow immediate improvement on K_G(4), K_G(7), and K_G(8).
We expect future work to exploit their symmetry to make the computation possible.
Beyond the perhaps anecdotal improvement brought to their respective constants, developing such tools could unlock access to higher-dimensional structures with a size way larger than anything that numerical methods could ever consider solving.
This could in turn give rise to lower bounds on Grothendieck constants of finite order strong enough to compete with the best known lower bound on the Grothendieck constant (of infinite order), namely, the one presented by Davie in 1984 <cit.> and independently rediscovered by Reeds in 1991 <cit.>, both works being unpublished.
This bound reads
K_G ≥ sup_{0<λ<1} (1-ρ(λ))/max{ρ(λ), F_{ρ(λ)}(λ)} ≈ 1.676956674…,
attained for λ≈0.255730213…
where
ρ(λ)=√(2/π) λ e^{-λ^2/2} and
F_ρ(λ)=(2/π) e^{-λ^2}+ρ(1-2√(2/π)∫_λ^∞ e^{-x^2/2} dx).
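This bound is easy to evaluate numerically; the following Python lines (a simple grid search over λ written for illustration) recover the value above, expressing ∫_λ^∞ e^{-x^2/2} dx through the complementary error function.

    import math

    def rho(lam):
        return math.sqrt(2.0 / math.pi) * lam * math.exp(-lam**2 / 2.0)

    def F(lam):
        tail = math.sqrt(math.pi / 2.0) * math.erfc(lam / math.sqrt(2.0))  # integral of exp(-x^2/2) from lam to infinity
        return (2.0 / math.pi) * math.exp(-lam**2) + rho(lam) * (1.0 - 2.0 * math.sqrt(2.0 / math.pi) * tail)

    best, best_lam = 0.0, None
    for i in range(1, 200000):
        lam = i / 200000.0
        value = (1.0 - rho(lam)) / max(rho(lam), F(lam))
        if value > best:
            best, best_lam = value, lam

    print(best_lam, best)   # roughly 0.25573 and 1.6769566...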
We also extended the procedure of <cit.> to derive bounds on the generalised constants K_G(d→2).
These constants appear quite naturally when considering the advantage of quantum mechanics over real quantum mechanics.
In particular, our analytical upper bound on K_G(3→2) delivers some quantitative insight on this question, while our numerical lower bounds on K_G(d→2) could prove useful when looking for estimates of the benefit of considering higher-dimensional quantum systems.
We also note that our techniques naturally apply in the complex case, for which similar Grothendieck constants have also been defined.
However, since the complex Grothendieck constants of finite order do not have known interpretations, we refrained from deriving bounds on them, although our code is already capable of dealing with this case.
§ CODE AVAILABILITY
The code used to obtain the results presented in this article is part of the package introduced in <cit.> and based on the package <cit.>.
While the algorithm used to derive bounds on K_G(d) is directly available in the standard version of this package, the extension to the generalised constants K_G(d→n) can be found on a dedicated branch of this repository.
Installing this branch can be done directly within the package manager.
§ ACKNOWLEDGEMENTS
The authors thank
Péter Diviánszky for fruitful discussions and his help to run the branch-and-bound algorithm from <cit.> on asymmetric instances;
Jim Reeds and Neil Sloane for sharing the details of the derivation of the value 45/31 for the E_8 root system mentioned in <cit.>;
Timotej Hrga, Nathan Krislock, Thi Thai Le, and Daniel Rehfeldt for trying to solve our symmetric instances with various solvers;
Mathieu Besançon, Daniel Brosch, Wojciech Bruzda, Dardo Goyeneche, Volker Kaibel, David de Laat, Frank Valentin, and Karol Życzkowski for inspiring discussions.
This research was partially supported by the DFG Cluster of Excellence MATH+ (EXC-2046/1, project id 390685689) funded by the Deutsche Forschungsgemeinschaft (DFG).
T. V. acknowledges the support of the European Union (QuantERA eDICT) and the National Research, Development and Innovation Office NKFIH (Grants No. 2019-2.1.7-ERA-NET-2020-00003 and No. K145927).
|
http://arxiv.org/abs/2409.02412v1 | 20240904033711 | Anomalous Hall effects in magnetic weak topological insulator films | [
"Rui Chen",
"Xiao-Xia Yi",
"Bin Zhou",
"Dong-Hui Xu"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Department of Physics, Hubei University, Wuhan 430062, China
Department of Physics, Hubei University, Wuhan 430062, China
Department of Physics, Hubei University, Wuhan 430062, China
Key Laboratory of Intelligent Sensing System and Security of Ministry of Education,
Hubei University, Wuhan 430062, China
[][email protected]
Department of Physics and Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 400044, China
Center of Quantum Materials and Devices, Chongqing University, Chongqing 400044, China
§ ABSTRACT
The interplay between magnetism and strong topological insulator gives rise to distinct new topological phases and various intriguing phenomena, attracting significant attention in recent years. However, magnetic effects in weak topological insulators remain largely unexplored. In this work, we systematically investigate the magnetic effect on thin films of weak topological insulators. We focus on ferromagnetic and antiferromagnetic effects, which have been extensively studied in strong topological insulators, as well as the recently highlighted altermagnetic effect. We reveal that the interplay between magnetism and weak topological insulators leads to a variety of Hall effects in the absence of an external magnetic field, including the metallic quantum anomalous Hall effect without chiral edge states, the quantum anomalous Hall effect with a higher Hall conductance plateau, the quantized layer Hall effect, the metallic half-quantized valley-like Hall effect, and a quantized valley-like Hall effect. This work provides valuable insights for exploring magnetic effect on weak topological insulators.
Anomalous Hall effects in magnetic weak topological insulator films
Dong-Hui Xu
September 9, 2024
===================================================================
§ INTRODUCTION
Over the past two decades, topological insulators have attracted considerable attention due to their fundamental novelty and potential applications <cit.>. Unlike conventional insulators, topological insulators exhibit hallmark gapless surface Dirac states with a linear dispersion, resulting from the symmetry-protected nontrivial bulk topology. Within the topological band theory <cit.>, three-dimensional topological insulators are
classified into strong topological insulators (STIs) and weak topological insulators (WTIs), distinguished by their odd or even number of surface Dirac cones, respectively. STIs have been experimentally realized in materials like Bi_1-xSb_x <cit.>, the Bi_2Se_3 family <cit.>, and the TlBiSe_2 family <cit.>, while WTIs have emerged in ZrTe_5 <cit.>, Bi_4I_4 <cit.>, and Bi_4Br_2I_2 <cit.>.
The interplay between magnetism and STIs leads to the discovery of novel topological phases, such as the quantum anomalous Hall insulator <cit.>, axion insulator <cit.>, semi-magnetic topological insulator <cit.>, metallic quantum anomalous Hall insulator <cit.>, and half-quantum mirror Hall insulator <cit.>.
These phases have been experimentally observed in ferromagnetic topological
insulators like (Bi, Sb)_2Te_3 and the intrinsic antiferromagnetic topological insulator MnBi_2Te_4 <cit.>. However, the magnetic effect on WTIs remains an unexplored territory. Concurrently, the emergence of altermagnetism, a unique form of collinear antiferromagnetism, has attracted significant attention. This magnetic state possesses the ability to induce a spontaneous Hall effect <cit.>. Moreover, the band topology induced by altermagnetism has been found in different works <cit.>.
In this work, we investigate the effect of ferromagnetism, antiferromagnetism, and altermagnetism on a WTI film. Our research uncovers an array of intriguing Hall effects facilitated by the interplay between magnetism and WTIs. Moreover, in the magnetic WTI film, we find that the half-quantized Hall conductance is revealed through the integration of the Berry curvature over half of the Brillouin zone. This is different from the previous studies on the magnetic STI film, that the half-quantization is manifested through the integration of the Berry curvature over the whole Brillouin zone <cit.>. Below, with close reference to Fig. <ref>, we provide a summary of these closely-related topological phases in the ferromagnetic and altermagnetic systems. It is noticed that the total Hall conductance is related to the number of magnetism-induced gapped surface Dirac cones.
(i) Figure <ref>(a): a WTI film without magnetization. Each of the top and bottom surfaces hosts two gapless Dirac cones. Time-reversal symmetry ensures that the Hall conductance of this system is zero.
(ii) Figure <ref>(b): a metallic quantum anomalous Hall effect, a phenomenon that challenges conventional expectations and highlights the interplay of magnetism and the topology in WTIs. The ferromagnetism is selectively introduced to the top surface of the WTI film. The two Dirac cones on the top surface are gapped due to local time-reversal symmetry breaking caused by the magnetization. In contrast, the two Dirac cones on the bottom surface remain gapless, as this layer is not subjected to magnetization.
(iii) Figure <ref>(c): a quantum anomalous Hall effect with a higher Hall conductance plateau σ_xy=2e^2/h, resulting from contributions of e^2/h from both top and bottom surfaces. Introducing ferromagnetism to both surfaces in the WTI film gaps all four Dirac cones. This scenario can be regarded as a double version of Case (ii), but it features two pairs of gapless chiral edge states within the insulating gap.
(iv) Figure <ref>(d): a quantized layer Hall effect with the top and bottom surfaces possessing quantized Hall conductances of opposite sign. Introducing ferromagnetism to both surfaces, with oppositely oriented magnetic moments, gaps all four Dirac cones. However, the two Dirac cones on the bottom surface acquire an opposite Dirac mass compared to those on the top.
(v) Figure <ref>(e): a metallic half-quantized valley-like Hall effect with a half-quantized Hall conductance e^2/2h or -e^2/2h distributed across different halves of the Brillouin zone. This effect is protected by the C_4T symmetry, where C_4 is the 4-fold rotational symmetry and T is time-reversal symmetry. Due to altermagnetism, the two Dirac cones at X and Y points on the top surface gain opposite masses, while those on the bottom surface remain gapless due to the lack of magnetization.
(vi) Figure <ref>(f): a quantized valley-like Hall effect with a quantized Hall conductance e^2/2h or -e^2/2h distributed across different halves of the Brillouin zone. This effect is protected by the C_4T symmetry. The two Dirac cones at X and X' points, as well as those at Y and Y' points gain opposite masses due to the altermagnetism. This scenario can be regarded as a double version of Case (v), but with two pairs of counter-propagating gapless chiral edge states within the insulating gap.
Additionally, we explore two distinct scenarios showcasing the antiferromagnetic effect depicted in Figs. <ref>(a) and <ref>(b). We discover that these configurations lead to two distinct phenomena: a quantum anomalous Hall effect characterized by a higher Hall conductance plateau and a quantized layer Hall effect.
§ MODEL AND METHOD
§.§ Model
We start with a generic four-band topological insulator model defined on a cubic lattice <cit.>
H=H_0(k)+H_fm(z)+H_am(k_x,k_y,z)+H_afm(z),
H_0(k)=ℳ(k) Γ_4+A (Γ_1 sin k_x+Γ_2 sin k_y+Γ_3 sin k_z),
where ℳ(k)=M_0+6 B-2 B ∑_i cos k_i, Γ_1=σ_1 τ_1, Γ_2=σ_2 τ_1, Γ_3=σ_3 τ_1, and Γ_4= σ_0 τ_3. Here M_0, B and A are model dependent parameters. σ and τ are Pauli matrices representing spin and orbital degrees of freedom, respectively.
In the absence of magnetization [i.e., H_fm=0, H_am=0, and H_afm=0], H_0(k) represents a trivial insulator when M_0>0 or M_0<-12 B. It depicts an STI phase with the Z_2 index (1 ; 000) when 0>M_0>-4 B, or an STI phase with (1 ; 111) when -8 B>M_0>-12 B. It corresponds to a WTI phase with (0 ; 111) when -4 B>M_0>-8 B <cit.>. This four-band model, originally proposed by Zhang et al., has been successfully used to describe the Bi_2Se_3 family of STIs <cit.>. As our primary focus is on the WTI phase, we concentrate on the parameter regime -4 B>M_0>-8 B.
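As a quick numerical check of these regimes, the Python sketch below (ours; it assumes the Γ matrices are realised as Kronecker products in the order σ⊗τ) builds the 4×4 Bloch Hamiltonian H_0(k) with the parameters A=B=1 and M_0=-6 used later, together with the ferromagnetic exchange term, and prints the band energies at two momenta.

    import numpy as np

    s0 = np.eye(2, dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    # Gamma matrices under the assumed ordering sigma (spin) tensor tau (orbital).
    G1, G2, G3, G4 = (np.kron(sx, sx), np.kron(sy, sx),
                      np.kron(sz, sx), np.kron(s0, sz))

    def h0(k, M0=-6.0, A=1.0, B=1.0):
        kx, ky, kz = k
        Mk = M0 + 6.0 * B - 2.0 * B * (np.cos(kx) + np.cos(ky) + np.cos(kz))
        return Mk * G4 + A * (np.sin(kx) * G1 + np.sin(ky) * G2 + np.sin(kz) * G3)

    def h_fm(m):
        return m * np.kron(sz, s0)   # ferromagnetic exchange term m * sigma_3 tau_0

    print(np.linalg.eigvalsh(h0((0.0, 0.0, 0.0))))        # gap 2|Mk| = 12 at Gamma (Mk = -6)
    print(np.linalg.eigvalsh(h0((np.pi, 0.0, 0.0))))      # gap 4 at X (Mk = -2); Mk < 0 at Gamma and the three X points
    print(np.linalg.eigvalsh(h0((np.pi, 0.0, 0.0)) + h_fm(0.6)))   # exchange m = 0.6 shifts the bands by +/- m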
The last three terms of the Hamiltonian in Eq. <ref>, H_fm(z), H_am(k_x,k_y,z), and H_afm(z) depict the layer-dependent effect of ferromagnetism, altermagnetism, and antiferromagnetism, respectively.
Specifically, the three types of magnetization have the following form
H_fm(z) =m(z)σ_3 τ_0,
H_am(k_x,k_y,z) =m'(z)(cos k_x-cos k_y)σ_3 τ_0,
H_afm(z) =m”(-1)^zσ_3 τ_0.
Ferromagnetism can be induced through magnetic doping <cit.>. Equation (<ref>) represents a d-wave altermagnetic ordering <cit.>, which can be achieved via the proximity effect. Equation (<ref>) depicts the antiferromagnetic ordering, where neighboring layers possess opposite magnetizations. The amplitude of the magnetization within each layer is m”.
The ferromagnetic term m(z) and the altermagnetic term m'(z) are given by
m(z) = m_t for z=1,2,3, m_b for z=n_z-2,n_z-1,n_z, and 0 elsewhere,
and
m'(z) = m_t' for z=1,2,3, m_b' for z=n_z-2,n_z-1,n_z, and 0 elsewhere.
This is more clearly illustrated in Fig. <ref>, where ferromagnetic and altermagnetic effects are confined solely to the top and bottom surface layers.
In the following content, we first introduce the nonmagnetic system in Sec. <ref>, then focus on the ferromagnetic and altermagnetic effects in Sec. <ref>, and finally discuss the antiferromagnetic effect in Sec. <ref>.
§.§ Method
The ferromagnetic and antiferromagnetic systems are characterized by the total Hall conductance
σ_xy=∑_k_x,k_y,zσ_xy(k_x,k_y,z),
and the layer Hall conductance
σ_xy(z)=∑_k_x,k_yσ_xy(k_x,k_y,z).
In Appendix <ref>, we provide more details for calculating the layer- and momentum-resolved Hall conductance σ_xy(k_x,k_y,z) <cit.>. Here, the layer Hall conductance σ_xy(z) counts the contributions to the total Hall conductance from the z-th layer. The total Hall conductance σ_xy counts the contribution to the Hall conductance from all layers, i.e., σ_xy=∑_zσ_xy(z). Moreover, we define
σ_xy^t=∑_z=n_z-3^z=n_zσ_xy(z),
σ_xy^b=∑_z=1^z=4σ_xy(z),
which count the contributions to the Hall conductance from the top and bottom surface layers, respectively.
On the other hand, we adopt a momentum-resolved Hall conductance to characterize the altermagnetic system, with
σ_xy^'(θ)=∑_k_x,k_y∈ P(θ),zσ_xy(k_x,k_y,z),
where P(θ) is the area enclosed by the green lines with a θ angle in Fig. <ref>(a).
§ NONMAGNETIC SYSTEM
As depicted schematically in Fig. <ref>(a), the nonmagnetic system corresponds to a WTI film with four gapless Dirac cones. Two of these Dirac cones are situated at X and Y points of the top surface Brillouin zone, while the other two are located at X^' and Y^' points of the bottom surface Brillouin zone. This is further clarified in Figs. <ref>(a) and <ref>(b), which show the local density of state (LDOS) of the nonmagnetic surface. The Hall conductance of the system is zero due to the existence of time-reversal symmetry T=I_n_zσ_y τ_0 K, where I_n_z is the identity matrix and K denotes the complex conjugation.
§ FERROMAGNETIC AND ALTERMAGNETIC SYSTEMS
In this section, we consider five different scenarios for the ferromagnetic system [Figs. <ref>(b)-<ref>(d)] and the altermagnetic systems [Figs. <ref>(e) and <ref>(f)]. In the numerical calculations, the coefficients are given in the following table:
        (i)   (ii)  (iii)  (iv)  (v)   (vi)
m_t     0     V     V      V     0     0
m_b     0     0     V      -V    0     0
m_t'    0     0     0      0     V     V
m_b'    0     0     0      0     0     V
The other parameters are taken as A=1, B=1, M=-6, and V=0.6.
§.§ Metallic quantum anomalous Hall
effect without chiral edge states
Let us first consider the simplest case where the ferromagnetization is introduced only to the top surface layers of the WTI film [Fig. <ref>(b)]. The Dirac cones on the top surface are gapped out due to the breaking of local time-reversal symmetry induced by the magnetization [Figs. <ref>(b) and <ref>(c)]. The gapless Dirac cones on the bottom surface remain due to the absence of local magnetization [Figs. <ref>(b) and <ref>(b)].
Figure <ref>(a) shows the numerically calculated Hall conductance σ_xy and the DOS as functions of the Fermi energy E_F. The system exhibits a quantized anomalous Hall conductance (the blue circle line) when the Fermi energy is inside the magnetic gap of the top surface (|E_F|<V). However, it is important to note that the system differs from conventional quantum anomalous Hall insulators, where the quantized Hall conductance only emerges when the Fermi energy resides within the insulating gap. In contrast, the present system lacks an insulating energy gap, as revealed by the numerical results of the DOS (the red circle line). Moreover, the DOS exhibits a linear increase as the Fermi energy increases, indicating its origin from the 2D gapless Dirac cones on the bottom surface.
Figure <ref>(c) shows the longitudinal conductance σ_xx as a function of the Fermi energy E_F, numerically obtained using the Kubo-Greenwood formula expressed in terms of the Chebyshev polynomials <cit.> (see Appendix <ref>). The system manifests a metallic quantum anomalous Hall effect, distinguished by a quantized Hall conductance and a nonvanishing longitudinal conductance. Figure <ref>(e) displays the layer Hall conductance as a function of layer index z with E_F=0. We find that σ_xy^t=0.9998e^2/h and σ_xy^b=0.0001e^2/h [see Eq. (<ref>)]. The slight deviation from perfect quantization arises from the quantum confinement effect. These results further confirm that the quantized Hall conductance originates from the ferromagnetic top surface.
Very recently, the metallic quantum anomalous Hall effect has been proposed in an STI film with a ferromagnetic layer at interior layers <cit.>. Our work presents an alternative approach to realize the metallic quantum anomalous Hall effect. Moreover, the metallic quantum anomalous Hall effect can be regarded as a double version of the semi-magnetic topological insulator, which has been experimentally detected using the traditional six-terminal Hall-bar measurement and is characterized by a half-quantized Hall conductance and a nonvanishing longitudinal conductance <cit.>. Thus, we believe that the metallic quantum anomalous Hall effect in our work could be observed using a similar experimental setup.
§.§ Quantum anomalous Hall
effect with a higher plateau
Here we consider the case shown in Fig. <ref>(c), with parallel magnetization alignment on the top and bottom surfaces. All the surface Dirac cones are gapped out due to the magnetic effect, and the top and bottom surfaces share the same LDOS shown in Fig. <ref>(c). The system hosts a global energy gap characterized by a vanishing DOS when |E_F|<V [see the red circle line in Fig. <ref>(b)]. This energy gap features a higher quantized Hall conductance σ_xy=2e^2/h [the blue circle line in Fig. <ref>(b)]. Consequently, the system is identified as a quantum anomalous Hall insulator. The insulating nature of the system is further confirmed by calculating the longitudinal conductance σ_xx [Fig. <ref>(d)], which remains zero as long as the chemical potential resides within the energy gap. Analysis of the layer Hall conductance reveals that both the top and bottom surfaces contribute a nearly quantized Hall conductance [Fig. <ref>(f)], with σ_xy^t=σ_xy^b=0.9998e^2/h. In experiments, the quantum anomalous Hall effect can be detected using the traditional six-terminal Hall-bar measurement.
§.§ Quantized layer Hall effect
Now we consider the case shown in Fig. <ref>(d), where the magnetizations on the top and bottom surfaces are antiparallel. All the surface Dirac cones are gapped due to the magnetic effect, and the top and bottom surfaces share the same LDOS as shown in Fig. <ref>(c). It is noticed that the top and bottom surfaces gain an opposite Dirac mass due to the antiparallel magnetization alignments on the top and bottom surfaces [labeled by the red and blue gapped Dirac cones in Fig. <ref>(d)].
The breaking of local time-reversal symmetry results in a nonzero layer Hall conductance as shown in Fig. <ref>(a). Moreover, the layers connected by the
PT symmetry are exactly compensated, leading to a zero net Hall conductance. Here, P=Mσ_0τ_z depicts the inversion symmetry, and M is the orthogonal matrix that permutes the layers of the entire system perpendicularly. Figure <ref>(b) shows σ_xy^t/b and the total Hall conductance σ_xy as functions of the chemical potential E_F [see the dashed lines]. Both the top and bottom surfaces exhibit a quantized surface Hall conductance but with an opposite sign, establishing the quantized layer Hall effect.
The quantized layer Hall effect can be viewed as a double version of the axion insulator, which has been proposed for experimental detection by various approaches <cit.>. One of the most direct approaches involves using the traditional transport measurement by applying an external magnetic field [Figs. <ref>(a)-<ref>(c)] <cit.>. By varying the strength and direction of the external magnetic field, the Hall conductance of the system should fluctuate between -2e^2/h, 0, and 2e^2/h. The quantized layer Hall effect is characterized by the intermediate zero plateau, while the quantum anomalous Hall effect mentioned in Sec. <ref> is characterized by the quantized Hall conductance ± 2e^2/h.
Another method to detect the quantized layer Hall effect is by applying an external perpendicular electric field [Figs. <ref>(d)-<ref>(f)] <cit.>, which breaks the PT symmetry and induces an energy offset between the plateaus of the surface Hall conductance [Fig. <ref>(b)]. The electric-field-induced Hall conductance has opposite signs for opposite Fermi energies and its amplitude increases with the increasing strength of the field [Fig. <ref>(c)]. For moderate film thickness and strength of the external electric field, we find that the emergent Hall conductance can approximate the quantized value of e^2/h [Fig. <ref>(d)]. These signatures provide evidence for observing the quantized layer Hall effect in the ferromagnetic WTI films.
§.§ Metallic half-quantized valley-like Hall effect
We consider the altermagnetic case, which can be achieved by proximity-coupling the WTI to an altermagnetic material. The system breaks time-reversal symmetry but preserves the combined C_4 T symmetry, where C_4=I_n_zexp(-iσ_zτ_0π/4)R_4 and R_4 is the rotational matrix with R_4 (k_x,k_y)=(k_y,-k_x). The C_4 T symmetry requires that
Ω(k)=-Ω(R_4k),
which guarantees a zero total Hall conductance.
Figure <ref>(a) shows the Berry curvature of the altermagnetic system as a function of k_x and k_y. Our numerical results align with Eq. (<ref>), where the Berry curvature Ω(k) is opposite in sign to Ω(R_4k).
The blue curve in Fig. <ref>(c) shows the Hall conductance σ_xy^'(θ) as a function of θ, where σ_xy^'(θ) is the total Hall conductance contributed by the k points within the green cone area shown in Fig. <ref>(a). We observe that half of the Brillouin zone contributes to a half-quantized Hall conductance. Consequently, the other half of the Brillouin zone must contribute another half-quantized Hall conductance with the opposite sign due to the C_4 T symmetry. Moreover, similar to the situation discussed in Sec. <ref>, the bottom surface is gapless, making the system a metal. Therefore, we identify this system as a metallic half-quantized valley-like Hall effect.
For a comparative study, we also examine the Berry curvature distribution of the ferromagnetic case [see Sec. <ref> and Fig <ref>(b)]. The ferromagnetic system is protected by the C_4 symmetry and requires that Ω(k)=Ω(R_4k). Thus, each half of the Brillouin zone will contribute a half-quantized Hall conductance, resulting in a quantized Hall conductance overall. This implies that each gapped Dirac cone is associated with a half-quantized Hall conductance, and the half-quantization is manifested through half of the Brillouin zone in the magnetic WTI film. This scenario differs from previous studies on magnetic STI films, where the half-quantization in the semimagnetic topological insulator phase is manifested through the integration of the Berry curvature over the entire Brillouin zone <cit.>.
Furthermore, Figure <ref>(d) shows σ_xy^'(θ) as a function of θ, in a system where both the ferromagnetic and altermagnetic effects coexist. We find that the relationship σ_xy(π/2)= 0.5 is stable by varying the strength of ferromagnetic ordering.
§.§ Quantized valley-like Hall effect
Now we consider the case shown in Fig. <ref>(f), where the altermagnetic effect is introduced to both the top and bottom surface layers. The situation in Fig. <ref>(f) closely resembles that in Fig. <ref>(e). However, there are three key distinctions: (1) The Berry curvature and the corresponding Hall conductance double. In other words, half of the Brillouin zone contributes a quantized Hall conductance, and the other half contributes another quantized Hall conductance with the opposite sign; (2) the system is an insulator because both the top and bottom surfaces are gapped; and (3) the system hosts a chiral edge state inside the insulating surface gap [see Fig. <ref> and Appendix <ref>]. Consequently, the system is identified as a quantized valley-like Hall effect.
§ ANTIFERROMAGNETIC SYSTEM
In this section, we investigate the antiferromagnetic effect on WTI films, focusing on the two scenarios depicted in Figs. <ref>(a) and <ref>(b). In the presence of antiferromagnetism, the Dirac cones on the top and bottom surfaces acquire local topological masses with the same sign for odd layers [Fig. <ref>(a)] and the opposite signs for even layers [Fig. <ref>(b)]. Figures <ref>(c) and <ref>(d) illustrate the layer Hall conductance σ_xy(z) as a function of the layer index z. Each magnetic layer exhibits a nonzero layer Hall conductance with its sign depending on the direction of its magnetic moment. Additionally, the layer Hall conductance is enhanced at the surface layers due to the surface Dirac cones. In the odd-layer system, the total Hall conductance is 2e^2/h due to the breaking of time-reversal symmetry. In the even-layer system, the total Hall conductance is always zero protected by the PT symmetry.
To mitigate the oscillatory behavior of the bulk layers, we define the renormalized Hall conductance <cit.> as
σ̃_xy(z)=[σ_xy(z-1)+σ_xy(z)]/2,
where z=2,⋯,n_z, σ̃_xy(1)=σ_xy(1)/2, and σ̃_xy(n_z+1)=σ_xy(n_z)/2.
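A minimal helper implementing this sliding-window average, under the reading σ̃_xy(z)=[σ_xy(z-1)+σ_xy(z)]/2 adopted above, could look as follows (illustrative Python, not the code used for the figures):

    import numpy as np

    def renormalised_layer_hall(sigma):
        """Map layer Hall conductances sigma_xy(1..n_z) to sigma_tilde_xy(1..n_z+1)."""
        s = np.asarray(sigma, dtype=float)
        out = np.empty(s.size + 1)
        out[0] = s[0] / 2.0                   # sigma_tilde(1)
        out[-1] = s[-1] / 2.0                 # sigma_tilde(n_z + 1)
        out[1:-1] = (s[:-1] + s[1:]) / 2.0    # sigma_tilde(z) for z = 2, ..., n_z
        return out                            # out.sum() equals s.sum()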
Figures <ref>(e) and <ref>(f) show the renormalized layer Hall conductance σ̃_xy(z) as a function of layer index z. The Hall conductance contributed by the bulk layers is compensated to zero due to the local PT symmetry at the bulk. Moreover, we obtain ∑_z=1^z=3σ̃_xy(z)=0.9980 e^2/h for the top surface layers of both systems, and ∑_z=n_z-2^z=n_zσ̃_xy(z)=(-) 0.9980 e^2/h for the odd-layer (even-layer) systems. By adopting the renormalized layer Hall conductance, we extract the quantized surface Hall conductance from the bulk layers. This indicates that the system in Fig. <ref>(a) realizes a quantum anomalous Hall effect with a higher Hall conductance plateau, while the system in Fig. <ref>(b) realizes a quantized layer Hall effect.
§ CONCLUSION
In this study, we investigate the magnetic effects on films of WTIs. Our findings demonstrate that the interplay between magnetism and WTIs gives rise to a diverse range of anomalous Hall effects, including the metallic quantum anomalous Hall effect without chiral edge states, the quantum anomalous Hall effect with a higher Hall conductance plateau, the quantized layer Hall effect, the metallic half-quantized valley-like Hall effect, and the quantized valley-like Hall effect. Notably, the quantized layer Hall effect, metallic half-quantized valley-like Hall effect, and quantized valley-like Hall effect are not observed in magnetic STIs.
D.-H.X. was supported by the NSFC (under Grant Nos. 12074108, 12474151 and 12347101), the Natural Science Foundation of Chongqing (Grant No. CSTB2022NSCQ-MSX0568). R.C. acknowledges the support of the NSFC (under Grant No. 12304195) and the Chutian Scholars Program in Hubei Province. B.Z. was supported by the NSFC (under Grant No. 12074107), the program of outstanding young and middle-aged scientific and technological innovation team of colleges and universities in Hubei Province (under Grant No. T2020001) and the innovation group project of the natural science foundation of Hubei Province of China (under Grant No. 2022CFA012).
Note-added.—
Recently, we became aware of a complementary study, which focus on a two-band two-dimensional topological semimetal, where the Brillouin zone is divided into two patches characterized by half-quantized Berry curvature fluxes with opposite signs due to the presence of the C_4z T magnetic symmetry <cit.>.
§ HALL CONDUCTANCE
In the calculations, the Hall conductances in Eqs. (<ref>)-(<ref>) are calculated from the following expression <cit.>:
σ_xy(k_x,k_y,z)=-(4 π e^2/(N_k S h)) Im∑_{v,v',c} X_{vc} Y_{v'c}^† ρ_{vv'}(z),
which corresponds to the contribution to the total Hall conductance σ_xy from the z-th layer at a certain (k_x,k_y) point. The matrix element of the position operator along the x or y direction is denoted as X(Y)_{vc} =⟨ψ_v |x(y)| ψ_c ⟩= ⟨ψ_v |i ħ v_x(v_y)| ψ_c ⟩/(E_c -E_v), which is related to the energy difference between the conduction and valence bands E_c -E_v. The indices v and c represent the valence and conduction bands. ρ_{vv'}(z) is the projection matrix onto the corresponding z-th layer, which implies a summation over all orbitals v, v', c belonging to that layer. N_k represents the number of k-points and S represents the unit cell area.
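For orientation, a compact, non-layer-resolved version of this Kubo formula, which sums the Berry curvature of the occupied bands of a generic two-dimensional Bloch Hamiltonian over a uniform k-grid, can be sketched in Python as follows; here hk, vx and vy are user-supplied callables returning H(k), ∂H/∂k_x and ∂H/∂k_y, and the result is in units of e^2/h. This is a generic illustration, not the layer-resolved implementation used for the figures.

    import numpy as np

    def hall_conductance(hk, vx, vy, nk=100, ef=0.0):
        """sigma_xy in units of e^2/h for a 2D Bloch Hamiltonian (TKNN formula)."""
        ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
        total = 0.0
        for kx in ks:
            for ky in ks:
                E, U = np.linalg.eigh(hk(kx, ky))
                Vx = U.conj().T @ vx(kx, ky) @ U
                Vy = U.conj().T @ vy(kx, ky) @ U
                occ = np.where(E < ef)[0]
                emp = np.where(E >= ef)[0]
                for v in occ:
                    for c in emp:
                        total += -2.0 * np.imag(Vx[v, c] * Vy[c, v]) / (E[c] - E[v]) ** 2
        return 2.0 * np.pi * total / (nk * nk)   # Chern number of the occupied bands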
§ LONGITUDINAL CONDUCTANCE
The longitudinal conductance is obtained by using the Kubo-Greenwood formula expressed in terms of Chebyshev polynomials <cit.>. At zero temperature, it can be expressed as <cit.>
σ_xx(Ẽ_F)=(4 e^2 ħ/(π S))∑_{m, n ≤ m}μ_{nm}^{xx} T_n(Ẽ_F) T_m(Ẽ_F),
where
μ_{mn}^{xx}≡ (g_m g_n/((1+δ_{n0})(1+δ_{m0}))) Tr[v_x T_m(H̃) v_x T_n(H̃)],
and
g_m^J=[(M-m+1) cos(π m/(M+1))+sin(π m/(M+1)) cot(π/(M+1))]/(M+1)
is the Jackson kernel, and T_m(x)=cos(m arccos x) is the Chebyshev polynomial of the first kind. H̃ is the rescaled Hamiltonian, so that its spectrum is contained in the interval [-1,1].
In the numerical calculation for the longitudinal conductance, the system size is taken as 400× 400, the moment M=1000, the disorder strength W=0.1 and the results are obtained after averaging on 20 independent disorder configurations.
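The Jackson damping factors themselves are straightforward to generate; the short Python function below (our own, following the expression above with M the number of moments) returns g_0,…,g_{M-1}.

    import numpy as np

    def jackson_kernel(num_moments):
        """Jackson damping factors g_m for m = 0, ..., num_moments - 1."""
        M = num_moments
        m = np.arange(M)
        q = np.pi / (M + 1)
        return ((M - m + 1) * np.cos(q * m) + np.sin(q * m) / np.tan(q)) / (M + 1)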
§ SPECTRUM WITH OPEN BOUNDARY CONDITIONS ALONG THE Y- AND Z-DIRECTIONS
In addition, we plot the energy spectrum of the six systems, with periodic boundary condition along the x-direction and open boundary conditions along the y- and z-directions. For Case (i) shown in Fig. <ref>(a), there is no chiral current due to the absence of magnetism. The systems in Cases (ii) and (iii) break the T symmetry, which allow chiral current propagating along the same direction near k_x=0 and k_x=π points. In Case (iv), each band is double degenerate and there is no net chiral current for the degenerate bands due to the PT symmetry. In Cases (v) and (vi), there appear chiral currents propagating along the opposite directions k_x=0 and k_x=π points, which results in a vanishing net chiral current.
Furthermore, it is noticed that the energy gaps between the edge states in Figs. <ref>(b) and <ref>(e) are attributed to the quantum confinement effect. For a slab system with a large enough n_y, the system corresponds to a metal due to the gapless Dirac cone on the bottom surface layers. The situation is different from the case in Figs. <ref>(c) and <ref>(f), with an insulating bulk even in the large n_y limit.
|
http://arxiv.org/abs/2409.02455v1 | 20240904053600 | An Effective Tag Assignment Approach for Billboard Advertisement | [
"Dildar Ali",
"Harishchandra Kumar",
"Suman Banerjee",
"Yamuna Prasad"
] | cs.DS | [
"cs.DS",
"cs.IR"
] |
Indian Institute of Technology Jammu,
J & K-181221, India. National Institute of Technology Raipur, Chhattisgarh, India
[email protected], [email protected], [email protected], [email protected]
An Effective Tag Assignment Approach for Billboard Advertisement
Dildar Ali1 Harishchandra Kumar2 Suman Banerjee1 Yamuna Prasad1
===================================================================
§ ABSTRACT
Billboard Advertisement has gained popularity due to its significant return on investment. To make this advertisement approach more effective, the relevant information about the product needs to reach the relevant set of people. This can be achieved if the relevant set of tags can be mapped to the correct slots. Formally, we call this problem the Tag Assignment Problem in Billboard Advertisement. Given a trajectory database, a billboard database, and a set of selected billboard slots and tags, this problem asks to output a mapping of the selected tags to the selected slots so that the influence is maximized. We model this as a variant of traditional bipartite matching called One-To-Many Bipartite Matching (OMBM). Unlike traditional bipartite matching, in the OMBM a tag can be assigned to multiple slots, while a slot cannot receive more than one tag. We propose an iterative solution approach that incrementally allocates the tags to the slots. The proposed methodology is explained with an illustrative example, and a complexity analysis of the proposed solution approach is also provided. The experimental results on real-world trajectory and billboard datasets demonstrate the effectiveness and efficiency of the proposed solution.
§ INTRODUCTION
In the past few years, advertisement has become a central goal for any E-commerce house as it gives a way to convince people about their products or different social and political events, and because of this, their products and ticket sales increase, i.e., revenue increases. The existing literature reported that around 7 - 10% of total revenue is utilized by an E-commerce house for advertising purposes[<https://www.lamar.com/howtoadvertise/Research/>]. Among the several advertising techniques (e.g., newspapers, social networks, television, billboards, etc), billboard advertising has emerged as an effective advertising technique as it is easy to place advertisements and ensures more investment returns[<https://www.thebusinessresearchcompany.com/report/billboard-and-outdoor-advertising-global-market-report>]. Billboard Advertisement is a technique for creating a promotional advertisement for products, events, etc., with the hope that if the content is displayed to a relevant group of people, a subset of them will be influenced, and they might buy the product or attend the event. This will help increase revenue or promote the event. The advertisement content is formalized as tags, and different tags are relevant to different sets of people. So, the tag plays an important role in determining the influence of a billboard. In real-life scenarios, billboard advertisements require the selection of slots and tags, and although the influence value depends on billboard slots and tags, the literature is limited. However, a few studies have considered using tags to maximize their influence on social networks <cit.>. Ke at al.<cit.> first studied the problem of finding ℓ many tags and k many users in a social network to maximize the influence. In the context of the billboard advertisement, Ali et al. <cit.> first introduce the problem of finding the most influential billboard slots and tags. Knowing which tag is important to which slot to maximize the influence is essential in practical scenarios. This is the key problem addressed in this paper.
Motivation.
To make an advertisement effective, it is important to adopt a suitable approach such that the information reaches the group of relevant people. In real life, different tags are relevant to various categories of people. Consider a political party running a campaign for an upcoming election and proposing different projects in their election manifesto for different categories of people. For example, this might include doubling the farmer's income in the next three years and a certain percentage tax benefit to the industrialists. Now, the tags relevant to the benefits of the farmers must be displayed on the billboard slots where the audience is farmers. So, to address the real-life scenario study of Tag Assignment Problem in Billboard Advertisement is essential.
Our Contribution.
In billboard advertising, the influence varies greatly depending on which tags are displayed in which slots. Studies like <cit.> have explored finding the top-k influential billboard slots using various techniques. Recently, Ali et al. <cit.> proposed methods to identify influential slots and tags jointly. However, no existing literature specifically addresses the tag assignment problem in billboard advertisements. This paper tackles this issue and proposes an iterative approach for incrementally allocating tags to slots.
In particular, we make the following contributions in this paper:
* We model the Tag Assignment Problem as a two-sided matching problem and show that this problem is NP-hard.
* We introduce One-to-Many Bipartite Matching and propose an iterative solution for the tag assignment problem.
* We analyze the time and space requirements of the proposed solution and provide an approximation guarantee.
* The proposed approaches have been implemented with real-world trajectory and billboard datasets to highlight their effectiveness and efficiency.
The rest of the paper is organized as follows. Section <ref> describes the required preliminary concepts and defines our problem formally. Section <ref> contains the proposed solution approaches with a detailed analysis. The experimental evaluation is described in Section <ref>. Finally, Section <ref> concludes our study and gives directions for future research.
§ PRELIMINARIES AND PROBLEM DEFINITION
This section describes the preliminary concepts and formally defines our problem. Initially, we start by explaining the billboard advertisement and its procedure.
§.§ Billboard Advertisement
A trajectory database contains the location information of trajectories of a particular city, and it can be stated in Definition <ref>.
The trajectory database 𝒟 is a collection of tuples of the form <𝒰_id, 𝒰_loc, [t_1,t_2]>, where 𝒰_id, 𝒰_loc and [t_1,t_2] represent the unique user id, the user location, and the time stamp, respectively.
For example, the tuple <𝒰_9, Kolkata_Airport, [200, 300]> denotes that user 𝒰_9 was at Kolkata_Airport during the interval [200, 300]. Next, we define the billboard database in Definition <ref>.
A billboard database can be defined in the form of a tuple <b_id, b_loc, b_cost>, where b_id, b_loc and b_cost denotes a unique billboard id, billboard location, and billboard cost, respectively.
Assume a set of digital billboards ℬ={b_1, b_2, …, b_m} owned by an influence provider is placed across a city and run for the duration [T_1, T_2]. Different commercial houses can hire these billboards slot-wise to show their advertisement content. Now, we define the notion of a billboard slot in Definition <ref>.
A billboard slot is denoted by a tuple of the form (b_j, [t,t+Δ]) where b_j ∈ℬ and t ∈{T_1, T_1 + Δ + 1, …, T_2- Δ-1}. Here, Δ denotes the duration of each slot.
It can be observed that for each billboard, the number of slots associated with it will be (T_2-T_1)/Δ [Here, we assume that Δ perfectly divides T_2-T_1.]. There are m many billboards (i.e., |ℬ|=m); hence, the total number of slots, denoted as ℬ𝒮, is m · (T_2-T_1)/Δ. Depending on the available budget, an e-commerce house hires slots weekly or monthly to fulfill its influence demand. Next, we describe the influence function in Section <ref>.
§.§ Influence Function
Normally, e-commerce companies approach the influence provider, who owns multiple billboard slots, to advertise their products and maximize the influence of their products. Now, one question arises: How is the influence of a billboard slot calculated? We state this in Definition <ref>.
Given a subset of billboard slots 𝒮⊆ℬ𝒮, the influence of 𝒮 is denoted as ℐ(𝒮) and defined as the sum, over the individual users in the trajectory database, of their probabilities of being influenced by 𝒮.
ℐ(𝒮) = ∑_u ∈𝒟 [1 - ∏_b ∈𝒮 (1 - Pr(u,b))]
Here, the influence function ℐ() maps each subset of billboard slots to its corresponding influence value, i.e., ℐ: 2^ℬ𝒮⟶ℝ^+_0 and ℐ(∅) = 0. The influence function defined in Equation <ref> is widely used in the existing advertising literature <cit.> also.
The Influence function ℐ() is non-negative, monotone and submodular.
As mentioned previously, when one person views an advertisement running on a billboard slot, he is influenced by a certain probability. Most existing literature <cit.> does not consider that the probability value also depends on the running advertisement content (e.g., tag).
Therefore, we have considered tags to be an important factor in this paper. Let 𝒯_u_i be the set of tags associated with user u_i; the set of tags relevant to the users in 𝒰 as a whole can be denoted as 𝒯 = ⋃_u_i ∈𝒰 𝒯_u_i. Now, for every u ∈𝒰 and t ∈𝒯, the influence probability can be denoted as Pr(u|t), and it is defined in Definition <ref>.
Given a subset of tags 𝒯^'⊆𝒯 and any user u ∈𝒰, the tag-specific influence probability can be defined in Equation <ref>.
Pr(u|𝒯^') = 1 - ∏_t ∈𝒯^' (1 - Pr(u|t))
Now, to find out the impact of tags in billboard slots, we define the Tag-specific influence of a billboard slot in Definition <ref>.
Given a subset of billboard slots 𝒮⊆ℬ𝒮 and a subset of tags 𝒯^'⊆𝒯, the tags-specific influence of 𝒮 is denoted by ℐ(𝒮 | 𝒯^') and defined using Equation No. <ref>.
ℐ(𝒮|𝒯^') = ∑_u ∈𝒰 [1 - ∏_b ∈𝒮 (1 - Pr(u,b|𝒯^'))]
Here, the influence function ℐ() is a combined function that maps each tag and billboard slot to its corresponding influence value, i.e., ℐ: 2^𝒯× 2^ℬ𝒮⟶ℝ^+_0. Next, we describe the bipartite matching in Section <ref>.
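For concreteness, the following Python sketch evaluates the influence expressions above on a toy instance; the data layout (prob[u][b] holding Pr(u,b) and tag_prob[u][t] holding Pr(u|t)) and all names are our own illustration rather than part of the original formulation.

def prob_user_given_tags(u, tags, tag_prob):
    # Pr(u | T') = 1 - prod_{t in T'} (1 - Pr(u | t))
    miss = 1.0
    for t in tags:
        miss *= 1.0 - tag_prob[u].get(t, 0.0)
    return 1.0 - miss

def influence(slots, users, prob):
    # I(S) = sum_{u} [ 1 - prod_{b in S} (1 - Pr(u, b)) ]
    total = 0.0
    for u in users:
        miss = 1.0
        for b in slots:
            miss *= 1.0 - prob[u].get(b, 0.0)
        total += 1.0 - miss
    return total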
§.§ Bipartite Matching
In this work, we formulate our problem as a bipartite matching problem and state this problem in Section <ref>. Next, we define bipartite matching in Definition <ref>.
A matching in a bipartite graph is a set of edges ℰ_b⊆ E no two of which share a common vertex <cit.>.
In traditional bipartite matching <cit.>, each tag is allocated to exactly one billboard slot, providing an optimal solution in polynomial time (𝒪((n+m)^3) via the Hungarian Method) <cit.>. However, this doesn't suit our problem, as we need to allocate a single tag to multiple billboard slots, with each slot associated with only one tag. To address this, we introduce an iterative approach, formulating it as a one-to-many bipartite matching problem, defined in Definition <ref>.
A matching is said to be a one-to-many bipartite matching if it contains a set of edges ℰ_b⊆ E that may share a common vertex u ∈𝒰, whereas each vertex v ∈𝒱 can be connected with at most one vertex in 𝒰 via an edge e ∈ℰ_b <cit.>.
§.§ Problem Definition
This section defines the Tag Allocation Problem formally. The inputs to this problem are a trajectory database, a billboard database, and a set of selected slots (𝒮) and tags (𝒯^'), and the goal is to allocate the tags to the slots such that the influence is maximized. One thing to highlight here is that two tags cannot be allocated to a single slot; however, the reverse can happen. Also, it can be observed that in the worst case the number of possible allocations is of 𝒪(|𝒮|^|𝒯^'|). Now, we state our problem formally in Definition <ref>.
Given a trajectory database 𝒟, a billboard database ℬ, a selected subset of slots 𝒮 and tags 𝒯^' respectively, the tag allocation problem asks to assign a tag to a slot such that the influence is maximized.
From the computational point of view, this problem can be posed as follows:
Tag Allocation Problem
Input: A trajectory (𝒟) and Billboard (ℬ) Database, A set of slots (𝒮) and tags (𝒯).
Problem: Find out an allocation of the tags to the slots such that the influence is maximized.
By a reduction from the Set Cover Problem, we can show that the Tag Assignment Problem is NP-hard. This result is presented in Theorem <ref>. Due to space limitations, we are not able to give the whole reduction.
The Tag Assignment Problem in Billboard Advertisement is NP-hard and hard to approximate within a constant factor.
Next, we discuss the proposed solution methodologies in Section <ref>.
§ PROPOSED SOLUTION APPROACH
Our approach proceeds in two stages: first, selecting influential slots and tags, and second, allocating the tags to the slots. To address the first goal, we adopt the stochastic greedy approach for influential slot and tag selection introduced by Ali et al. <cit.>. Next, we construct a weighted bipartite graph using the selected slots and tags, as described in Section <ref>.
§.§ Construction of the Weighted Bipartite Graph
In this formulation, billboard slots and tags are represented in the form of a bipartite graph 𝒢_b = (𝒰∪𝒱,ℰ_b), where 𝒰 contains the set of tags, 𝒱 contains the set of billboard slots, and in practice |𝒰| ≪ |𝒱|. We adopt the following approach to construct the edge set of the bipartite graph. For every tag u ∈𝒰 and every slot v ∈𝒱, we compute the influence (i.e., edge weight) of each individual allocation using Equation No. <ref>. Next, we compute the mean weight (μ) and the standard deviation of all the edge weights as μ = 1/|ℰ_b| ∑_e ∈ℰ_b 𝒲(e) and σ = √(1/|ℰ_b| ∑_e ∈ℰ_b (𝒲(e) - μ)^2), respectively. For each e ∈ℰ_b we compute the θ-score value 𝒵(e) = (𝒲(e) - μ)/σ and prune the edge if 𝒵(e) < θ, where θ is a user-defined parameter (it may be -1, 0, etc.). In other words, we set a threshold μ + θ·σ, and if 𝒲(e) < μ + θ·σ, then we prune that edge. This method ensures that only edges with weights significantly lower than the average are pruned.
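A minimal Python sketch of this pruning step is given below, assuming the edge weights have already been computed as the tag-specific influence of each (tag, slot) pair; the function and variable names are ours.

import statistics

def prune_edges(edge_weights, theta):
    # edge_weights: dict mapping (tag, slot) -> W(e)
    mu = statistics.mean(edge_weights.values())
    sigma = statistics.pstdev(edge_weights.values())
    threshold = mu + theta * sigma            # prune e whenever W(e) < mu + theta * sigma
    return {e: w for e, w in edge_weights.items() if w >= threshold}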
Next, we introduce the one-to-many bipartite matching approach and describe it in Section <ref> to address the second goal.
§.§ One-to-Many Bipartite Matching (OMBM)
The OMBM algorithm takes as input a bipartite graph containing edges between tags and billboard slots and returns an allocation of tags to the billboard slots in the form of a 1-D array. This array maps each billboard slot to a tag such that each slot is allocated to at most one tag; however, one tag may be allocated to more than one slot. At first, in Line No. 1 to 3, we initialize the array 𝒬 as empty for the slots v ∈𝒱. The loop in Line No. 4 executes until the set of billboard slots 𝒱 is empty, and in Line No. 5 to 6, for each slot v ∈𝒱, the possible matches among the adjacent vertices of v are stored in 𝒞_v from the tag set 𝒰. Initially, 𝒞_v is initialized with the adjacent vertices of v, i.e., 𝒜(v). Before further discussion, we define the notion of a dominating edge in Definition <ref>.
In a bipartite graph, an edge ℰ_uv is said to be dominating if 𝒲_uv > 𝒲_uv^' and 𝒲_uv > 𝒲_u^'v for all v^'≠ v and u^'≠ u, where 𝒲_uv denotes the weight of the edge between tag u ∈𝒰 and slot v ∈𝒱.
Next, in Line No. 7, the dominating edge set E_d is initialized to the empty set. In Line No. 8 to 12, for each slot v ∈𝒱, the function lc(v) takes v as input and returns an adjacent tag u of v that has maximum edge weight, i.e., the best tag for matching. If v is also the best match for u, i.e., v = lc(lc(v)), then such a vertex pair is added to E_d. Now, in Line No. 11 to 17, Algorithm <ref> deals with the edges that are not added to E_d in Line No. 8 to 12. At first, we pick an end node u ∈𝒰 from E_d. Next, we delete the edge ℰ_uv associated with u, as the slot is already matched with a tag. In Line No. 17, we iterate through all of 𝒞_u, which represents the slots that have not yet been matched with any tag. Now, in Line No. 18, we also check whether the total number of slots |Count_lc(v)| connected to the tag lc(v) is still within the upper limit Bound_lc(v). If both conditions are satisfied, we add an edge between v and u to E_d and allocate slot v to tag u. In Line No. 21 and 22, we remove the allocated tag u and slot v, respectively. Finally, Algorithm <ref> returns 𝒬, which allocates billboard slots to tags.
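The following Python sketch is a simplified re-implementation of this matching loop as we read it: mutual best (tag, slot) pairs are allocated first, a greedy fall-back handles the remaining slots, and the per-tag upper limit is enforced through a bound argument. It compresses the E_d bookkeeping of Algorithm <ref>, so it is an approximation of the pseudo-code rather than a literal transcription, and all names are ours.

def ombm(edges, tags, slots, bound):
    # edges: dict (tag, slot) -> weight; bound[t]: maximum number of slots tag t may receive
    alloc = {}                                    # the 1-D array Q: slot -> tag
    count = {t: 0 for t in tags}
    remaining = set(slots)
    while remaining:
        matched = False
        for v in list(remaining):
            open_tags = [t for t in tags if (t, v) in edges and count[t] < bound[t]]
            if not open_tags:
                remaining.discard(v)
                continue
            best_tag = max(open_tags, key=lambda t: edges[(t, v)])            # lc(v)
            best_slot = max((u for u in remaining if (best_tag, u) in edges),
                            key=lambda u: edges[(best_tag, u)])               # lc(lc(v))
            if best_slot == v:                    # mutual best match: a dominating edge
                alloc[v] = best_tag
                count[best_tag] += 1
                remaining.discard(v)
                matched = True
        if not matched and remaining:             # no mutual best left: assign greedily
            v = remaining.pop()
            open_tags = [t for t in tags if (t, v) in edges and count[t] < bound[t]]
            if open_tags:
                t = max(open_tags, key=lambda t: edges[(t, v)])
                alloc[v] = t
                count[t] += 1
    return alloc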
Complexity Analysis.
In Line No. 1 to 3, initializing a 1-dimensional array takes 𝒪(k) time, as there are k slots. The loop at Line No. 4 executes 𝒪(k) times, and Line No. 5 to 6 execute in 𝒪(k^2) time. Next, in Line No. 7, initializing the dominating set E_d takes 𝒪(k) time. Line No. 8 executes 𝒪(k^2) times, and in Line No. 9, finding the best-matching vertex takes 𝒪(k^2·ℓ) time. Line No. 10 to 12 take 𝒪(k^2) time, so Line No. 8 to 12 take 𝒪(k^2·ℓ + k^2) time to execute. Next, Line No. 13 executes in 𝒪(k^2) time, and Line No. 14 to 16 take 𝒪(k^2) time. The loop at Line No. 17 to 20 executes in 𝒪(k^2·ℓ) time. Line No. 21 and 22 take 𝒪(k^2) time. Finally, Line No. 23 to 24 are executed 𝒪(k) times in the worst case. So, Algorithm <ref> takes 𝒪(k + k + k^2·ℓ + k^2 + k^2·ℓ) time in total, i.e., 𝒪(k^2·ℓ) time in the worst case. The additional space requirement of Algorithm <ref> is 𝒪(k + k + ℓ), i.e., 𝒪(k + ℓ), for the 1-D array 𝒬, 𝒞(v), and E_d.
Note that |E| = kℓ in the case of a complete bipartite graph. However, if some slots are not eligible for certain tags due to the θ-score threshold value, then |E| < kℓ. Therefore, the complexity of Algorithm <ref> is linear in the number of edges generated between tags and slots.
The time and space complexity of the Algorithm <ref> will be 𝒪(k^2·ℓ) and 𝒪(k+ℓ), respectively.
All edges in the dominating edge set E_d must be a part of the optimal matching in the solution obtained before Line No. 13 in Algorithm <ref>.
Assume ℳ^opt is an optimal matching in graph 𝒢. Suppose there exists an edge {u,v}∈ E_d not in ℳ^opt. Let {u,v'}∈ℳ^opt with 𝒲_uv > 𝒲_uv'. Consider ℳ^' = [ℳ^opt∖{u,v'}] ∪{u,v}. Then 𝒲(ℳ^') > 𝒲(ℳ^opt), contradicting ℳ^opt's optimality. Hence, all edges in E_d are in ℳ^opt.
Each billboard slot will be assigned at most one tag in the solution.
Given a billboard slot v ∈𝒱 and a tag u ∈𝒰, u is assigned to v only if 𝒬(v) = ∅ (as stated in Line No. 17 to 20 of Algorithm <ref>). Once a tag u is assigned to a billboard slot v, 𝒬(v) is set to u. Since 𝒬(v) ≠∅ after this assignment, the condition 𝒬(v) = ∅ no longer holds for v. Therefore, v cannot be reassigned to another tag; that is, for every v ∈𝒱 there exists at most one u ∈𝒰 such that 𝒬(v) = u.
This implies that each billboard slot v is assigned at most one tag in the solution.
Each tag u_i∈𝒰 will finally have exactly Bound_i many billboard slots assigned to it.
In Algorithm <ref>, the allocation 𝒬_v = lc(v) performed in Line No. 18 happens only when |Count_lc(v)| < Bound_lc(v), which, together with Lemma <ref>, allows us to conclude this.
§.§ Tag Allocation Process
The overall tag allocation procedure is shown in Algorithm <ref>. At first, in Line No. 1, a set of billboard slots (𝒮) and a set of tags (𝒯) is provided as an input in the stochastic greedy <cit.>, and it returns a subset of slots and tags, i.e., 𝒮^'⊆𝒮 and 𝒯^'⊆𝒯 as output. Next, in Line No. 2, the bipartite graph is generated using bipartite sets of tags and billboard slots. In-Line No. 3, based on the θ-score threshold value, a pruned bipartite graph (𝒢^') is generated. In-Line No 4, 𝒢^' is used in Algorithm <ref> as input, where it finds a mapping of the billboard slots to the tags and returns a multi-slot tag allocation.
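Putting the pieces together, the allocation step of Algorithm <ref> can be sketched as below, treating the stochastic greedy selection of slots and tags as given and reusing the prune_edges and ombm sketches above; the edge_weight callable, the bound argument, and all names are our own illustration.

def allocate_tags(selected_tags, selected_slots, edge_weight, theta, bound):
    # edge_weight(t, v): tag-specific influence of showing tag t on slot v
    edges = {(t, v): edge_weight(t, v) for t in selected_tags for v in selected_slots}
    pruned = prune_edges(edges, theta)            # theta-score pruning described above
    return ombm(pruned, selected_tags, selected_slots, bound)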
Algorithm <ref> gives an approximation ratio of ρ≤ 1 + max_1 ≤ i ≤ m (|𝒦_i| - δ_i), where δ_i∈{0,1} and 𝒦_i is the set of slots assigned to tag u_i.
As mentioned previously, in Algorithm <ref>, Line No. 13 to 20 perform a matching of tag u_i to (|𝒦_i| - δ_i) slots, where δ_i∈{0,1} for i = 1, 2, …, ℓ. Next, lc(v) = u represents that tag u_i is matched with slot v_j, where v ∈𝒱, as in Line No. 18 the condition v = lc(u) = lc(lc(v)) is satisfied. If {v,u} is not part of the optimal matching but is nevertheless added to E_d, then at most (1 + |𝒦_i| - δ_i) many edges may not be considered. The cost of assigning slots to a tag t_i will be at most |𝒦_i| + 1 - δ_i. So, the total cost C_A can be bounded by summing the costs over all tags t_i, i.e., C_A≤∑_i=1^m (|𝒦_i| + 1 - δ_i), and the optimal solution has cost C_opt = ∑_i=1^m |𝒦_i|. So we can write ρ = max( ∑_i=1^m (|𝒦_i| + 1 - δ_i) / ∑_i=1^m |𝒦_i| ).
Since the |𝒦_i| are the optimal slots, ρ≤ 1 + max_1 ≤ i ≤ m( (1 - δ_i)/|𝒦_i| ). The term max_1 ≤ i ≤ m( (1 - δ_i)/|𝒦_i| ) simplifies to (1 - δ_i) in the worst-case scenario where |𝒦_i| = 1, and hence ρ≤ 1 + max_1 ≤ i ≤ m (|𝒦_i| - δ_i).
An Illustrative Example
Consider ten billboard slots 𝒱 = {b_0,b_1,…, b_9} and three tags 𝒰 = {t_0, t_1, t_2} with the corresponding influence values output by the stochastic greedy approach <cit.> on the billboard and tag datasets, as shown in Tables <ref> and <ref>. Next, a bipartite graph 𝒢 and its corresponding 10 × 3 weight matrix are generated. Now, based on the θ-score value described in Section <ref>, a new bipartite graph 𝒢^' with an updated weight matrix is generated. We initialize an empty 1-D array 𝒬 (initialized to -1) to store the allocation, as shown in Table <ref>. In the first iteration, the best allocations of slots to tags are as follows: {(0 → 2), (1 → 2), (3 → 2), (4 → 2), (5 → 2), (6 → 2), (7 → 2), (8 → 2), (9 → 2)}, i.e., all the slots are allocated to only one tag, t_2, and the allocations of tags to slots are { (0 → 4), (1 → 4), (2 → 4)}, i.e., slot b_4 is the best match for t_0, t_1, and t_2, and b_4 is matched to tag t_2, as shown in Table <ref>. The weights of all the edges from slot b_4 to the tags are set to 0, and the influence of tag t_2 is set to 0. Next, the matching is performed between tags t_0, t_1 and all the slots except b_4. Similarly, slot b_9 is matched with t_1 and b_2 is matched with t_0. After this, the influence of all the tags is 0, iteration 1 is completed, and the updated allocation is shown in Table <ref>. Still, some slots are unallocated. So, we restore the initial influence values of all the tags and repeat the process until no further improvement in the allocation occurs. The final allocation of slots to tags is as follows: {(2 → 0), (4 → 2), (5 → 0), (7 → 2), (8 → 1), (9 → 1)}, as shown in Table <ref>.
§ EXPERIMENTAL EVALUATION
In this section, we describe the experimental evaluation of the proposed solution approach. Initially, we start by describing the datasets.
Dataset Description.
We use two datasets for our experiments, previously utilized in various trajectory data analytics studies <cit.>. The New York City (NYC) Dataset, collected from April 12, 2012, to February 16, 2013, includes 227,428 check-ins containing user ID, location name, timestamps, and GPS coordinates [<https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page>]. The VehDS-LA (Vehicle Dataset in Los Angeles) consists of 74,170 samples from 15 streets, featuring data like user ID, street name, latitude, longitude, and timestamps [<https://github.com/Ibtihal-Alablani>]. Further, we have created tag datasets from trajectory datasets of NYC and VehDS-LA containing tag names and tag influences to do our experiments. Additionally, we use billboard data from LAMAR [<http://www.lamar.com/InventoryBrowser>], including billboard ID, latitude, longitude, timestamp, and panel size. The NYC dataset contains 716 billboards (1031040 slots), and the LA dataset includes 1483 billboards (2135520 slots).
Experimental Setup. All the key parameters used in our experiments are summarized in Table <ref>, and the default settings are highlighted in bold. The parameters k and ℓ denote the number of slots and tags, respectively, whereas ϵ decides the sample set size in the stochastic greedy <cit.> from which maximum influential slots and tags are chosen. The user-defined parameter θ controls the θ-score threshold, and λ represents the maximum distance a slot can influence trajectories.
All codes are implemented in Python and executed on an HP Z4 workstation with 64 GB memory and a Xeon(R) 3.50 GHz processor. All the proposed and baseline methods are evaluated for their effectiveness and efficiency. All codes are executed five times, and average results are reported. Due to the anonymity constraint, we do not include the GitHub link of our implementation; it will be added in the camera-ready submission.
Goals of our Experiments.
In this study, we address the following Research Questions (RQ).
* RQ1: How does the number of matching tags to slots increase if we increase the number of slots and tags selected?
* RQ2: If we increase the number of slots and tags, how do the computational time requirements of the proposed methods change?
* RQ3: Varying θ-score, how does the number of matched tags and slots vary?
* RQ4: How does the influence vary before and after allocating tags to slots?
Baseline Methods.
One tag can be assigned to multiple slots in all the baseline methods; however, the opposite is restricted. Now, we will discuss different baseline methods to compare with our proposed approach as follows:
* Brute-force Method (BM). In this approach, for each billboard slot, we search for the best match in each tag based on the edge weight (e.g., influence value), and tags are assigned to slots.
* Max Degree Allocation (MDA). At first, tags are sorted in descending order based on the maximum degree. Then, for each slot, check the degree of the tags connected to the slot and the maximum degree tag allocated to that slot.
* Top-k Slot and Random Tag (TSRT). First, each slot is sorted in descending order based on individual influence value. Next, tags are allocated to the sorted slots randomly.
* Random Allocation (RA). In this approach, tags are allocated to slots uniformly at random.
Experimental Results and Discussions.
This section will discuss the experimental results of the solution methodologies
and address the research questions mentioned in Section <ref>.
Tags, Slots Vs. No. of Matching.
Figure <ref>(a,b) and <ref>(g,h) show the effectiveness of the baseline and proposed methods on the NYC and LA datasets, respectively. We have three main observations. First, with a fixed number of tags (100), when the number of slots varies from 100 to 300, the number of matched tags and slots increases for both the baseline and proposed methods, as shown in Figures <ref>(a,g) and <ref>(b,h). Second, when we increase the number of slots in the `BM' and `MDA' approaches, all the slots are matched with only one tag; this happens because there is one most influential tag that has the maximum degree in the bipartite graph. Among the baseline methods, `RA' and `TSRT' perform well. `OMBM' has a smaller number of matched slots because, in every iteration, a tag can only be allocated to a slot if the slot is the best match for the tag and that tag is also the best for the slot. On the LA dataset, `OMBM' has fewer matched slots than `RA' and `TSRT' by 2.29% to 3% and 0.76% to 2%, respectively. On the NYC dataset, the number of matched slots is around 10.70% lower compared to the baseline methods. Third, on the NYC dataset, `OMBM' outperforms all the baseline methods regarding the number of matched tags, while on the LA dataset, the number of matched tags in `OMBM' is lower. This happens because in NYC the θ-score threshold value is very low compared to the LA dataset; for this reason, the number of edges that remain in the bipartite graph is large, and more tags are matched.
Tags, Slots Vs. Time.
Figure <ref>(c) and <ref>(i) show the efficiency of our proposed solution, and from this, we have three main observations. First, with the increase in slots, the run time of all the baseline and proposed approaches increases. The time requirement of the stochastic greedy approach is not presented in this paper; however, in NYC, it takes 152632, 152725, 153038, 153046, and 153125 seconds for 100, 150, 200, 250, and 300 slots, respectively. In LA, the run time is 10880, 11039, 11259, 11283, and 11289 seconds for the same parameter settings. Second, the proposed `OMBM' takes more time than the baseline methods because it needs a comparison of vertices on both sides of the bipartite graph, i.e., the best match of each slot to the tags and of each tag to the slots, resulting in a significant increase in run time. Third, `RA' and `TSRT' are the fastest methods because they need to compare each slot to the tags only once.
Varying θ value. Figure <ref>(d,e) and Figure <ref>(j,k) show the impact of varying θ values on the NYC and LA datasets, respectively. We have three main observations. First, the parameter θ controls the pruning criterion of the bipartite graph in Algorithm <ref>, and accordingly, a dense or sparse bipartite graph is generated. Second, both effectiveness and efficiency are very sensitive to varying θ values on the NYC and LA datasets for `OMBM', `RA', and `TSRT' when we vary the number of billboard slots from 100 to 300. Third, a large θ value increases efficiency at the cost of worse effectiveness. This happens because when θ increases, the pruning threshold increases, the graph becomes sparse, and the number of matched slots and tags becomes smaller.
Effectiveness Test. Figure <ref>(f) and Figure <ref>(ℓ) shows the effectiveness of the proposed and baseline methods for NYC and LA datasets, respectively. We have two main observations. First, stochastic greedy (SG) selects the required number of slots and tags, which returns maximum influence compared to the proposed and baseline methods. This occurs because the output of `SG' is the input of the proposed and baseline methods in the form of a bipartite graph, and after applying the θ-score threshold, some slots and tags are pruned from that graph. Second, our proposed `OMBM' outperforms the baseline methods in allocating the most influential tags to appropriate slots, and among the baseline methods, `RA' and `TSRT' perform well compared to the other baselines.
Scalability Test.
To evaluate the scalability of our method, we vary the number of slots and tags from 100 to 300 and 25 to 125, respectively. As shown in Figure <ref>(c,i), the efficiency of `OMBM' is very sensitive compared to the baseline methods with a fixed number of tags and varying slot values. We observed that the run time on both the NYC and LA datasets increases by 3× to 5× for `OMBM' and the baseline methods as the number of slots increases from 100 to 300.
Additional Discussion.
Now, we discuss the effect of additional parameters in our experiments. First, we vary the number of slots (k) between 100 and 300 and the number of tags (ℓ) between 25 and 125 to select the required slots and tags. In Figure <ref>, all results are presented for varying k values while keeping the number of tags fixed. Second, in the stochastic greedy approach <cit.>, ϵ determines the size of the random sample subsets. This work uses ϵ = 0.01 as the default setting. Also, we varied the ϵ value from 0.01 to 0.2 and observed that as ϵ increases, the run time of the stochastic greedy approach decreases; however, the quality of the solution degrades, i.e., the influence value decreases. Third, the influence of all the proposed and baseline methods increases when we vary the distance (λ) from 25 meters to 150 meters. Both influence and run-time increase because one slot can influence more trajectories within its influence range. In our experiment, we set the value of λ to 100 meters for both the NYC and LA datasets, as we observed that increasing λ beyond 100 meters resulted in only marginal improvements. This paper does not report the effect of varying λ and ϵ due to space limitations.
§ CONCLUDING REMARKS
In this paper, we have studied the Tag Assignment Problem in the context of billboard advertisement and proposed a one-to-many bipartite matching-based solution approach. The proposed methodology is illustrated with an example. An analysis of the proposed methodology has been done to understand its time and space requirements as well as its performance guarantee. Experiments with real-world datasets show that our approach outperforms baseline methods in terms of influence. This study can be extended in the following directions. Our study does not consider the `zonal influence constraint', which can be considered in future work. The methodology proposed in this paper works for a single advertiser, and one important future direction will be to extend our study to a multi-advertiser setting.
|
http://arxiv.org/abs/2409.03282v1 | 20240905064701 | Interpretable mixture of experts for time series prediction under recurrent and non-recurrent conditions | [
"Zemian Ke",
"Haocheng Duan",
"Sean Qian"
] | cs.LG | [
"cs.LG",
"eess.SP"
] |
§ ABSTRACT
Non-recurrent conditions caused by incidents are different from recurrent conditions that follow periodic patterns. Existing traffic speed prediction studies are incident-agnostic and use a single model to learn all possible patterns from these drastically diverse conditions. This study proposes a novel Mixture of Experts (MoE) model to improve traffic speed prediction under two separate conditions, recurrent and non-recurrent (i.e., without and with incidents). The MoE leverages separate recurrent and non-recurrent expert models (Temporal Fusion Transformers) to capture the distinct patterns of each traffic condition. Additionally, we propose a training pipeline for non-recurrent models to remedy the limited-data issue. To train our model, multi-source datasets, including traffic speed, incident reports, and weather data, are integrated and processed into informative features. Evaluations on a real road network demonstrate that the MoE achieves lower errors compared to other benchmark algorithms. The model predictions are interpreted in terms of temporal dependencies and variable importance in each condition separately to shed light on the differences between recurrent and non-recurrent conditions.
§ INTRODUCTION
Transportation networks, designed to serve people with efficient mobility, are plagued by congestion. In 2019, U.S. roadways witnessed 8.8 billion hours of travel delay and 55 hours of delay per driving commuter <cit.>, around half of which is associated with incidents <cit.> (25% with accidents, 15% with weather, and 10% with work zones). These planned and unplanned incidents (e.g., hazardous weather conditions, accidents, local events, etc.) on highway networks can catastrophically impact mobility and safety. Mitigating incident impacts requires accurate and ahead-of-curve real-time prediction and proactive operational management <cit.>. This study seeks to address the primary challenge of accurately predicting network traffic states.
Traffic conditions with the occurrence of incidents are regarded as non-recurrent, which are distinct from the recurrent conditions that follow periodic patterns. Non-recurrent conditions caused by incidents are volatile because incidents typically lead to sudden disruptions in the flow of traffic. These incidents create irregular traffic patterns and can significantly impact travel times and congestion levels. In contrast, recurrent conditions are the result of consistent and predictable factors, such as daily rush hours or regular bottlenecks, where traffic patterns follow a routine cycle.
Despite consideration of road incidents (accidents, severe weather impacts, etc.) in modeling, existing approaches to traffic prediction fail to distinguish between recurrent conditions and non-recurrent conditions. A general consensus is that existing real-time traffic prediction works reasonably well for recurrent traffic, but not yet well for non-recurrent traffic (see Figure <ref>, in which a state-of-the-art time series forecasting model is applied). Accurately predicting traffic states in non-recurrent conditions by learning from emerging multi-source traffic data is less studied. What makes it particularly challenging is to work with an incident that occurs at a location/time where incident data is very rare or even nonexistent. It is also unclear how predictions in non-recurrent conditions differ from recurrent ones.
A Mixture of Experts (MoE) <cit.> is a universal model structure that combines multiple specialized sub-models, or "experts," each tailored to handle different aspects or patterns within a dataset. A gating network determines which expert(s) to utilize for a given input, optimizing performance by leveraging the strengths of each expert for specific types of data or tasks. Inspired by the diverse capabilities of MoE shown in recent large language models (e.g., <cit.>) and the observation that data of recurrent and non-recurrent conditions is heterogeneous, we propose a MoE-based model that contains a recurrent expert model and a non-recurrent expert model. For the expert model backbone, Temporal Fusion Transformer (TFT) <cit.> is selected because of its superior forecasting capability and interoperability. Moreover, a general training pipeline is proposed for the non-recurrent expert model to remedy the limited data issues.
The contributions of this study can be summarized as follows.
* Differentiating from the literature that is incident-agnostic, this study explicitly considers incident occurrences, so as to improve the prediction in both recurrent and non-recurrent conditions.
* We propose a MoE model that adopts TFT as expert sub-models and multi-source features are utilized.
* We propose a general training pipeline for non-recurrent traffic prediction models to overcome the challenge of extremely limited non-recurrent datasets.
* Model predictions are interpreted to shed light on differences of forecasting under recurrent and non-recurrent conditions.
The remainder of this paper is organized as follows. Related work is reviewed and summarized in Section <ref>. Section <ref> introduces the multi-source datasets. Subsequently, section <ref> elaborates on data pipeline, model structure, and model training. Experiments and results are included in section <ref>. Last, the paper concludes with a summary of the main findings in Section <ref>.
§ RELATED WORK
Leveraging machine learning to predict traffic flow or travel time has been widely studied in the last decades. First, linear time series analysis has been widely recognized and used for traffic flow/speed prediction, such as the Auto-Regressive Integrated Moving Average (ARIMA) utilized by <cit.>, <cit.>, and <cit.>, for instance. <cit.> adopts Kalman filtering for traffic flow forecasting. Non-parametric regression models were also utilized in <cit.> and <cit.>. Classical machine learning models, such as support vector machines, are also used to predict traffic flow <cit.> as well as travel time <cit.>. Compressed sensing is exploited by <cit.> to reduce the complexity of the network; then support vector regression (SVR) is used for predicting travel speed on a nationwide traffic network in Singapore. <cit.> applied a hidden Markov model by incorporating traffic volume, lane occupancy, and traffic speed data, using data from a 38-mile corridor of I-4 in Orlando, FL. In addition, <cit.> and <cit.> also used Markov chains to predict travel time on arterial routes. In those studies, the features used to train the models are oftentimes limited to the road segment itself. Very few spatio-temporal features were incorporated, which can drastically improve prediction performance, particularly under incidents.
Recently, studies have taken spatio-temporal correlations into consideration when predicting link travel time or flows. <cit.> considered spatial correlations as a function of distance and degree of neighbors when applying a multivariate autoregressive moving-average model to the forecasting of traffic speed, which was tested on a dataset collected via 25 loop detectors in Athens, Greece. <cit.> discusses extensions of time series prediction models such as ARIMA by considering correlations among neighbors and the utilization of LASSO for model selection. Patterns of the spatial and temporal prediction errors are inferred through k-means clustering as well as PCA <cit.>. <cit.> defined a graph-based Coefficient of Determination (CoD) matrix and utilized a modified BFS algorithm to reduce the time complexity of calculating the CoD matrix; on top of that, a graph-based lag-STARIMA is proposed and used for travel time prediction. <cit.> proposes a method using temporal Bayesian networks. <cit.> introduces a space–time diurnal (ST-D) method in which link-wise travel time correlation at multiple lag times is utilized. <cit.> utilizes Gaussian process regression (GPR) models and graphical Lasso to forecast traffic flow. <cit.> proposes a KNN model to forecast travel time up to one hour ahead; the model uses redefined inter-segment distances by incorporating the grade of connectivity between road segments, and considers spatio-temporal correlations and state matrices to identify the traffic state. <cit.> proposes a modified multivariate spatial-temporal autoregressive (MSTAR) model by leveraging the distances and average speeds of road networks to reduce the number of parameters; the algorithm was tested on a road network of 502 links whose traffic status is collected by loop detectors, and the results remain accurate for up to one hour.
Recent developments in deep learning in general have also accelerated the advancement of traffic prediction models. Several trials have been done using (deep) neural networks to estimate short-term travel time, e.g., <cit.>. In particular, <cit.> proposes a restricted Boltzmann machine (RBM)-based RNN model with two layers; the outputs are binary for each link: congested or not. Matrices are used to represent the transportation network, and no spatial information is utilized. The model is tested using taxi GPS data from Shenzhen, China. <cit.> proposes a deep learning method to impute traffic flow data by taking into consideration a few spatial and temporal factors, such as weather and day of week. <cit.> exploits the spatio-temporal relations of network traffic to make accurate and ahead-of-curve traffic predictions on a network level.
§ DATASET
This paper examines a Transportation Systems Management and Operations (TSMO) network in Maryland US, as illustrated in Figure <ref>. The TSMO network includes Interstate 70, US Route 29, and US Route 40, extending from the MD 32 interchange to Interstate 695. We utilize three primary data sources: (1) probe-sourced traffic speed measurements from INRIX[<https://inrix.com/products/ai-traffic/>], (2) crowd-sourced traffic incident reports from Waze[<https://www.waze.com/>], and (3) weather data from Weather Underground[<https://www.wunderground.com/>]. Historical datasets from INRIX and Waze are maintained by the Regional Integrated Transportation Information System (RITIS) platform[<https://ritis.org/>], while Weather Underground data were retrieved using the Wunderground API.
The TSMO network comprises 181 link segments. The multi-sourced dataset was compiled by organizing different source data at a 5-minute frequency from 5:30 AM to 8:59 PM daily, covering the period from February 14, 2022, to February 13, 2023. Raw data with a frequency shorter than 5 minutes were aggregated to match the 5-minute frequency, while linear interpolation was applied to time series with missing entries or longer collection intervals.
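As an illustration of this alignment step, a pandas-based sketch of the 5-minute resampling is given below; it assumes each source is available as a DataFrame with a DatetimeIndex, and the function and column names are ours.

import pandas as pd

def to_5min(df, value_cols, start="05:30", end="20:59"):
    # df: records of one source for one road segment, indexed by timestamps
    out = df[value_cols].resample("5min").mean()      # aggregate finer-grained records
    out = out.interpolate(method="linear")            # fill missing entries
    return out.between_time(start, end)               # keep the studied daily window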
§.§ INRIX traffic speed data
The historical traffic speed measurements by INRIX were acquired from RITIS, encompassing major US highways and arterials. The INRIX data, collected at 5-minute intervals, relates to road segments identified by the INRIX Traffic Message Channel (TMC) code. Each record includes the TMC segment code, timestamp, observed speed (mph), average speed (mph), reference speed (mph), and two entries reflecting the confidence levels: the confidence score and confidence value. The dataset covers the period from February 14, 2022, to February 13, 2023. We fill in missing values by applying linear interpolation.
§.§ Waze incident data
Waze is a mobile navigation application, which provides not only navigation information but also event information such as traffic crashes, car breakdowns, congestion, hazards, and police presence. The event information is provided and updated based on data supplied by Waze users. Users can initiate new reports on events or check whether an event is still present. The Waze data is queried from RITIS. The data span from February 14, 2022, to February 13, 2023, and are represented by dots in Figure <ref>.
§.§ Weather underground data
Weather Underground provides hourly weather measurements, detailing temperature, pressure, dew point, humidity, wind speed, hourly precipitation, visibility, and a text description of the overall weather conditions.
§ METHODOLOGY
§.§ Data pipeline for multi-source data
In this study, multi-source data is utilized to predict the traffic speed of multiple road segments, which can be formulated as an univariate or multivariate problem. To predict effectively, the model should take as input normalized and aligned features. Therefore, this section provides a data pipeline to transform raw multi-source data into predictive features. First, the raw data is processed to be features. Then, the features are aligned both spatially and temporally.
§.§.§ Data processing
Traffic speed
The raw speed data is organized from INRIX probe vehicle data at TMC (Traffic Message Channel) resolution with a temporal resolution of 5 minutes. The slowdown speed (SD) is then extracted from the organized TMC speed data by Equation (<ref>). The reason for including SD is to provide the model with indicators of traffic congestion and incidents and to denoise the incident features, which will be elaborated on later.
SD_i^d,t = max( (1/N_i) ∑_j∈Γ^-1 y_j^d,t - y_i^d,t, 0 )
where y_i^d,t denotes the speed of link i on day d at time t, and the first term computes the average speed of the N_i upstream links of link i, the set of which is denoted by Γ^-1.
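A direct vectorized implementation of Equation (<ref>) could look as follows; the array layout and names are our own.

import numpy as np

def slowdown(speed, upstream):
    # speed: array of shape (num_links, num_timesteps)
    # upstream[i]: list of indices of the upstream links of link i
    sd = np.zeros_like(speed, dtype=float)
    for i, ups in enumerate(upstream):
        if ups:                                   # links without upstream neighbours keep SD = 0
            sd[i] = np.maximum(speed[list(ups)].mean(axis=0) - speed[i], 0.0)
    return sd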
Incident features
The incident status of targeted road segments is initially collected from Waze. Several types of incidents, including accidents, road-closing events, and hazard floods, are deemed critical and chosen for our study. Because Waze does not record all critical events or anomalies and some Waze reports have trivial impacts on traffic states, we propose an algorithm to denoise the Waze incident data, which is summarized in algorithm <ref>. First, the top-n percent of slowdown speed is identified as the abnormal slowdown, which identifies unreported incidents. Then, for each incident reported by Waze, if there is no abnormal slowdown during the reported incident interval, the incident is regarded as insignificant and removed. Subsequently, the value of n is adjusted if the removal and adding percentages do not satisfy the thresholds. If all thresholds are met, the denoised incident indicator matrix is output after adding unreported anomalies and deleting insignificant incidents.
Given the denoised incident indicators, three incident features are produced, including segment incident indicator, network incident indicator, and incident count. The segment incident indicator reflects whether there are any incidents on this road segment at this moment, while the network incident indicator reflects whether there are any incidents on any road segments in the entire road network. The incident count is the number of incidents happening in the network at this time.
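Given the denoised indicator matrix, the three incident features can be computed as in the following sketch (array layout and names are ours):

import numpy as np

def incident_features(incident_mat):
    # incident_mat: binary array (num_segments, num_timesteps) of denoised incident indicators
    segment_indicator = incident_mat                  # per-segment, per-time indicator
    network_indicator = incident_mat.max(axis=0)      # 1 if any segment has an active incident
    incident_count = incident_mat.sum(axis=0)         # number of concurrent incidents
    return segment_indicator, network_indicator, incident_count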
Time features
We consider four types of time variables, including hour-of-day, day-of-week, month-of-year, and official US holidays. For these time features, cyclic sine and cosine functions are used to transform them into numerical features [t_i^(sin), t_i^(cos)] as follows.
t_i^(sin) = sin(2π i/T)
t_i^(cos) = cos(2π i/T)
where i denotes the hour or month index and T denotes the total number of hours in a day (T=24) or the number of months (T=12) in a year. A benefit of cyclic encoding is that time variables are mapped onto a circle, so the lowest values are next to the largest value (e.g. 0 am is next to 11 pm). Holidays are encoded into binary variables to indicate whether it is a holiday in the US.
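The cyclic encoding above amounts to the following short helper (names are ours):

import numpy as np

def cyclic_encode(i, period):
    # e.g. hour-of-day with period = 24, month-of-year with period = 12
    angle = 2.0 * np.pi * np.asarray(i) / period
    return np.sin(angle), np.cos(angle)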
Weather features
The weather information is extracted from the Weather Underground. The processing follows the procedures specified in <cit.>. All the processed features are summarized in Table <ref>.
§.§.§ Data integration
Data in this study are collected from multiple sources, so they are initially not aligned either spatially or temporally. To leverage these data to predict traffic states, the following spatial and temporal integration steps are conducted.
Spatial integration INRIX speed data is spatially labeled by TMC road segment code, so TMC road segments serve as the spatial integration basis in our study. Waze incidents are linked to the TMC road segments based on the incident locations. The weather data represents the weather conditions of the entire network area, so the weather features are identical along different road segments at same time.
Temporal integration. Different data sources collect data at different time intervals. Waze events are reported by Waze users in real time without specific time intervals, and the Weather Underground data consists of hourly measurements. INRIX speed data is collected at 5-minute intervals, so we map all features into 5-minute intervals. Waze data is interpolated or aggregated into 5-minute intervals, and we resample the hourly weather data and fill in the missing values.
§.§ Model structure
§.§.§ Mixture of experts
The Mixture of Experts (MoE) model is a machine learning approach that combines the strengths of multiple specialized models to solve complex prediction tasks. Each "expert" within the mixture is typically a separate neural network or other machine learning models. A gate or router dynamically assigns input data to the most appropriate experts. This gate determines the weighting of each expert's output based on the input features. The MoE approach is advantageous for predicting recurrent and non-recurrent traffic states due to its ability to handle diverse and complex patterns in traffic data. By employing multiple specialized models (experts) and a gate to dynamically allocate input data, MoE allows each expert to focus on specific aspects of traffic patterns, leading to more accurate and robust predictions. Besides, this method enhances model interpretability, as it highlights the factors influencing different traffic states, and offers flexibility and scalability, allowing for the addition of new experts without retraining the entire model.
We adopt the MoE model structure to suit the diverse needs of predictions in recurrent and non-recurrent conditions. Specifically, our model is composed of independent recurrent and non-recurrent components, thereby enabling efficient adaptation to diverse traffic conditions.
_t = (1 - p_t) ⊙μ_ϕ (_t-c,⋯,_t, ⋯, _t+h-1; _t-c, ⋯, _t-1)_recurrent component
+ p_t ⊙μ_θ (_t-c,⋯,_t, ⋯, _t+h-1; _t-c, ⋯, _t-1)_non-recurrent component
where _t is predicted link speeds at time t, and is the past speed values. μ_ϕ (·) is the recurrent expert model, and μ_θ (·) is the non-recurrent expert model. The input of expert models includes input features from the beginning of context window t-c to the end of prediction window t+h-1 and past speed values from t-c to t-1. p_t is the assigned weight to the output of the non-recurrent expert model. Our model overview is plotted in Figure <ref>.
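The convex combination in the equation above can be sketched as follows, treating the two experts (TFT models in our case) as opaque callables and the weight p_t as given; this is an illustration with our own names, not the full implementation.

import numpy as np

def moe_predict(p_t, recurrent_expert, nonrecurrent_expert, x_context, x_future, y_past):
    # p_t in [0, 1]: weight assigned to the non-recurrent expert
    y_rec = recurrent_expert(x_context, x_future, y_past)      # mu_phi(.)
    y_non = nonrecurrent_expert(x_context, x_future, y_past)   # mu_theta(.)
    return (1.0 - p_t) * np.asarray(y_rec) + p_t * np.asarray(y_non)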
§.§.§ Expert model
Though the choice of expert models is flexible, the temporal fusion transformer (TFT) model <cit.> is adopted as the expert model backbone in our study for two reasons. First, TFT demonstrated superior time series forecasting performances in various types of datasets including traffic and electricity <cit.>. More importantly, unlike other deep learning models that work as black boxes, TFT is more interpretable by capturing the temporal dependencies and quantifying the global feature importances.
The main blocks within TFT are plotted on the right of Figure <ref>. The input features and past target values first go through variable selection blocks, in which gated residual networks are utilized to assign variable selection weights to all input variables. Then, the weighted input features in the context time window and the weighted past target values are sent to a long short-term memory (LSTM)-based encoder, while the weighted input features in the prediction window are sent to an LSTM-based decoder. Subsequently, a temporal fusion decoder calculates the multi-head attention among embeddings of different time steps similar to the attention mechanism in the transformer model <cit.>. These blocks enable TFT to learn the temporal dependencies and the variable importance, which bring not only strong forecasting performances but also superior interpretability. For details about TFT, we refer to <cit.>.
§.§ Model training
To account for temporal data distribution shift issues, the dataset was divided into training (80%), validation (10%), and test (10%) sets over time. Therefore, the least recent data is used for model training, and models demonstrating the best performance on the validation set are selected to test on the most recent data.
In time series forecasting problems, there are typically two time windows, including a context window of length c and a prediction window of length h. The context window is how far the model looks back, and the prediction window is how far the model predicts. Thus, the datasets are processed into data slices of length c+h using a sliding window with a step size of 1.
Given the observation that traffic patterns with occurrences of incidents deviate from recurrent traffic patterns, we define recurrent and non-recurrent conditions based on the incident occurrences within the road network. Figure <ref> demonstrates the four most common conditions. The solid circles in Figure <ref> are time steps when there are any incidents within the road network, while the hollow circles are time steps without any incidents. If all time steps in both windows are incident-free, it is obviously a recurrent condition. If an incident happens in the context window but ends before the prediction window, it is categorized as a recurrent condition. If the incident happens in the prediction window or the incident happens in the context window but lasts to the prediction window, it is regarded as a non-recurrent condition.
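The window slicing and condition labelling described above can be sketched as follows, assuming time-aligned numpy arrays and the network-level incident indicator described earlier; names are ours.

import numpy as np

def make_slices(features, speed, network_incident, c, h):
    # features: (T, d), speed: (T, n_links), network_incident: (T,) binary indicator
    slices = []
    for t in range(c, len(speed) - h + 1):
        condition = "non-recurrent" if network_incident[t:t + h].any() else "recurrent"
        slices.append((features[t - c:t + h], speed[t - c:t], condition))
    return slices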
The data distribution of the non-recurrent conditions is different from that of the recurrent conditions because of the occurrence of incidents. Therefore, the recurrent expert model is trained on the training data in recurrent conditions only. For the non-recurrent expert model, we first pre-train it on the training data in recurrent conditions, and then fine-tune it on the training data in non-recurrent conditions. This training procedure for the non-recurrent model is designed for two reasons. First, non-recurrent conditions are not as prevalent as recurrent conditions, so the dataset size for non-recurrent conditions is relatively small. Thus, using non-recurrent data only is not desirable for deep learning models that typically require large datasets. Besides, we think recurrent data contain some common and general traffic patterns that may be helpful for prediction in non-recurrent conditions. In the above train, pre-train, and fine-tune processes, the quantile losses in <cit.> are adopted.
§ EXPERIMENTS AND RESULTS
§.§ Experiment setup
Aside from the TFT model, this study includes the following baselines for comparison.
* LOb (last observation) In LOb, the latest observed speed in the context window on the road segment is used to predict the speed in the prediction window,
_t:t+h-1 = _t-1
* DeepAR (deep autoregressive model <cit.>) DeepAR utilizes an LSTM-based recurrent neural network to learn the probabilistic distributions of multiple related time series <cit.>. During inference, DeepAR outputs probabilistic distributions, from which the predictions are sampled. Though DeepAR is trained on multiple time series, it performs univariate time series forecasting.
* MVAR (multivariate autoregressive <cit.>) Similar to DeepAR, MVAR adopts an LSTM-based recurrent neural network. However, MVAR models multiple time series as a multivariate probabilistic distribution, and a low-rank covariance structure is leveraged to model the high-dimensional covariance matrix. Predictions are sampled from the modeled multivariate probabilistic distribution during inference.
To evaluate the model performances, we calculate the Symmetric Mean Absolute Percentage Error (SMAPE) in equation (<ref>) and Root Mean Squared Error (RMSE) in equation (<ref>) of traffic speed predictions of all prediction horizons and all road segments in the test dataset.
SMAPE = 1/T ∑_t=1^T 1/H ∑_h=1^H 1/N ∑_n=1^N |ŷ_n, t + h - y_n, t + h| / ( (|ŷ_n, t + h| + |y_n, t + h|) / 2 ) × 100
RMSE = 1/T ∑_t=1^T 1/H ∑_h=1^H √( 1/N ∑_n=1^N ( ŷ_n, t + h - y_n, t + h)^2 )
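Both metrics can be computed with a few lines of numpy, assuming predictions and ground truth are stored as arrays of shape (T, H, N); this sketch uses our own names.

import numpy as np

def smape(y_true, y_pred):
    # speeds are positive, so the denominator is nonzero
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return np.mean(np.abs(y_pred - y_true) / denom) * 100.0

def rmse(y_true, y_pred):
    # root over the N road segments, then averaged over horizons and time steps
    return np.mean(np.sqrt(np.mean((y_pred - y_true) ** 2, axis=-1)))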
§.§ Model performances
First, the expert models are evaluated on the recurrent and non-recurrent test datasets respectively. Table <ref> summarized the evaluation results. DeepAR, MVAR-R, and TFT-R are models trained on the recurrent dataset, while DeepAR-NR, MVAR-NR, and TFT-NR are trained on the non-recurrent dataset. Note that TFT-FT is a TFT model that is firstly pre-trained on the recurrent dataset and then fine-tuned on the non-recurrent dataset.
The results of recurrent expert models show that TFT-R leads to the lowest SMAPE and RMSE. Besides, MVAR-R is worse than DeepAR-R, indicating that formulating the traffic speed prediction as a multivariate time series forecasting problem is not desirable in our study. One possible reason is that the efforts to model the correlation between time series hinder the model training efficiency.
The results of non-recurrent expert models also show the superiority of TFT models. TFT-NR and TFT-FT are the two best models. Notably, TFT-FT performs better than TFT-NR, which validates the efficacy of our proposed training pipeline for non-recurrent models.
Comparing the results of recurrent and non-recurrent, SMAPE and RMSE in non-recurrent conditions are generally higher than in recurrent conditions. This observation is expected, because traffic patterns in the non-recurrent conditions are less predictable due to the disruptive impacts of incidents.
Then, the MoE model is tested on the entire test dataset. For comparison, we also train DeepAR, MVAR, and TFT models on the entire train dataset, which are denoted as DeepAR-ALL, MVAR-ALL, and TFT-ALL respectively. The results are summarized in Table <ref>. Among all baselines, TFT-ALL is the best, and our proposed MoE is better than the TFT-ALL, which proves the functionality of the proposed MoE structure.
In time series forecasting problems, how the prediction errors propagate along prediction horizons is of interest. We first show the prediction errors on the entire test dataset in Figure <ref>. Among all different prediction horizons, MoE leads to the lowest SMAPE and RMSE. Besides, both types of errors increase along prediction horizons as expected, but the increased magnitude of MoE is the smallest.
In non-recurrent conditions, the prediction performances on the road segments where incidents happen are critical for proactive management to mitigate incident impacts. To this end, the models are evaluated on the incident road segments during incident occurrences, as Figure <ref> shows. Interestingly, DeepAR-ALL which performs well in the entire test dataset, is sometimes even worse than LOb in this test. This observation indicates that DeepAR-ALL learns to fit the main recurrent pattern and regards these incident conditions as noise. Among all models, MoE leads to the lowest SMAPE and RMSE, and the gap between MoE and other baselines is more significant than the results on the entire test dataset.
To demonstrate how MoE outperforms TFT-ALL, we give two prediction examples of TFT-ALL and MoE in the non-recurrent conditions, as Figure <ref> show. In example 1, TFT-ALL and MoE both can capture the speed drop tendency caused by incidents, and predict that the speed continues to decline in the prediction horizon. However, in example 2 where the speed drop is less observable in the context window, TFT-ALL forecasts that the speed will be flat, while MoE successfully foresees the speed decrease. These examples specifically show MoE can adapt to non-recurrent conditions better than TFT-ALL.
§.§ Interpretation
One research question this study tries to answer is how traffic prediction in non-recurrent conditions differs from that in recurrent conditions. Interpretation of the patterns learned by our proposed MoE model may give some insights.
In the context of deep learning, attention measures how much predictions depend on different vectors. In our study, attention quantifies the temporal dependencies between predicted speed and the values in the context window. The temporal dependencies learned by MoE are plotted separately for recurrent and non-recurrent conditions, as Figure <ref> shows. In both conditions, the largest dependencies are located at the beginning and the end of the context window, indicating the model learns the tendency of past values, such as speed drop and speed recovery. Compared with recurrent conditions, non-recurrent conditions depend more on the most recent time indices. Speed in non-recurrent conditions is volatile due to occurrences of incidents, so the predictions depend more on recent values.
Our models are trained on multi-source features, so we analyze the importance of various features. The variable importances are split by recurrent and non-recurrent conditions. The encoder variables include multi-source features and past speed values, while the decoder variables only contain the multi-source features. Importance percentages of the most important variables are plotted in Figure <ref>. Generally, we find 'speed' is not the most important feature, while 'link incident', 'precipitation', and 'hour of the day' are the most important. This indicates traffic speed either is impacted by incidents or follows a periodic pattern. The decoder variable importance shows holidays significantly affect traffic speed as well. Comparing the decoder variable importance in the recurrent conditions with the non-recurrent conditions, we can find the 'link incident' feature is more important in the non-recurrent conditions.
§ CONCLUSION
In conclusion, this study presents a Mixture of Experts (MoE) model that leverages recurrent and non-recurrent expert models to predict traffic states up to 30 minutes in advance. The MoE architecture allows the model to handle the diverse patterns of recurrent and non-recurrent traffic conditions, leading to superior prediction accuracy compared to baseline models. We also propose a novel training pipeline for non-recurrent models to accommodate limited data issues.
By interpreting the model predictions in recurrent and non-recurrent conditions separately, we shed light on the differences between the two types of conditions. Future work could explore incorporating additional data sources, such as real-time incident severity information, to further enhance prediction accuracy.
§ ACKNOWLEDGEMENT
This research is supported by US Department of Transportation Exploratory Advanced Research Award 693JJ321C000013. The contents of this paper reflect the views of the authors only, who are responsible for the facts and the accuracy of the information presented herein.
|
http://arxiv.org/abs/2409.03675v1 | 20240905162800 | Fine-Grained Equivalence for Problems Related to Integer Linear Programming | [
"Lars Rohwedder",
"Karol Węgrzycki"
] | cs.DS | [
"cs.DS",
"cs.CC"
] |
§ ABSTRACT
Integer Linear Programming with n binary variables and m many 0/1-constraints can be solved in time 2^Õ(m^2)poly(n) and it is open whether the dependence on m is optimal.
Several seemingly unrelated problems, which include variants of Closest String, Discrepancy Minimization, Set Cover, and Set Packing, can be modelled as Integer Linear Programming with 0/1 constraints to obtain algorithms with the same running time for a natural parameter m in each of the problems.
Our main result establishes through fine-grained reductions that these problems are equivalent, meaning that a 2^O(m^2-ε)poly(n) algorithm with ε > 0 for one of them implies such an algorithm for all of them.
In the setting above, one can alternatively obtain an n^O(m) time algorithm for Integer Linear Programming using a straightforward dynamic programming approach, which can be more efficient if n is relatively small (e.g., subexponential in m).
We show that this can be improved to n'^O(m) + O(nm), where n' is the number of distinct (i.e., non-symmetric) variables.
This dominates both of the aforementioned running times.
§ INTRODUCTION
The study of parameterized complexity for Integer Linear Programming has a long history:
classical works
by Lenstra <cit.> and Kannan <cit.> and very recently Rothvoss and Reis <cit.> provide FPT algorithms in the number of variables n of an ILP of the form
Ax ≤ b, x∈ℤ^n.
In an orthogonal line of research,
Papadimitriou <cit.> gave an FPT algorithm in the number of constraints and the size of the coefficients of A and b for an ILP in standard form Ax = b, x∈ℤ^n_≥ 0.
Interest in the second line of work has been renewed by the improved algorithms due to Eisenbrand, Weismantel <cit.> and Jansen, Rohwedder <cit.>, which give essentially optimal running times (mΔ)^O(m)poly(n) due to a conditional lower bound based
on the Exponential Time Hypothesis (ETH) <cit.>. Here, Δ is the maximum absolute size of an entry in A.
The work by Eisenbrand and Weismantel also considers a version where variables are subject to box-constraints,
which will be the primary focus of this work, see definition below.
Integer Linear Programming
Constraint matrix A∈{-Δ,…,Δ}^m× n, right-hand side b∈ℤ^m, variable bounds ℓ, u∈ℤ_≥ 0^n.
Find x∈ℤ^n with
A x = b
ℓ_i ≤ x_i ≤ u_i i=1,2,…,n.
We refer to the variant where ℓ = (0,…,0) and u = (1,…,1) as
binary Integer Linear Programming.
Binary Integer Linear Programming
Constraint matrix A∈{-Δ,…,Δ}^m× n, right-hand side b∈ℤ^m.
Find x∈{0,1}^n with
A x = b.
The running times obtained in <cit.> for either variant are (mΔ)^O(m^2)poly(n). Also for matrices with only 0/1 coefficients nothing better than 2^Õ(m^2)poly(n) is known.
It is an intriguing question whether the slightly unusual exponent of Õ(m^2) is necessary,
which is in the spirit of fine-grained complexity.
Since the dominant complexity-theoretic assumption of P≠NP is not powerful enough
to show precise lower bounds on running times, the field of fine-grained complexity
is concerned with finding lower bounds via stronger
conjectures, see e.g., <cit.>.
A number of such conjectures exist by now, often with interesting connections between
them. Even if one doubts these conjectures, the reductions still
provide insights into how problems relate to each other.
Based on existing conjectures, the best lower bound known on the exponent
of Integer Linear Programming
is Ω(mlog m) from the easier unbounded setting <cit.>,
which is of course not tight in this setting.
In this paper, we take another path: we
assume that Integer Linear Programming cannot be solved faster than the state-of-the-art
and present several other natural parameterized
problems are all equivalent with respect to improvements on their running time.
[ILP Hypothesis]
For every ε > 0, there is no 2^O(m^2-ε)poly(n) time algorithm for
Integer Linear Programming with Δ = O(1).
In all of the problems below, the symbol m is chosen for the parameter of interest.
Many of them are well known applications of ILP techniques, see e.g. <cit.>.
Closest String with Binary Alphabet
Alphabet Σ = {0, 1}, strings s_1,s_2,…,s_m∈Σ^n
Find string t∈Σ^n minimizing
max_i d(t, s_i) ,
where d(t, s_i) is the Hamming distance between t and s_i, i.e., the number of positions the two strings differ in.
We refer to the generalization with arbitrary Σ simply as Closest String.
Discrepancy Minimization
Universe U = {1,2,…, n}, set system S_1,S_2,…,S_m⊆ U
Find coloring χ : U →{-1, 1} minimizing
max_i |∑_u∈ S_iχ(u)| .
Set Multi-Cover
Universe U = {1,2,…,m}, set system S_1,S_2,…,S_n⊆ U, b∈ℕ
Find I⊆{1,2,…,n} of minimal cardinality such that for each v∈ U
there are at least b sets S_i with i∈ I.
Set Multi-Packing
Universe U = {1,2,…,m}, set system S_1,S_2,…,S_n⊆ U, b∈ℕ
Find I⊆{1,2,…,n} of maximal cardinality such that for each v∈ U
there are at most b sets S_i with i∈ I.
As mentioned above, our main result is the following equivalence.
The following statements are equivalent:
* There exists a 2^O(m^2-ε)poly(n) algorithm for Integer Linear Programming with Δ = O(1), for some ε > 0.
* There exists a 2^O(m^2-ε)poly(n) algorithm for Binary Integer Linear Programming with A∈{0,1}^m× n and n≤ m^O(m), for some ε > 0.
* There exists a 2^O(m^2-ε)poly(n) algorithm for Closest String with Binary Alphabet, for some ε > 0.
* There exists a 2^O(m^2-ε)poly(n) algorithm for Discrepancy Minimization, for some ε > 0.
* There exists a 2^O(m^2-ε)poly(n) algorithm for Set Multi-Cover, for some ε > 0.
* There exists a 2^O(m^2-ε)poly(n) algorithm for Set Multi-Packing, for some ε > 0.
Note that <ref> is the negation of <Ref>.
All problems in <Ref> are easily transformed into the
first problem, i.e., Integer Linear Programming with Δ =
O(1), while maintaining the same value of m. Hence, the more interesting aspect of the theorem is that all these problems are as expressive as the first one.
<Ref> considers Integer Linear Programming with relatively small entries, i.e.,
Δ = O(1). One can also ask the question of whether there is any parameter regime for Δ for
which the state-of-the-art can be improved. In this spirit, a stronger variant of the conjecture is the following.
[Strong ILP Hypothesis]
For every ε > 0 and δ≥ 0, there is no 2^O(m^δ+2-ε)poly(n) time algorithm for Integer Linear Programming with Δ = 2^m^δ.
Note that <ref> is a special case of <ref> for δ = 0.
Another interesting regime is the complexity of Integer Linear Programming with Δ = 2^m,
because of a connection to block-structured integer programming, which we elaborate on later.
There, the state-of-the-art algorithm requires time m^O(m^3)poly(n).
Integer Linear Programming with large entries can be reduced to an equivalent instance with
a 0/1 matrix as seen in
the following theorem, but the reduction is not strong enough to show equivalence between the two
hypotheses.
There is a polynomial time algorithm that transforms an instance of
Integer Linear Programming with Δ > 1 into an equivalent one with
A' ∈{0,1}^m'× n' for m' = O(mlogΔ) and n'≤m'^O(m').
This implies that if there is an algorithm with running time 2^O(m^1.5-ε)poly(n) for Integer Linear Programming with A ∈{0,1}^m× n, then there is a 2^O(m^3-ε')poly(n) time algorithm for Integer Linear Programming with Δ = 2^m.
One might hope to improve the theorem to m' = O(m √(logΔ)), since then a 2^O(m^2-ε)poly(n) time algorithm for 0/1 matrices would imply a 2^O(m^3-ε')poly(n) time
algorithm for Δ = 2^m. However, such a reduction would imply the strong result
that under ETH is equivalent to <Ref>. This is because
under ETH the Subset Sum problem, i.e., the case when m = 1, cannot be
solved in Δ^o(1)poly(n) time <cit.> and the hypothetical reduction would be able to encode
an instance of Subset Sum into an ILP with m = O(√(Δ)).
We are not aware of any meaningful reduction in the other direction, i.e., from large m and small Δ to smaller m and larger Δ. It is possible to aggregate m constraints into a single one
with entries bounded by Δ' = Δ^O(m^2), but this reduction seems useless since the resulting
parameter range requires poly(Δ') · poly(n) time due to the ETH lower bound mentioned above.
Assuming <ref>, we derive a tight lower bound for a form of block-structured integer linear programs that has
been studied extensively in recent literature.
For simplicity, we consider here the basic setting with m× m submatrices.
n-Fold Integer Linear Programming
Square matrices A_1,…,A_n,B_1,…,B_n∈{-Δ,…,Δ}^m× m, right-hand sides b^(0),…,b^(n)∈ℤ^m.
Find x^(1),…,x^(n)∈ℤ_≥ 0^m with
A_1 x^(1) + … + A_n x^(n) = b^(0)
B_1 x^(1) = b^(1)
⋮
B_n x^(n) = b^(n)
For every δ > 0,
there is no algorithm with running time 2^O(m^3-δ)poly(n)
for n-Fold Integer Linear Programming when the maximum
absolute entry is bounded by Δ = O(1), unless <ref> is false.
This matches the best algorithms known for the problem, see <cit.> and references therein. The reduction follows the same idea as used in <cit.>,
where the authors show a non-tight quadratic lower bound for the exponent based on ETH.
Our lower bound is stronger simply because the conjecture we base it on is
stronger.
§.§ Tightness of more general problems
There are a number of other, more general problems to the ones mentioned above,
for which known algorithms would be tight assuming that one cannot improve the running
time for the class of problems in <Ref>.
However, for these, we do not know if they are all equivalent.
The algorithm by Eisenbrand and Weismantel <cit.> also works for the
optimization version of ILP, i.e.,
max c x, A x = b, ℓ≤ x ≤ u, x∈ℤ^n .
This leads to the same running time of (mΔ)^O(m^2)poly(n) except for a slightly higher constant in the exponent. Notably, the coefficients of c do not increase the number
of arithmetic operations.
Given a matrix A∈{-Δ,…,Δ}^m× n and
a convex function g : ℝ^m →ℝ∪{∞}, Dadush,
Léonard, Rohwedder, and Verschae <cit.> have shown that one can find the minimizer of
g(Ax), x ∈{0, 1}^n in time (mΔ)^O(m^2)poly(n) (assuming polynomial time to evaluate g).
This problem is a generalization of Binary Integer Linear Programming, since one can take g(b') = 0 if b' = b and g(b') = ∞ otherwise.
Given a linear matroid M = (E, ℐ), where |E| = n,
a matrix A∈{-Δ,…,Δ}^n× m
and a right-hand side b∈ℤ^m, Eisenbrand, Rohwedder, and Węgrzycki <cit.>
gave an algorithm that finds a basis B of M such that A x_B = b in time
O(mΔ)^O(m^2)poly(n), where
x_B is the incidence vector of B. This generalizes Binary Integer Linear Programming, since one can take E as the set of binary variables and n additional dummy variables
and then take M as the uniform matroid of rank n.
Finally, Closest String (with arbitrary alphabet) can still be solved in time
m^O(m^2)poly(n), for example by casting it as the previous matroid problem <cit.> or as a block-structured ILP, see <cit.>.
§.§ Algorithm for few distinct variables
We show that the running time of 2^Õ(m^2)poly(n) for Integer Linear Programming with Δ = O(1),
i.e., the subject of <Ref>, can be improved if the number of distinct variables is low.
For the sake of generality, we state the algorithmic result for the optimization variant and any range of Δ.
Consider the integer programming problem
max c x
A x = b
ℓ_i ≤ x_i ≤ u_i i=1,…,n
x ∈ℤ^n .
Let Δ be an upper bound on the absolute value of entries of A.
The problem (<ref>) can be solved in time
n^m+1· O(mΔ)^m ·log(‖ u - ℓ‖_∞) .
Using standard reductions, see e.g. <Ref>, one may reduce u - ℓ_∞ to
(mΔ)^O(m), making the logarithmic factor insignificant.
We will now interpret this running time and point out interesting special cases.
*Binary ILP with A∈{0,1}^m× n.
Here, the running time above implies an n'^O(m) + O(nm) time algorithm, where n' is the number of distinct variables, i.e., variables that differ either in their entry of c or in their column of A:
one can merge identical variables in time 2^O(m) + O(nm), after which ‖ u ‖_∞≤ n.
Furthermore, without loss of generality, the rows of A are linearly independent, thus m≤ n'.
Hence, the overall running time is
2^O(m) + O(nm) + n'^m+1· O(m)^m ·log n ≤n'^O(m)·log n + O(nm)
≤n'^O(m) + O(nm).
Here, the last inequality follows from a case distinction whether log n is greater than n'^O(m) or not.
Note that in the setting without objective function we have n'≤ 2^m.
Thus, this running time is at least as good as the known 2^Õ(m^2)poly(n) time algorithms.
It also dominates the running time of n^O(m) one would get by simple dynamic programming over the right-hand
sides b' that are feasible (for which there can be at most n^m).
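The following Python sketch is one way to implement this simple dynamic program for the special case A ∈ {0,1}^m× n; the function name and the toy instance are purely illustrative and not taken from the paper.

def binary_ilp_feasible(A, b):
    # Decide Ax = b, x in {0,1}^n, by dynamic programming over the right-hand
    # sides reachable with a prefix of the columns (at most (n+1)^m of them,
    # since every coordinate of a reachable vector lies in {0, ..., n}).
    m, n = len(A), len(A[0])
    reachable = {tuple([0] * m)}
    for i in range(n):
        col = tuple(A[r][i] for r in range(m))
        new = set(reachable)                                 # option x_i = 0
        for r in reachable:                                  # option x_i = 1
            cand = tuple(r[k] + col[k] for k in range(m))
            if all(cand[k] <= b[k] for k in range(m)):       # safe pruning for 0/1 entries
                new.add(cand)
        reachable = new
    return tuple(b) in reachable

# toy instance: x = (1, 0, 1, 0) certifies feasibility
A = [[1, 0, 1, 1],
     [0, 1, 1, 0]]
print(binary_ilp_feasible(A, [2, 1]))   # True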
*Binary ILP with A∈{0,1}^m× n and a constant number of 1s in each column.
Here, the number of distinct columns is polynomial in m and the previous case implies a running time
of m^O(m) + O(nm), meaning that <Ref> does not extend to this special case.
§ REDUCTIONS
The basic idea of all reductions for <Ref> is that we transform one problem with parameter m
into another problem
with parameter m' = O(m ·log^O(1) m). Additionally, the size of the instance may only increase from n to 2^O(m^2-δ)poly(n) for some δ > 0.
The concrete reductions we prove can be seen in <Ref>.
§.§ Closest String
Let d be the bound on the maximum hamming distance in the decision version
of Closest String with Binary Alphabet. For a string s∈Σ^n we denote by s[i] the ith character. For i ≤ j we
write s[i… j] for the substring from the ith to the jth character.
<Ref>, Statement <ref> implies Statement <ref>
The following is an ILP model for Closest String with Binary Alphabet.
∑_i=1^n x_i ·1_{s_j[i] = 0} + (1 - x_i) ·1_{s_j[i] = 1} ≤ d for all j∈{1,2,…,m}
x ∈{0, 1}^n
One may add slack variables to turn the inequalities into equalities.
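As a concrete illustration of this model, the following Python sketch generates the constraint rows for a given set of binary strings; the helper name is ours and the slack variables are left implicit.

def closest_string_rows(strings):
    # One row per input string s_j: the Hamming distance between t = x and s_j
    # equals coeff . x + const, so the ILP constraint reads coeff . x + const <= d.
    n = len(strings[0])
    rows = []
    for s in strings:
        coeff = [1 if ch == '0' else -1 for ch in s]   # x_i counted where s_j[i] = 0
        const = s.count('1')                           # the (1 - x_i) terms contribute 1 each
        rows.append((coeff, const))
    return rows

for coeff, const in closest_string_rows(["0101", "1100"]):
    print(coeff, "+", const, "<= d")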
<Ref>, Statement <ref> implies Statement <ref>
We want to transform the following ILP
A x = b
x ∈{0, 1}^n
where A∈{0, 1}^m× n, into an equivalent instance of Closest String.
We first rewrite (<ref>) into a more convenient form.
One can in polynomial time construct a matrix C∈{-1,1}^(2m+2) × 2n and some c∈ℤ^2m+2 such that (<ref>) is feasible if and only if there
is a solution to
C x ≤ c
x ∈{0, 1}^2n.
Furthermore, every feasible x for (<ref>) has x_1 + ⋯ + x_2n = n.
This follows from simple transformations. We defer the proof until later and first
show how the reduction follows from it.
By comparing to the ILP formulation of Closest String we observe that (<ref>) corresponds to a “non-uniform” variant of
Closest String. It can be reformulated as: given strings s_1,…,s_2m+2∈{0,1}^2n and bounds d_1,…,d_2m+2∈ℤ, find a string t ∈{0,1}^2n such that for each j = 1,…,2m+2 we have d(t, s_j) ≤ d_j. This follows from the ILP model given in the proof of <Ref>.
Furthermore, we have the guarantee that any solution has exactly n ones.
To transform this into a regular instance, we add two more strings r_1, r_2 and
to each string s_j we add 4n more characters, which makes a total of 6n characters per string.
The strings of this instance s'_1,…,s'_2m+2,r_1,r_2 are defined as
r_1 = ( 0, …, 0 ) (6n zeros),
r_2 = ( 1, …, 1, 0, …, 0 ) (4n ones followed by 2n zeros),
s_j' = ( s_j, 1, …, 1, 0, …, 0, 1, …, 1, 0, …, 0 ) (the 2n characters of s_j, followed by n ones, n zeros, 2n - d_j ones, and d_j zeros).
Here, we assume without loss of generality that d_j∈{0,…,2n}.
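The padding step of this construction can be written down directly; the following Python sketch builds r_1, r_2 and the padded strings s'_j from a non-uniform instance (function and variable names are illustrative).

def pad_instance(strings, dists):
    # strings s_j over {0,1}^(2n) and bounds d_j; returns the 6n-character strings
    # s'_j together with r_1 and r_2 as described above
    n = len(strings[0]) // 2
    r1 = "0" * (6 * n)
    r2 = "1" * (4 * n) + "0" * (2 * n)
    padded = [s + "1" * n + "0" * n + "1" * (2 * n - d) + "0" * d
              for s, d in zip(strings, dists)]
    return padded, r1, r2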
We claim that there is a solution to this instance with maximum Hamming distance 2n if and only if there is a solution to the non-uniform instance.
If there is a string t'∈{0,1}^6n with distance at most 2n to r_1, r_2, and
s'_j, j=1,2,…,2m+2, then there is also a string t∈{0, 1}^2n with
distance at most d_j to s_j, j=1,2,…,2m+2.
If there is a string t∈{0, 1}^2n with
distance at most d_j to s_j, j=1,2,…,2m+2, then there is also a string
t'∈{0,1}^6n with distance at most 2n to r_1, r_2, and
s'_j, j=1,2,…,2m+2.
From these claims, the lemma follows immediately.
We add variables x̅_1,…,x̅_n
and force a solution to take exactly n many ones
A x = b
x_1 + ⋯ + x_n + x̅_1 + ⋯ + x̅_n = n
x, x̅ ∈{0, 1}^n.
Next, we change the equality constraints into inequalities and A into a -1,1 matrix
A' x - 1x̅ ≤ b'
-A' x + 1x̅ ≤ -b'
x_1 + ⋯ + x_n + x̅_1 + ⋯ + x̅_n ≤ n
-x_1 - ⋯ - x_n - x̅_1 - ⋯ - x̅_n ≤ - n
x,x̅ ∈{0, 1}^n
where A' = 2A - 1 with 1 being the all-ones m× n matrix
and b' = 2b - (n, …, n).
Suppose there is a string t'∈{0,1}^6n with distance at most 2n to each of the strings s'_1,…,s'_2m+2,r_1,r_2.
Because d(r_1,r_2) = 4n, string t' must match r_1 and r_2 on characters where r_1 and
r_2 match. More precisely,
t'[i] = 0 for i ∈{4n+1,…,6n}.
Formally, this follows because of
4n ≥ d(t', r_1) + d(t', r_2)
= d(t'[1… 4n], r_1[1… 4n]) + d(t'[1… 4n], r_2[1… 4n])
+ 2 d(t'[4n+1… 6n], (0,…,0))
≥ d(r_1[1… 4n], r_2[1… 4n]) + 2 d(t'[4n+1… 6n], (0,…,0))
≥ 4n + 2 d(t'[4n+1… 6n], (0,…,0)) .
Let us now analyze the distance between t := t'[1… 2n] and s_j for some j.
Since the last 2n characters of t' are zero, we have d(t'[4n+1… 6n], s'_j[4n+1… 6n]) = 2n - d_j. Thus,
d(t, s_j) ≤ d(t', s'_j) - d(t'[4n+1… 6n], s'_j[4n+1… 6n])
≤ 2n - (2n - d_j) = d_j .
Thus, string t is a solution for the non-uniform instance.
Let t ∈{0, 1}^2n with d(t, s_j) ≤ d_j for all j.
We extend it to a string t'∈{0,1}^6n by setting t'[1… 2n] = t,
t'[2n+1… 3n] = (1,…,1), and
t'[3n+1… 6n] = (0,…,0).
Let us now verify that t' has a distance at most 2n to each string.
For r_1, r_2 note that t[1… 2n] has exactly n ones by guarantee of the
non-uniform instance. Thus, the distance to r_1 and r_2 is exactly 2n.
Consider some j∈{1,…,2m+2}. Then
d(s'_j, t) = d(s_j, t[1… 2n]) + 2n - d_j ≤ 2n .
§.§ Discrepancy Minimization
<Ref>, Statement <ref> implies Statement <ref>
Let d be a bound on the objective in the decision version of Discrepancy Minimization.
Let A be the incidence matrix of the given set system.
Then the colorings of discrepancy at most d are exactly the solutions of
[ A; -A ] y
≤[ d; d; ⋮; d ]
y ∈{-1, 1}^n.
This can be equivalently formulated as
[ 2A; -2A ] x
≤[ A; -A ][ 1; 1; ⋮; 1 ] + [ d; d; ⋮; d ]
x ∈{0, 1}^n.
where x_i = (1 + y_i) / 2. One may translate the inequalities into equalities by
introducing slack variables.
Therefore, an algorithm for Integer Linear Programming can be used to solve Discrepancy Minimization.
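For concreteness, the following Python sketch writes the resulting 0/1 program row by row, using the substitution χ(u) = 2x_u - 1; the representation as (coefficients, right-hand side) pairs is our own choice and not part of the reduction.

def discrepancy_rows(sets, n, d):
    # For each set S_i, |sum_{u in S_i} (2*x_u - 1)| <= d becomes
    #   2 * sum_{u in S_i} x_u <= |S_i| + d   and   -2 * sum_{u in S_i} x_u <= d - |S_i|.
    rows = []
    for S in sets:
        a = [2 if u in S else 0 for u in range(n)]
        rows.append((a, len(S) + d))
        rows.append(([-v for v in a], d - len(S)))
    return rows

for coeff, rhs in discrepancy_rows([{0, 1}, {1, 2}], n=3, d=0):
    print(coeff, ". x <=", rhs)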
<Ref>, Statement <ref> implies Statement <ref>.
Consider an ILP of the form
A x = b
x ∈{0, 1}^n
for A∈{0, 1}^m× n and n≤ m^O(m).
We will construct an instance of Discrepancy Minimization which has discrepancy zero if and only if the ILP has a feasible solution.
Towards this, we first reformulate the ILP above as
A y = b'
y ∈{-1, 1}^n
where b' = 2b - A (1,…,1).
Note that x is feasible for (<ref>) if and only if y = 2x - (1, …, 1) is feasible for (<ref>). Also, if b' = (0, …, 0), then (<ref>) is already equivalent to an instance of Discrepancy Minimization that tests
for discrepancy zero. To handle the general case, we transform it into
an equivalent system with right-hand side (0,…,0).
We first construct a gadget of elements that have the same color.
For any k∈ℕ we can construct a pair of matrices B, B̅∈{0, 1}^(2k-1) × 2^k such that there are exactly two solutions to
B z + B̅z̅ = [ 0; ⋮; 0 ]
z,z̅ ∈{-1, 1}^2^k
namely
z = (1,…,1), z̅ = (-1,…,-1) and z = (-1,…,-1), z̅ = (1,…,1).
We will defer the proof to the end of the section.
Using this gadget with k = ⌈log_2 n ⌉ = O(mlog m) we now
replace each coefficient b'_j in the previous system by the variables from the gadget.
Note that (<ref>) is infeasible if ‖ b' ‖_∞ > n.
Thus assume without loss of generality that ‖ b' ‖_∞≤ n ≤
2^k.
Let C, C̅∈{0, 1}^m × 2^k be defined as follows.
The jth row of C has b'_j many ones at arbitrary positions if b'_j ≥ 0 and is all zero otherwise; contrary, the jth row of C̅ has -b'_j many ones at arbitrary positions if b'_j < 0 and is all zero otherwise.
Now consider the system
A y + C z + C̅z̅ = [ 0; ⋮; 0 ]
B z + B̅z̅ = [ 0; ⋮; 0 ]
y ∈{-1, 1}^n
z,z̅ ∈{-1, 1}^2^k.
We claim that (<ref>) has a solution if and only if there is a solution to (<ref>). Let y, z, z̅ be a solution to the former. Notice that the negation of a solution is also feasible.
Due to <ref> we may assume without loss of generality that z = (-1,…,-1) and z̅ = (1, …, 1), negating the solution if necessary.
It follows that
C z + C̅z̅ = -b' .
Thus, Ay = b', which concludes the first direction.
For the other direction, assume that there is a solution y to (<ref>).
We set z = (-1,…,-1), z̅ = (1,…,1), which by <Ref>
satisfies B z + B̅z̅ = (0, …, 0). As before we have that C z + C̅z̅ = -b'. Thus, y, z, z̅ is a solution to (<ref>).
This establishes the equivalence of the initial ILP instance to (<ref>),
which corresponds to an instance of Discrepancy Minimization where we test for discrepancy
zero with m' = O(mlog m) sets.
The existence of such a matrix can be proven by induction: for k = 1, we simply take B = B̅ = (1).
Now suppose that we already have a pair of matrices B, B̅∈{0,1}^(2k-1) × 2^k as above.
Then we set
B' = [ B 0; 1 ⋯ 1 0 ⋯ 0; 0 ⋯ 0 1 ⋯ 1 ] and B̅' = [ B̅ 0; 0 ⋯ 0 1 ⋯ 1; 1 ⋯ 1 0 ⋯ 0 ]∈{0,1}^(2k+1) × 2 · 2^k
.
It can easily be checked that choosing either z = z' = (1, …, 1), z̅ = z̅' = (-1,…,-1) or
z = z' = (-1, …, -1), z̅ = z̅' = (1,…,1)
satisfies
B' [ z; z' ] + B̅' [ z̅; z̅' ] = [ 0; ⋮; 0 ] .
Now take any z,z',z̅, z̅'∈{-1,1}^2^k+1 that satisfy (<ref>).
Then we have that B z + B̅z̅ = (0,…,0). Hence by induction hypothesis
either z = (1,…,1), z̅ = (-1,…,-1) or z = (-1,…,-1), z̅ = (1,…,1). Assume for now the first case holds. Then because of the second-to-last rows of B, B̅ we
have
2^k + ∑_i=1^2^kz̅'_i = ∑_i=1^2^k z_i + z̅'_i = 0 .
Hence, z̅' = (-1,…,-1) = z̅. Similarly, the last row of B, B̅ implies that z' = (1,…,1) = z.
Analogously, if z = (-1,…,-1), z̅ = (1,…,1) then z' = z and z̅' = z̅.
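The recursion in this proof is easy to implement and to check by brute force for small k; the following Python sketch starts from a 1×1 base case B = B̅ = (1) and verifies that exactly the two constant sign patterns solve the system (sizes, names, and the choice of base case are illustrative).

from itertools import product

def gadget(k):
    # the recursive construction of the pair (B, Bbar) from the proof above
    B, Bbar = [[1]], [[1]]
    for _ in range(k - 1):
        w = len(B[0])
        B = [row + [0] * w for row in B] + [[1] * w + [0] * w, [0] * w + [1] * w]
        Bbar = [row + [0] * w for row in Bbar] + [[0] * w + [1] * w, [1] * w + [0] * w]
    return B, Bbar

def solutions(B, Bbar):
    # all (z, zbar) over {-1,1} with B z + Bbar zbar = 0 (brute force, small sizes only)
    w = len(B[0])
    return [(z, zbar)
            for z in product((-1, 1), repeat=w)
            for zbar in product((-1, 1), repeat=w)
            if all(sum(B[r][i] * z[i] + Bbar[r][i] * zbar[i] for i in range(w)) == 0
                   for r in range(len(B)))]

B, Bbar = gadget(3)
print(solutions(B, Bbar))   # only the all-ones / all-minus-ones pair and its negation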
§.§ Set Multi-Cover
<Ref>, Statement <ref> implies Statement <ref>.
An instance of the decision version of Set Multi-Cover with a bound
d on the cardinality can be formulated as
x_1 + ⋯ + x_n ≤ d
A x ≥[ b; ⋮; b ]
x ∈{0, 1}^n
where A∈{0, 1}^m× n is the incidence matrix of the given set system.
This can easily be translated to the form of (<ref>) by introducing slack variables.
Notice that this ILP has m + 1 constraints. Thus, a faster algorithm for ILP
would imply a faster algorithm for Set Multi-Cover.
In the remainder, we show that the converse is also true.
<Ref>, Statement <ref> implies Statement <ref>.
First, we reduce to
a “non-uniform” version of Set Multi-Cover where each element v has a different
demand b_v.
Let A∈{0,1}^m× n and b∈ℤ_≥ 0^m and consider the solutions to
Ax = b
x ∈{0, 1}^n.
First, we add n additional binary variables x̅ and the requirement that exactly n variables are equal to one, i.e.,
Ax = b
x_1 + ⋯ + x_n + x̅_1 + ⋯ + x̅_n = n
x ∈{0, 1}^n
x̅ ∈{0, 1}^n.
This system is equivalent to the previous one, by setting n - x_1 - ⋯ - x_n arbitrary variables x̅_i to 1. Next, we transform the equality constraints Ax = b into inequalities:
Ax ≥ b
( 1 - A) x + 1x̅ ≥[ n; ⋮; n ] - b
x_1 + ⋯ + x_n + x̅_1 + ⋯ + x̅_n = n
x ∈{0, 1}^n
x̅ ∈{0, 1}^n.
Here, 1 denotes the m× n all-ones matrix.
Note that the second constraint is equivalent to Ax ≤ b, since we fixed the number of ones in
the solution. This ILP is feasible if and only if the optimum of the following
ILP is n.
min x_1 + ⋯ + x_n + x̅_1 + ⋯ + x̅_n
Ax ≥ b
( 1 - A) x + 1x̅ ≥[ n; ⋮; n ] - b
x_1 + ⋯ + x_n + x̅_1 + ⋯ + x̅_n ≥ n
x ∈{0, 1}^n
x̅ ∈{0, 1}^n.
This ILP corresponds to an instance of non-uniform Set Multi-Cover with 2m+1 elements.
To reduce a non-uniform instance S_1,…,S_n, (b_v)_v∈ U, to a uniform instance of Set Multi-Cover
we proceed as follows: add one new element and n many new sets to the instance. The coverage requirement of each element is n. The new element is contained in each of the new sets and in none of the old ones. Thus, each new set has to be taken. Furthermore, we add each old element v to n - b_v many arbitrary new sets.
§.§ Set Multi-Packing
<Ref>, Statements <ref> and <ref> are equivalent.
Notice the following duality between Set Multi-Cover and Set Multi-Packing.
Let U = {1,2,…,m}, S_1,S_2,…,S_n, and b∈ℕ be an instance of Set Multi-Cover. Now consider the instance of Set Multi-Packing with universe U, set system S̅_1 = U∖ S_1, S̅_2 = U∖ S_2, …, S̅_n = U∖ S_n, and bounds b̅ = n - b. This is a bipartition between instances of Set Multi-Cover and Set Multi-Packing, i.e., it can be performed in both ways.
For one pair of such instances, a solution I for Set Multi-Cover is feasible if and only if I̅ = {1,2,…,n}∖ I is feasible for Set Multi-Packing. Thus, if the optimum of Set Multi-Cover is k, then the optimum of Set Multi-Packing is n - k.
§.§ Integer Linear Programming
In this section, we prove <Ref>, i.e., we show how to
reduce an ILP with large coefficients into a (larger) ILP with only 0/1
coefficients. Furthermore, we show how to reduce an ILP with arbitrary
upper and lower bounds into one with at most (mΔ)^O(m) binary
variables. Note that this implies that Statements <ref>
and <ref> are equivalent and concludes the proof
of <ref>.
§.§.§ From large coefficients to zero-one
We will transform an ILP of the form
Ax = b
ℓ≤ x ≤ u
x ∈ℤ^n
where A∈{-Δ,…,Δ}^m× n into an equivalent
one with A'∈{0, 1}^m'× n'
for m' = O(mlogΔ).
Let k = ⌈log_2(Δ) ⌉.
For the jth row of A we introduce 4 (k + 1)
rows in A', denoted by r^+_0(j), r'^+_0(j), …, r^+_k(j), r'^+_k(j), r^-_0(j), r'^-_0(j), …, r^-_k(j), r'^-_k(j).
Intuitively, the rows r^+_i(j), r'^+_i(j) are symmetric rows that each stand for a value of 2^i in row j. Similarly, r^-_i(j), r'^-_i(j) stand for a value of -2^i in row j.
The right-hand sides of the new ILP are b_j for row r^+_0(j) and zero for all other rows affiliated with j.
For each column A_i we derive a column of A' as follows: for row j
we consider the binary encoding of A_ji and have the column in A' use this
binary encoding in r^+_0(j), r^+_1(j), …, r^+_k(j) if A_ji≥ 0 and in
r^-_0(j), r^-_1(j), …, r^-_k(j) if A_ji < 0. All other entries of this column are zero.
We now add auxiliary variables to shift the values from one power to another.
For each row j and each i=0,…,k-1 we add:
* a variable with a one in row r^+_i(j), r'^+_i(j) and r^-_i+1(j) and
* a variable with a one in row r^-_i(j), r'^-_i(j) and r^+_i+1(j).
Furthermore, for each row j and each i = 0,…,k we add:
* a variable with a one in row r'^+_i(j) and r^-_i(j),
* a variable with a one in row r'^-_i(j) and r^+_i(j), and
* a variable with a one in row r^-_i(j) and r^+_i(j).
Note that each auxiliary variable does not change the total value for row j,
i.e., the sum of 2^i times rows r^+_i(j) and r'^+_i(j) minus 2^i times r^-_i(j) and r'^-_i(j) over all i.
Thus, any solution to the new ILP must form a solution to the original ILP.
The converse is also true, since any value for one of the rows of j can always be shifted to r^+_0(j) via the auxiliary variables.
§.§.§ From bounded to binary variables
Consider an ILP of the form
Ax = b
ℓ≤ x ≤ u
x ∈ℤ^n
where A∈{-Δ,…,Δ}^m× n.
We will first transform this into an equivalent ILP of the form
A'x = b'
x ∈{0, 1}^n'
where A' ∈{-Δ,…, Δ}^m × n' for n' ≤ (mΔ)^O(m).
Assume without loss of generality that each column of A is different.
Otherwise, we can merge identical columns together by adding the respective upper and lower
bounds.
This implies that n≤ (2Δ+1)^m.
Next, we compute a vertex solution x^* to the continuous relaxation {Ax = b, ℓ≤ x ≤ u, x∈ℝ^n}.
The proximity bound by Eisenbrand and Weismantel <cit.> shows that there is an integer solution z with
‖ z - x^* ‖_1 ≤ m (2mΔ+1)^m if any integer solution exists.
Thus, choosing ℓ'_i = max{ℓ_i, ⌈ x^*_i ⌉ - (2Δ+1)^m}
and u'_i = min{u_i, ⌊ x^*_i ⌋ + (2Δ+1)^m} for i=1,2,…,n,
we can replace ℓ, u by ℓ', u' without affecting feasibility.
By shifting the solution, we can reduce the lower bound to zero and
we arrive at the following ILP that is equivalent to the original one.
Ax = b - Aℓ'
x_i ∈{0,1,…, u'_i - ℓ'_i} for all i=1,2,…,n
Note that u'_i - ℓ'_i ≤ 2m (2mΔ+1)^m. Thus replacing each variable by u'_i - ℓ'_i binary variables we arrive at an ILP with n' ≤ 2m (2mΔ+1)^m · n ≤ 2m (2mΔ+1)^2m binary variables.
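A minimal Python sketch of this final splitting step (after the bounds have been tightened to ℓ', u') could look as follows; it simply shifts out the lower bounds and copies each column u'_i - ℓ'_i times.

def to_binary(A, b, lower, upper):
    # shift x to x - lower and replace every bounded variable by
    # (upper[i] - lower[i]) identical binary variables
    m, n = len(A), len(A[0])
    b_shift = [b[r] - sum(A[r][i] * lower[i] for i in range(n)) for r in range(m)]
    cols = []
    for i in range(n):
        col = [A[r][i] for r in range(m)]
        cols.extend([col] * (upper[i] - lower[i]))
    A_bin = [[c[r] for c in cols] for r in range(m)]
    return A_bin, b_shift

A_bin, b_bin = to_binary([[1, 2], [0, 1]], b=[5, 2], lower=[1, 0], upper=[3, 2])
print(len(A_bin[0]), b_bin)   # 4 binary variables, shifted right-hand side [4, 2]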
§.§ n-Fold Integer Linear Programming
In this section, we prove <ref>. Consider the ILP
Ax = b
x ∈{0, 1}^n
with A∈{-2^m,…,2^m}^m× n.
We will show that this can be formulated as an equivalent n-Fold Integer Linear Program
with parameter m and Δ = O(1).
The reduction follows a similar idea as one in <cit.>,
which derives a lower bound based on Subset Sum.
Note that if (<ref>) had arbitrary variables (not necessarily binary)
then we can use the reduction of the previous section to transform it into a binary one.
For m ≥ 3, define
B = [ 1 0 0 ⋯ 0 1; 0 0 0 ⋯ 0 0; 2 -1 0 ⋯ 0 0; 0 2 -1 ⋯ 0 0; ⋱ ⋱ ; 0 ⋯ 0 2 -1 0 ]∈{-1,0,1,2}^m× m.
This matrix has the property that the system Bx = (1,0,…,0), x∈ℤ_≥ 0^m has exactly two solutions, namely x = (0,…,0,1) and x = (1,2^1,2^2,…,2^m-2,0).
Our n-Fold Integer Linear Program has one block A'_i, B'_i for each column A_i of A.
Matrix B'_i is defined as B above with right-hand side b'^(i) = (1,0,…,0).
Matrix A'_i is derived from A_i as follows: consider a coefficient A_ji.
We rewrite
A_ji = λ_0 · 2^0 + λ_1 · 2^1 + ⋯ + λ_m-2· 2^m-2 with λ∈{-2,…,2}^m-1 .
Such a λ exists since |A_ji| ≤ 2^m.
Then we set the jth row of A'_i as (λ, 0). Let x^(i) be the variables corresponding to block A'_i, B'_i. By choice of B'_i, b'^(i) there are exactly two settings of x^(i) that satisfy B'_i x^(i) = b'^(i), namely x^(i) = (0,…,0,1) and x^(i) = (2^0,2^1,…,2^m-2,0). Thus
A'_i x^(i)∈{(0,…,0), A_i}.
Hence, the n-Fold Integer Linear Program with b^(0) = b is equivalent to (<ref>).
§ ALGORITHM FOR FEW DISTINCT VARIABLES
In this section, we prove <ref>. We assume without loss of generality that in (<ref>)
we have ℓ = (0, 0,…, 0). Note that otherwise, one may obtain an equivalent ILP with
ℓ = (0, 0,…, 0), u' = u - ℓ, and b' = b - Aℓ.
We first decompose each u_i into a sum of powers of two with the properties
described in the following lemma. Such a construction is well-known in
literature, see
e.g. <cit.>.
For every integer k ∈ℕ and h ≥⌊logk⌋,
there exist integers c_0(k),…,c_h(k) ∈{0,1,2} such that
{ ∑_i=0^h 2^i x_i : 0 ≤x_i ≤c_i(k) for
all 0 ≤i ≤h
} = {0,1,…,k}
.
We call such a set c_0(k),…,c_h(k) a cover of k. This set
can be found in polynomial time in h.
For n ∈ℕ, let bit_i(n) ∈{0,1} denote the ith bit of the binary
representation of n. Let B := 2^h - 1 be the bottom bits, and let T
= k - B represent the top bits of k.
Now, consider the sets
S_B := {∑_i=0^h-1 2^i x_i : 0≤ x_i ≤ bit_i(B) for all 0≤ i ≤ h-1 } and
S_T :=
{∑_i=0^h 2^i x_i : 0≤ x_i ≤ bit_i(T) for all 0≤ i ≤ h }.
We will set c_i(k) := bit_i(B) + bit_i(T) ∈{0,1,2}. Showing that these numbers are a cover of k is equivalent to
S_B ⊕S_T := { b + t |b ∈S_B, t ∈S_T} = {0,1,…,k}.
First, observe that S_B = {0,1,…,2^h - 1}. Moreover, there is
no gap of length more than 2^h in the set S_T, that is, for every nonzero a ∈ S_T,
there exists a smaller b ∈ S_T such that a-b ≤ 2^h. This b can be
constructed by flipping an arbitrary bit of a to 0. Therefore, S_B ⊕ S_T consists of
consecutive numbers. The smallest number in S_B ⊕ S_T is clearly 0,
and the largest is B+T = k.
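The construction from this proof is straightforward to implement; the following Python sketch computes the digits c_0(k),…,c_h(k) (using h_0 = ⌊log_2 k⌋ internally and padding with zeros up to position h) and verifies the covering property by brute force on a small example.

from itertools import product

def cover(k, h):
    # digits c_0,...,c_h in {0,1,2} with {sum_i 2^i x_i : 0 <= x_i <= c_i} = {0,...,k};
    # B = 2^h0 - 1 are the bottom bits, T = k - B the top bits, c_i = bit_i(B) + bit_i(T)
    h0 = k.bit_length() - 1                  # floor(log2 k), assuming k >= 1
    B, T = 2 ** h0 - 1, k - (2 ** h0 - 1)
    c = [((B >> i) & 1) + ((T >> i) & 1) for i in range(h0 + 1)]
    return c + [0] * (h - h0)

def is_cover(k, c):
    vals = {sum(2 ** i * x for i, x in enumerate(xs))
            for xs in product(*[range(ci + 1) for ci in c])}
    return vals == set(range(k + 1))

c = cover(13, h=4)
print(c, is_cover(13, c))    # [1, 2, 2, 0, 0] True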
We set h = ⌊log_2(‖ u ‖_∞) ⌋. Using the lemma above, for each i, we decompose
u_i = c_0(u_i) · 2^0 + c_1(u_i) · 2^1 + ⋯ + c_h(u_i) · 2^h with its cover.
We can now naturally rewrite (<ref>) as
max∑_i=0^h 2^i c z^(i)
∑_i=0^h 2^i Az^(i) = b
0 ≤ z^(k)_i ≤ c_k(u_i) k ∈{0,…,h}
z^(0),…,z^(h) ∈ℤ^n .
Each z^(j) produces a partial solution for some unknown right-hand side
b^(j) = 2^j A z^(j). We write b^(≤ j) = 2^0 Az^(0) + 2^1 Az^(1) + ⋯ + 2^j Az^(j).
Our approach is now to compute, for j=0,1,…,h and for every potential
value of b^(≤ j), a solution z^(0), z^(1),…, z^(j). To do
this, we first reduce the search space of relevant vectors b^(≤ j).
Consider A ∈ℤ^m× n and b ∈ℤ^m with
A_∞≤Δ.
Suppose that y = y^(0)· 2^0 + y^(1)· 2^1 + ⋯ +
y^(h)· 2^h satisfies A y = b and 0 ≤ y^(j)≤ u^(j)
with u^(j)∈{0,1,2}^n for each j ∈{0,…,h}. Then for b^(≤ j) = 2^0 Ay^(0) + 2^1 Ay^(1) + ⋯ + 2^j Ay^(j), we have
b^(≤ j)≡ b (mod 2^j+1)
and furthermore
b^(≤ j)∈{- 2^j+2 mnΔ, …, 2^j+2 mnΔ}^m .
The first property follows from the observation that
b^(>j) := b - b^(≤ j) is a vector with each component being a multiple of 2^j+1.
Hence, b^(≤ j)≡ b - b^(>j)≡ b (mod 2^j+1).
For the second property, observe that ‖ y^(h)‖_1 ≤‖
u^(h)‖_1 ≤ 2n for each h. Therefore,
‖ b^(≤ j)‖_∞≤∑_h=0^j 2^h+1nΔ≤ 2^j+2 nmΔ.
We say that a vector b' ∈ℤ^m is relevant for j if b' ≡ b (mod 2^j+1) and ‖ b' ‖_∞≤ 2^j+2 mnΔ.
Clearly, the following holds:
For every j, the number of relevant vectors for j is O(mnΔ)^m.
We will now iteratively compute solutions for all vectors relevant for j+1 from the solutions for all relevant vectors for j with j ≥ 0.
Note that this will immediately establish <ref>.
Let 𝒮_j be a set of optimal solutions to (<ref>) for all
b' ∈ℤ^m that are relevant for j. Then, in time n^m+1·
O(mΔ)^m, we can compute a set 𝒮_j+1 of optimal solutions for all b”∈ℤ^m relevant for j+1.
Let V be the set of all vectors b' ∈ℤ^m with b'
≡ b (mod 2^j+1) and ‖ b' ‖_∞≤ 2^j+2 mnΔ.
We define an edge-weighted directed acyclic graph with vertices {s}∪ V^(0)∪ V^(1)∪…∪ V^(n), where V^(0),V^(1),…,V^(n) are
n+1 copies of V, and s is a distinguished source vertex.
If j ≥ 0, there is an edge from s to every vertex v_b'∈ V^(0) such that
the vector v_b' corresponds to a relevant vector b' ∈ℤ^m for j for which (<ref>) has a solution x_b'. This edge indicates feasibility, and the weight of the edge from s to v_b' is the value c x_b' for the optimal solution x_b'.
In the base case where j < 0, there is exactly one edge of weight 0 to the vertex
in V^(0) that corresponds to the all-zero vector.
For each vertex in V^(i-1) for 0 < i ≤ n, there is an edge to the corresponding vertex in V^(i) with weight zero. Further, if
c_j+1(u_i) ≥ 1, for every vertex corresponding to b' in V^(i-1),
there is an edge to the vertex in V^(i) that corresponds to b' + 2^j A_i
(if it exists). The weight of this edge is 2^j c_i.
Similarly, if c_j+1(u_i) ≥ 2, then for every vertex in V^(i-1)
that corresponds to a relevant b', there is an edge to a vertex in V^(i)
that corresponds to b' + 2^j+1 A_i (if it exists), with a cost of 2^j+1c_i.
Finally, we compute the longest path from s to each vertex in V^(n), and we
store these as the values of the solutions to all the relevant right-hand sides for j+1.
For the running time, observe that by <ref>, we have |V| ≤
O(mnΔ)^m. Hence, the graph has n^m+1· O(mΔ)^m vertices
and edges. Thus, the longest path problem can be solved in time n^m+1· O(mΔ)^m.
For correctness, consider a path in the graph from vertex v_b_1∈
V^(0) to vertex v_b_2∈ V^(n) (corresponding to vectors b_1 and b_2
respectively). The edges of this path define a vector z^(j+1) such that 0 ≤
z^(j+1)≤ c_j+1(u). Moreover, by construction, it holds that 2^j A
z^(j+1) + b_1 = b_2. Finally, the weight of this path corresponds to the value
2^j c z^(j+1).
|
http://arxiv.org/abs/2409.03248v1 | 20240905045158 | A Stochastic Approach to Reconstructing the Speed of Light in Cosmology | [
"Cheng-Yu Zhang",
"Wei Hong",
"Yu-Chen Wang",
"Tong-Jie Zhang"
] | astro-ph.CO | [
"astro-ph.CO"
] |
§ ABSTRACT
The Varying Speed of Light (VSL) model describes how the speed of light in a vacuum changes with cosmological redshift. Despite numerous models, there is little observational evidence for this variation. While the speed of light can be accurately measured by physical means, cosmological methods are rarely used. Previous studies quantified the speed of light at specific redshifts using Gaussian processes and reconstructed the redshift-dependent function c(z). It is crucial to quantify the speed of light across varying redshifts. We use the latest data on angular diameter distances D_A(z) and Hubble parameters H(z) from baryon acoustic oscillation (BAO) and cosmic chronometer measurements in the redshift interval z∈[0.07,1.965]. The speed of light c(z) is determined using Gaussian and deep Gaussian processes to reconstruct H(z), D_A(z), and D^'_A(z). Furthermore, we compare three distinct models, including two well-known VSL models. We obtain the following parameter constraints: (1) for the “c-c" model, c_0=29492.6^+6.2_-5.3 km s^-1; (2) for the “c-cl" model, c_0=29665.5^+11.2_-11.4 km s^-1 and n=0.05535^+0.00008_-0.00007; (3) for the “c-CPL" model, c_0=29555.7^+13.3_-13.2 km s^-1 and n=-0.0607 ± 0.0001. Based on our findings, Barrow's classical VSL model is not a good fit to our data, whereas the widely used Chevallier-Polarski-Linder (CPL) VSL model, under some circumstances, as well as the standard “c is constant" model, can account for our findings.
Cosmology–methods: data analysis
§ INTRODUCTION
The foundation and subsequent development of the current standard cosmological model (SCM) stands as a paramount accomplishment in the field of astronomy throughout the 20th century. Such a universe model might be regarded as the “ground state" within the framework of general relativity. Within this cosmological framework, it is essential to acknowledge that the speed of light is a constant. This is an inevitable consequence due to the fundamental concept of Lorentz invariance in general relativity. Lorentz invariance, in turn, arises from two distinct postulates: the principle of relativity and the principle of constancy of the speed of light. Although the model demonstrates applicability to numerous phenomena inside our universe, there is some aspect that defies explanation <cit.>. The issues pertaining to the horizon and flatness are currently under active discussion. The inflation hypothesis is a widely accepted paradigm that aims to address these issues. Conversely, some theories suggest that the speed of light increases as the universe evolves, leading to the proposal of the varying speed of light (VSL) model as a way to address these challenges. The foundational framework of this approach was initially suggested by <cit.>. Subsequently, the contemporary form of VSL was introduced by <cit.> in 1992. Albrecht, Barrow, and Magueijo have together developed a model that demonstrates a process for transforming the Einstein de Sitter model into a cosmological attractor <cit.>. This model has been established for a certain length of time. In the subsequent study, <cit.> presents a theoretical framework that introduces concepts for covariance and local Lorentz invariance in the context of the varying speed of light. The aforementioned approach possesses the advantage of selectively preserving the elements of conventional definitions that remain unchanged under unit transformations, thereby enabling a valid representation of experimental results. In 2003, <cit.> presented a comprehensive evaluation of their research endeavors pertaining to the plausibility of VSL. The model is coming into prominence, but without enough observational evidence. Any alteration in the speed of light ultimately culminates in a dissonance between two velocities, potentially giving rise to anomalous Cherenkov radiation, a phenomenon meticulously delimited by empirical observations <cit.>.
The vastness of the universe provides a plethora of observational data. Baryonic acoustic oscillations (BAO), in conjunction with additional observational datasets such as Type Ia supernovae (SNe Ia), observational Hubble data (OHD), large-scale structures, the cosmic microwave background, among others, can serve as valuable tools for constraining cosmological parameters. An alternative approach involves the computation of the differential ages of galaxies undergoing passive evolution at various redshifts. This method yields measurements of the Hubble parameter H(z) that are not reliant on any specific model <cit.>. This approach allows for the determination of the change rate Δ z/ Δ t, which can then be used to express the Hubble parameter H(z) as H(z) ≃-1/1+zΔ z/Δ t. The technique commonly referred to as cosmic chronometers (CCs) is typically employed in this context, with the corresponding H(z) data derived from this method being denoted as CC H(z). Several galaxy redshift surveys, including the Sloan Digital Sky Survey (SDSS) <cit.>, the 6dF Galaxy Survey <cit.>, the Baryon Oscillation Sky Survey (BOSS) <cit.> provide the opportunity to measure the angular diameter distance D_A(z), and the Hubble parameter H(z) can be derived from the data of the WiggleZ Dark Energy Survey <cit.>, the third generation Slogan Digital Sky Survey (SDSS-3), strong gravitational lenses <cit.>, gravitational waves <cit.>, galaxy clusters <cit.>, etc., which makes it possible for us to use a larger D_A(z) and H(z) data set to measure the speed of light c(z).
The advancement of machine learning and its widespread application in cosmology have led to the development of various methods aimed at improving the precision of data constraints. The Gaussian Process (GP) is widely recognized as a prominent technique in the field of astronomy. It serves as a non-parametric machine learning model that effectively captures the characteristics of functions within a stochastic statistical process <cit.>. Through the utilization of this method, it becomes possible to effectively accommodate the data set and obtain the projected value at any given point. A method utilizing GP was presented in <cit.> to determine the speed of light at a specific redshift. In this study, the authors <cit.> employ a particular methodology that involves the utilization of two distinct covariance functions in order to obtain the value of c(z) at a specific redshift. Subsequently, in accordance with this viewpoint, the authors proceed to reconstruct the function c(z) inside the redshift interval z∈[0,2]. <cit.> proposes a novel approach that is independent of any specific model to address the issue of degeneracy between cosmic curvature and the speed of light. The aim is to investigate the constancy of the speed of light, denoted as c. In this study, we adopt the approach outlined in the work of <cit.> to reconstruct the function c(z) within the redshift interval z∈[0.07,1.965]. Our objective is to examine the relationship between the redshift z and the corresponding changes in the quantity c. We present a visual representation of this relationship in the form of a figure. It is important to note that our ability to enhance the amount of information utilized in this analysis is limited by the constraints imposed by the selection and combination of observational data. The inaccuracy of predictions beyond the existing observational data stems from the inherent uncertainty associated with unknown future observational outcomes. We utilized a total of 35 data points for the evaluation of H(z) using the CC approach, in addition to the 64 data points for D_A(z) obtained from BAO and other observations <cit.>. The inclusion of these data points significantly enhances the accuracy and reliability of the Gaussian Process. The GP is extensively employed in several domains. The accurate determination of its computation, encompassing hyperparameters, the number of hyperparameters, and the selection of kernels, can significantly influence the reconstruction of cosmological data and the accuracy of our predictions. Hence, it is imperative to engage in a comprehensive discussion of GP <cit.>.
The rest of the paper is organized as follows: In Section <ref>, we provide the theoretical basis for the cosmological measurement of c, along with various models of the VSL and GP. In Section <ref>, we describe how we use the GP to fit the data points. In Section <ref>, we provide the variation tendency of c and compare three models to discuss whether the trend conforms to the VSL model or not. Finally, in Section <ref>, we conclude our work and discuss some possible future work.
§ THEORETICAL BASIS
§.§ The Measurement of c from Angular Diameter Distance
The methodology employed in this paper is predicated on the literature referenced as <cit.>. Our endeavor is to constrain the speed of light by utilizing the latest dataset encompassing the angular diameter distance D_A(z), in conjunction with observational Hubble data H(z). The ensuing section will expound upon the meticulous theoretical underpinnings.
Firstly, in the VSL, the expression for the angular diameter distance can be derived as follows with assuming no spatial curvature and speed of light is no longer constant
D_A(z)=1/(1+z)∫_0^zc(z) d z/H(z).
A clear distinction can be observed between the functions H(z) and D_A(z): the former directly constrains the Hubble parameter, while the latter constrains the integral of the reciprocal of the Hubble parameter. Since H(z) is strictly increasing with redshift, the integral is more sensitive to fluctuations in H(z) in the vicinity of z = 0, and its sensitivity diminishes as z increases. Differentiating Equation (<ref>) with respect to z and solving for the speed of light, we obtain
c(H(z),D_A(z),D_A^'(z);z)=H(z)[(1+z) D_A^'(z)+D_A(z)].
c(H(z),D_A(z),D_A^'(z);z)'s uncertainty can be obtained through the standard error propagation as we assume that the H(z) and D_A(z) datasets are independent of each other, and due to the lack of error in redshift data, the redshift error term is not considered
σ_c(H(z),D_A(z),D_A^'(z);z)^2=[(1+z) D_A^'(z)+D_A(z)]^2σ_H(z)^2
+[H(z)(1+z)]^2σ_D_A^'(z)^2
+H(z)^2σ_D_A(z)^2.
It should be noted that our formulas here are different from similar formulas in <cit.>, and their understanding of error propagation is unusual.
Finally, it is worth noting that D_A(z) has a maximum where D^'_A(z_m) = 0, so we assume that at the maximum point z_m, we can get
c(H(z),D_A(z),D_A^'(z);z_m)=D_A(z_m) H(z_m).
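For later reference, the computation of c(z) and its uncertainty from the reconstructed quantities amounts to a direct evaluation of Equations (<ref>) and (<ref>); a minimal Python sketch (with array inputs in consistent units, function and argument names illustrative) is given below.

import numpy as np

def speed_of_light(z, H, DA, DA_prime, sig_H, sig_DA, sig_DA_prime):
    # c(z) = H(z) [(1+z) D_A'(z) + D_A(z)] and its propagated uncertainty,
    # assuming independent errors on H(z), D_A(z) and D_A'(z)
    c = H * ((1.0 + z) * DA_prime + DA)
    var = (((1.0 + z) * DA_prime + DA) ** 2 * sig_H ** 2
           + (H * (1.0 + z)) ** 2 * sig_DA_prime ** 2
           + H ** 2 * sig_DA ** 2)
    return c, np.sqrt(var)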
According to Equation (<ref>), <cit.> reconstructs H(z) and D_A(z) and finds z_m to obtain c(H(z),D_A(z),D_A^'(z);z_m). From a mathematical and empirical point of view, the maximum point z_m is critical to the fitting of the final curve, as it is more sensitive to the data and contains more cosmological information than other points on the D_A(z) curve <cit.>. Nevertheless, this approach alone only quantifies c at the single redshift z_m. Care must be taken when using z_m to simplify the equation, so as to enable a more precise measurement of c. Note that Equations (<ref>) and (<ref>) also apply to other c(H(z),D_A(z),D_A^'(z);z), so in our research we obtain c(H(z),D_A(z),D_A^'(z);z) at additional redshifts according to Equation (<ref>). We reconstruct H(z), D_A(z), and D^'_A(z), and by using Equation (<ref>), we obtain c(H(z),D_A(z),D_A^'(z);z) with errors in the redshift range [0.07,1.965].
§.§ The Model of VSL
The proposal of the VSL model emerged as an attempt to address the horizon and flatness issues within the field of cosmology. In this section, we provide a concise overview of two VSL models. The first model, referred to as the “c-cl model", is documented in the <cit.>. The second model discussed in this study is derived from the widely recognized Chevallier-Polarski-Linder (CPL) model <cit.>. The CPL model is commonly employed as the benchmark model for dynamical dark energy theories, and hence, it is referred to as the “c-CPL" model in this context.
In the minimally coupled theory, the substitution of the constant c with a field is performed inside the framework of the preferred frame for the c-cl model. Hence, the action remains as <cit.>
S=∫ d^4 x(√(-g)(ψ(R+2 Λ)/16 π G+ℒ_M)+ℒ_ψ)
with ψ(x^μ)=c^4. The dynamical variables consist of the metric tensor g_μν, any matter field variables present in the matter Lagrangian ℒ_M, and the scalar field ψ itself. From this, the Friedmann, the acceleration, and the fluid equation can be expressed as
ȧ^2/a^2=8 π G(t) ρ/3-K c^2(t)/a^2,
ä=-4 π G(t)/3(ρ+3 p/c^2(t)) a,
ρ̇+3 ȧ/a(ρ+p/c^2)=-ρĠ/G+3 K c ċ/4 π G a^2,
with the remaining matter obeys an equation of state of the form
p=(γ-1) ρ c^2(t),
where ρ and p represent the density and pressure of the matter, respectively. The metric curvature parameter is denoted as K, whereas γ is a constant. Consequently, the speed of light, denoted as c, undergoes variations within the local Lorentzian frames that are associated with the cosmological expansion. Additionally, a minimal coupling arises in Einstein's equations due to the omission of surface factors, which can be attributed to a special-relativistic effect.
In order to solve the generalized conservation equation, <cit.> assumes that the rate of variation of c(t) is proportional to the expansion rate of the universe
c(t)=c_0 a(t)^n=c_0(a_0/1+z)^n,
where c_0 and n are constant, a_0=1, and z denotes the redshift. The flatness problem and the horizon problem can be resolved irrespective of the behavior of G(t) when n ⩽1/2(2-3 γ). The Lambda problem can be resolved when n<-3 γ/2 and the rate of variation G(t) is proportional to the expansion rate of the universe, expressed as G(t)=G_0 a^q, where G_0 and q are constants. However, it should be noted that the model has its limitations. If c varies, there may be potential issues with the perturbations to the isotropic expansion of the universe, which manifest as powers of v/c. If no other modifications to physics exist, this phenomenon results in alterations to the fine structure constant and other gauge couplings during the initial stages of the universe. One may need a special tuning of the initial sizes of these terms in the Friedmann equation with respect to the density term in order for their effects to just start to become significant close to the present epoch.
The second comes from the well-known CPL model <cit.>, which was introduced to solve the problem of the evolution of dark energy during the evolution of the VSL model. Based on the CPL model, the fluid equation of dark energy can be expressed as
ρ̇_DE(a)+3/a[1+w_DE(a)] ρ_DE(a)=3 K c(a)ċ(a)/4 π G a^2.
Inspired by the equation of state w(a)=w_0+w_a(1-a), a new hypothesis of variable velocity of light is introduced to solve the generalized conservation equation
c(t)=c_0[1+n(1-a(t))] =c_0[1+n(1-a_0/1+z)],
where c_0 and n are constants.
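The three parameterizations compared in this work can be coded up directly; the following Python functions are a sketch with a_0 = 1, where c_0 and n are the free parameters.

import numpy as np

def c_constant(z, c0):
    return c0 * np.ones_like(np.asarray(z, dtype=float))          # "c-c" model

def c_cl(z, c0, n, a0=1.0):
    return c0 * (a0 / (1.0 + np.asarray(z, dtype=float))) ** n    # "c-cl" model, c = c0 a^n

def c_cpl(z, c0, n, a0=1.0):
    z = np.asarray(z, dtype=float)
    return c0 * (1.0 + n * (1.0 - a0 / (1.0 + z)))                # "c-CPL" model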
§.§ Gaussian Process
The Gaussian Process (GP) is a machine learning technique employed for regression, specifically for estimating the value at a new location based on a given set of prior values. The underlying principle of this approach is based on the assumption that all values are drawn from a joint Gaussian distribution within the context of function space <cit.>. By employing the aforementioned assumption, along with a specification of the anticipated mean and an assumption on the covariance between data points, it becomes possible to derive estimations for a given set of observational data points. More precisely, the Gaussian random variable associated with a reconstructed point z denotes the anticipated value for the GP.
In the scope of our research, it is necessary to undertake the task of reconstructing three functions, namely H(z), D_A(z), and D_A^'(z). Hence, it is advisable to organize the two sets of observational data on redshift into two vectors, denoted as X_1={ z | H(z)} and X_2={ z | D_A(z)}. In order to streamline the writing process, we have merged X_1 and X_2 into a single variable denoted as X_n, ensuring consistency throughout. The reconstructed function and predicted data points are hypothesized to originate from a multivariate Gaussian distribution, characterized by a mean vector denoted as f̅^*_n and a covariance matrix denoted as cov(f^*_n).The value was determined using the methodology described in <cit.>
f^*_n|X_n, y_n, X_n^*∼𝒩(f̅^*_n, cov(f^*_n)),
f̅^*_n =K(X^*_n, X_n)[K(X_n, X_n)+σ_n^2 ℐ]^-1y_n,
cov(f_*)=K(X^*_n, X^*_n)
-K(X^*_n, X_n)[K(X_n, X_n)+σ_n^2 ℐ]^-1 K(X_n, X^*_n),
where X_n^* represents the predicted vector of redshifts, y_n denotes the observational data vector, namely the { H(z)}, and σ_n^2=(σ_n^i)^T ·σ_n^i is the standard error of the observational data, and ℐ is the identity matrix. K(X_n, X_n) represents the covariance of the observational data, K(X^*_n, X^*_n) is the covariance of the new predicted points, and K(X_n, X^*_n) and K(X^*_n, X_n) are the covariances between these groups of points. The computation of these covariance matrices can be performed by utilizing a selected covariance function, denoted as k(·), which is commonly referred to as the kernel function. The kernel function is characterized by the hyperparameters (σ_f^2, l) <cit.>. The length scale l determines the length in the z-direction, which corresponds to a meaningful change of f(z); σ_f determines the typical change of f(z), which can be considered as the amplitude of the function. In order to reconstruct D_A^'(z) from observational data, it is necessary to modify the covariance metrics. The variables under consideration are transformed to represent the covariance between two specific points of the derivative function, as well as the covariance between a point of the observational data and the derivative function
K(X^*_n, X^*_n)=∂^2 k(X^*_n_i, X^*_n_j)/∂d̃X^*_n_i∂ẽX^*_n_j,
K(X_n, X^*_n)=∂ k(X_n_i, X^*_n_j)/∂ẽX^*_n_j,
where X^*_n_i and X^*_n_j are the corresponding redshift vectors, while d̃X^*_n_i and ẽX^*_n_j denote the value of the d-th and e-th dimensions of the redshift vectors, respectively.
It is crucial to consider the influence of hyperparameters on the construction of the covariance matrix. The best values of these hyperparameters need to be determined through training in order to achieve a comprehensive GP. The log marginal likelihood (LML) is a commonly employed technique in cosmological research for the purpose of hyperparameter training. The objective of hyperparameter optimization is to identify the optimal combination of hyperparameters that maximizes the LML. This optimal set of hyperparameters is subsequently employed in the GP to obtain the outcome. The LML can be expressed as
lnℒ= -1/2y_n^⊤[K(X_n, X_n)+σ_n^2 ℐ]^-1y_n
-1/2ln |K(X_n, X_n)+σ_n^2 ℐ|-m/2ln 2 π,
where m=dim( X_n) is the dimension of X_n. It is imperative to acknowledge that alternative approaches can also be employed for acquiring hyperparameters. When the LML reaches its maximum value, the corresponding hyperparameters produce the most probable representation of the function. In practical applications, the majority of GPs are implemented by optimizing the LML function.
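As an illustration of how such a reconstruction can be set up in practice, the following Python sketch uses the scikit-learn implementation of GP regression with an RBF kernel; the kernel choice, the hyperparameter bounds, and the finite-difference estimate of D_A'(z) mentioned in the comment (instead of the derivative-kernel formulas above) are simplifying assumptions of this sketch.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

def gp_reconstruct(z_obs, y_obs, y_err, z_new):
    # reconstruct a function such as H(z) or D_A(z); the observational variances
    # enter through alpha, and the hyperparameters (sigma_f, l) are trained by
    # maximizing the log marginal likelihood inside fit()
    kernel = ConstantKernel(1.0, (1e-3, 1e6)) * RBF(length_scale=1.0,
                                                    length_scale_bounds=(1e-2, 1e2))
    gp = GaussianProcessRegressor(kernel=kernel, alpha=np.asarray(y_err) ** 2,
                                  n_restarts_optimizer=10, normalize_y=True)
    gp.fit(np.asarray(z_obs).reshape(-1, 1), np.asarray(y_obs))
    mean, std = gp.predict(np.asarray(z_new).reshape(-1, 1), return_std=True)
    return mean, std

z_new = np.linspace(0.07, 1.965, 200)
# D_A'(z) can then be approximated, e.g., by np.gradient(mean_DA, z_new),
# as a simple stand-in for the analytic derivative covariances.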
In our study, we employ the approximate Bayesian computation (ABC) rejection method, which has the advantage of not requiring the definition of a likelihood function <cit.>, to select among several commonly used kernel functions: (1) the radial basis function (RBF) kernel, parameterized by a length-scale parameter l>0, which can be either a scalar (representing an isotropic kernel) or a vector with the same number of dimensions; (2) the Matérn kernel, a generalization of the RBF kernel with an extra parameter ν (ν=3/2,5/2,7/2,9/2, which we label as M32, M52, M72, and M92) that controls the smoothness of the resulting function; (3) the rational quadratic (RQ) kernel, also known as the Cauchy kernel (CHY), which can be seen as a scale mixture, namely an infinite sum, of RBF kernels with different characteristic length-scales; and (4) the Exp-Sine-Squared (ESS) kernel, which allows for modeling periodic functions and is parameterized by a length-scale parameter and a periodicity parameter. In ABC rejection, the likelihood is approximated by frequencies that estimate probabilities, which enables the derivation of the posterior distribution. In this study, the model's parameters are repeatedly sampled, with each sample denoted as a particle. Appropriate screening criteria are then established, and the proportion of particles that pass the screening is computed relative to the total number of samples; this gives the frequency and hence the likelihood. To implement the ABC rejection algorithm, the kernel function is treated as a model 𝒯, and the hyperparameters σ_f^2 and l are treated as parameters of 𝒯, as described by <cit.>.
The appropriate selection of a distance function is fundamental in ABC analysis, as the choice can impact the levels of statistical significance observed in comparisons between mock and observational data sets. One often employed distance functions are: (1) The likelihood function (LML). The utilization of this method is common for assessing the influence of hyperparameter values on the model's fit, hence establishing its suitability as one of the distance functions <cit.>. (2) The χ^2 estimation. The approach takes into consideration the objective of minimizing the sum of squared residuals while also accounting for the weighting of the inverse error. Hence, it offers a standard by which the model's quality may be evaluated, with a lower value of χ^2 indicating a stronger alignment between the mock and observational data <cit.>. (3) The Bias estimation. It provides the average of Euclidean distances between the mock and observational data sets and serves as an estimate for the anticipated disparity between the predicted and true values of the model, sometimes referred to as bias. The bias of a model performs the role of an indicator of its goodness of fit to the data, with a lower bias value suggesting a tighter alignment between the mean value of the mock data and the observational data <cit.>. By integrating these three distance functions, we present three distinct approaches for particle filtration. The ABC rejection outcomes derived from these approaches provide a more thorough response to the inquiry regarding the optimal kernel performance in ABC analysis.
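A minimal implementation of the three distances, in the same (illustrative) calling convention as the sketch above, could read as follows; the exact normalizations and the sign convention adopted for the LML-based distance in our computations may differ.

```python
import numpy as np

def d_lml(mock, y, dy, gp):
    # LML-based distance: use the negative log marginal likelihood of the
    # fitted GP, so that smaller values mean a better fit.
    return -gp.log_marginal_likelihood_value_

def d_chi2(mock, y, dy, gp):
    # chi^2 distance: inverse-error-weighted sum of squared residuals
    # between the mock realization and the observational data.
    return np.sum((mock - y) ** 2 / dy ** 2)

def d_bias(mock, y, dy, gp):
    # Bias distance: mean Euclidean distance between mock and data points.
    return np.mean(np.abs(mock - y))
```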
By comparing the likelihoods of two statistical models, we may calculate the Bayes factor, denoted as ℬ_f, which involves the comparison of the likelihoods of two statistical models. This factor quantifies the extent to which we prefer one model over the other based on the ratio of their likelihoods <cit.>. In this study, the Bayes factor is employed to evaluate the degree of reliance between various data sets and the kernel. In contrast to conventional hypothesis testing, which solely permits the acceptance or rejection of a hypothesis, the Bayes factor assesses the strength of evidence in favor of a hypothesis. Therefore, the Bayes factor serves the purpose of not only determining the optimal model among a set of competing kernels but also quantifying the extent to which it outperforms the alternative models. The plausibility of two alternative models, denoted as 𝒯_1 and 𝒯_2, is assessed using the Bayes factor, given observational data y_n. The prior probability for both kernels is computed identically during the calculation of the Bayes factor. The approach solely considers the ratio of the posterior distributions of the two kernels as empirical evidence. And the scale of ℬ_f has a quantitative interpretation based on probability theory <cit.>.
§ DATA ANALYSIS
The data set includes 64 D_A(z) data points and 35 groups of H(z) data obtained from the cosmic chronometer, which are enumerated in Tables <ref> and <ref>, respectively.
We use the https://scikit-learn.org/stable/index.htmlscikit-learn module <cit.> to demonstrate the general GP reconstruction generated using LML training hyperparameters. This package provides a convenient, powerful and extensible implementation of Gaussian Process Regression (GPR) which makes it possible for us to reconstruct the speed of light more accurately as it provide simple and efficient tools for predicted data analysis. The GP method has been discussed and applied in several cosmological papers <cit.>. Figure <ref> shows that different kernel selections result in distinct curves after reconstruction, but it is challenging to infer which performs better from the graphs. In addition, we can also obviously find that different observables have different degrees of agreement with the kernel function. For example, the kernel function CHY seems to agree fairly well with an observable H(z) that shows obvious monotonicity with redshift, but not so well with an observable D_A(z) that shows non-monotonicity with redshift.
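As an illustration of this step, the reconstruction of H(z) with a Matérn ν=3/2 (M32) kernel can be set up as follows; the data file name, the hyperparameter bounds, and the number of optimizer restarts are placeholders rather than the exact settings behind the figures.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel as C

# The 35 cosmic-chronometer points of Table 2 (hypothetical file name).
z, Hz, dHz = np.loadtxt("Hz_cc.txt", unpack=True)

kernel = C(1e4, (1e-2, 1e8)) * Matern(length_scale=2.0,
                                      length_scale_bounds=(1e-2, 1e2), nu=1.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=dHz ** 2,
                              n_restarts_optimizer=50, normalize_y=True)
gp.fit(z[:, None], Hz)                      # hyperparameters trained via the LML

z_grid = np.linspace(0.0, 2.0, 1000)        # reconstruction bins
H_rec, H_std = gp.predict(z_grid[:, None], return_std=True)
```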
As described earlier, in order to quantify the difference between different kernel functions for different data, we use ABC rejection method with a special threshold ϵ to select kernel functions for different observables. Threshold value is very important for ABC rejection method. When the results of a single calculation of the three distances mentioned above are less than the threshold value, we believe that this method will not reject the results of this calculation. So the value of the threshold is definitely not randomly selected. Setting the threshold too high would obscure the differences between specific kernel functions, while setting the threshold too low would result in not only a small number of particles in each kernel function but also particles that are very close to one another as we reduce as much randomness as possible for sampling these kernel functions. To address this issue, we continuously adjust the threshold until we reach the final result. When the posterior distributions of the individual kernels undergo significant changes when the threshold is set to ε, but do not differ significantly when the threshold is greater than ε, we consider ε to be the appropriate threshold. When the previously observed differences are preserved when the threshold is set to a value less than ε, we consider ε to be the correct threshold. It is worth mentioning that there are no circumstances where the differences in the posterior distributions of the kernels change when the threshold is decreased further, as we want to gradually lower the threshold to conserve computational resources for the ABC rejection procedure.
Hereto, we employ three different types of data, apply the ABC rejection method to each type of data, and use three different distance functions in the computations, resulting as presented in Figure <ref>. The posterior distribution for each kernel function in Figure <ref> is derived by averaging 100 posterior probabilities. We observe that for both data sets, across different distance functions, M32 consistently shows the highest probability, while ESS consistently shows the lowest probability. In order to more clearly compare the advantages and disadvantages of the two kernel functions, we further transform the posterior distribution histogram into a Bayes factor ℬ_f between the two kernel functions displayed in the form of heatmap in Figure <ref>. And the darker the color, the larger the Bayes factor. In Figure <ref>(a) and Figure <ref>(b), the three subgraphs in the upper show all of our selected kernel functions, while the three subgraphs in the lower show the Bayes factors between the remaining six kernel functions after removing the very terrible ESS kernel function. This heatmap can be read like this, from the X-axis to the Y-axis. For example, the first row and third column in the concrete result of each graph should be interpreted as the Bayes factor of M32 (X-axis) with respect to RBF (Y-axis). And the scale of ℬ_f has a quantitative interpretation based on probability theory <cit.>, as well as the strength of evidence. We can find that: (1) for H(z) (a) with the LML distance function, M32 is at the “Decisive” level compared with other kernels. (b) With the χ^2 distance function, M32 is at the “Decisive” level compared with other kernels. (c) With the Bias distance function, M32 is at the “Decisive” level compared with other kernels. (2) For D_A(z) (a) with the LML distance function, M32 is at the “Very strong” level compared with RBF and at the “Strong" level compared with other kernels. (b) With the χ^2 distance function, M32 is at the “Strong” level compared with M52 and at the “Very strong" level compared with other kernels. (c) With the Bias distance function, M32 is at the “Very Strong” level compared with RBF and at the “Strong" level compared with other kernels. Therefore, we use M32 to reconstruct our two sets of data.
§ RESULTS AND DISCUSSIONS
We allocate a total of 1000 reconstruction bins within the redshift range [0,2.0]. This choice is made because the full observational atlas provides the most comprehensive and informative dataset; we do not opt for a specific selection or combination of observational data, as this would not increase the amount of information available. Our objective is to obtain the functions D_A(z), H(z), and D^'_A(z) using the M32 kernel function and the LML to train the hyperparameters. To achieve this, we allow the GP to randomly initialize and optimize the hyperparameters 10,000 times, which ensures that the resulting hyperparameter values fall within a reasonable range. Once the reconstructions of H(z), D_A(z), and D^'_A(z) are obtained, we proceed to fit the function c(z) using Equations (<ref>) and (<ref>).
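A schematic of this step is shown below; gp_DA is assumed to have been fitted to the D_A(z) data of Table <ref> in the same way as the H(z) regressor above. For brevity the sketch differentiates the posterior mean numerically, whereas the covariance-based construction of Section <ref> yields D^'_A(z) together with its proper error band.

```python
import numpy as np

z_grid = np.linspace(0.0, 2.0, 1000)                  # the 1000 reconstruction bins
DA_rec, DA_std = gp_DA.predict(z_grid[:, None], return_std=True)

# Simple numerical proxy for D_A'(z): central differences of the posterior mean.
DAp_rec = np.gradient(DA_rec, z_grid)
```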
The reconstructed results of H(z), D_A(z), D^'_A(z), and c(z), together with their corresponding 1σ errors, are shown in Figure <ref>. A peculiar fluctuation is seen in the vicinity of z ∼ 1.5, which cannot be accounted for by any theoretical VSL model. We therefore hypothesize that this anomaly is due to the absence of angular diameter distance D_A(z) data within the redshift range 1.52 to 2.33: the reconstruction in this interval shows a clear downward trend. As is evident from Equation (<ref>), this happens once D_A(z) passes its maximum value D_A(z_m) and turns downward, so that the derivative D^'_A(z) becomes negative; our calculation of the speed of light then shows a discernible decrease of its value at high redshift. However, it is worth noting that our technique has the distinct benefit of not introducing additional cosmological models or external information into the final reconstruction, so the result reflects the data themselves as faithfully as possible. In addition, owing to the limited amount of data available at high redshift, the derivative obtained in the reconstruction is quite small there, which produces the apparent decrease of the reconstructed speed of light and further motivates the release of BAO and OHD data at high redshift. Therefore, this possibility should not be dismissed out of hand and deserves further consideration.
Then, we compare two VSL models with the universal "c is constant" model. For our analysis, we consider the following scenarios: the speed of light c is constant (the "c-c" model); c = c_0a^n (the "c-cl" model) with n=-0.5; c = c_0[1+n(1-a)] (the "c-CPL" model) with n = 0.5; and c = c_0[1+n(1-a)] (the "c-CPL" model) with n = -0.5. For the "c-cl" model, <cit.> has given an upper bound on n, namely -0.5; for the "c-CPL" model, we simply assume two possible values of n. Moreover, to compare the fit of the four models, we provide the relative errors σ/c_model <cit.> over the redshift range z∈[0.07,1.965] and the probability density function (PDF) of the relative errors in Figure <ref>, where c_model are the theoretical values of the models, (2.9979 ± 0.19) × 10^5 km / s.
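For reference, the three forms of c(z) can be coded as follows, with the standard normalization a = 1/(1+z); the prefactor is the local value of the speed of light quoted above.

```python
import numpy as np

C0 = 2.9979e5   # km/s, local value of the speed of light

def c_constant(z, c0=C0):
    """The 'c-c' model: a constant speed of light."""
    return np.full_like(np.asarray(z, dtype=float), c0)

def c_cl(z, n, c0=C0):
    """The 'c-cl' (classical VSL) model: c = c0 * a^n with a = 1/(1+z)."""
    a = 1.0 / (1.0 + np.asarray(z, dtype=float))
    return c0 * a ** n

def c_cpl(z, n, c0=C0):
    """The 'c-CPL' model: c = c0 * (1 + n * (1 - a)) with a = 1/(1+z)."""
    a = 1.0 / (1.0 + np.asarray(z, dtype=float))
    return c0 * (1.0 + n * (1.0 - a))
```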
The upper panels of Figure <ref> compare Barrow's traditional VSL model with the universal constant-speed-of-light model. It is easy to conclude that the "c-c" model fits our results much better, since its relative errors are centered on smaller values; the classical VSL model, on the other hand, does not fit our results well. Furthermore, it is noteworthy that n = -0.5 is an upper limit for n if this model is to explain the flatness problem, as discussed in <cit.>; assuming a smaller value of n would make the fit even worse. The lower panels of Figure <ref> compare the well-known CPL form with the "c-c" model. If we assume n = 0.5 in the "c-CPL" model, the fit appears even better than that of the "c-c" model under this criterion, since its relative errors are centered closer to small values; but if we assume n = -0.5 in the "c-CPL" model, the result is no longer credible. Consequently, we cannot robustly exclude the CPL model with strong confidence.
In order to provide more evidence supporting the consistency of our findings with the “c-c" model, we provide Figure <ref>. The calculation involves determining the quotient of the difference between the reconstructed speed of light, denoted as μ, and the theoretical model's speed of light, denoted as c_model, by the standard deviation σ. This is expressed as (μ-c_model)/σ. If, at a certain redshift, the measured value of μ significantly deviates from the theoretical value but the Gaussian process at that redshift yields a bigger error, it does not imply that the theoretical model significantly differs from the observed result. In this analysis, we will compute the disparity in relative error. In the "c-c" model, the proportions of frequencies for which the standardized residuals (μ-c_model)/σ lie within the intervals [-0.75, -0.3], [-0.95, -0.1], and [-1.25, 0.25] are around 68%, 95%, and 99%, respectively. These proportions closely align with the predicted values of a Gaussian distribution. This observation suggests that the findings are broadly consistent with the “c-c" model.
To check the consistency of our results with the models, we further calculate the reduced chi-square
χ_N=1/N∑_i=1^N (c_model , i-μ_i)^2/σ_i^2,
by which the degree of proximity between the obtained results and the theoretical models is assessed: the smaller the value, the closer the observed outcome is to the theoretical model. We sample N=1000 values of z uniformly in order to evaluate the statistic χ_N, which primarily serves as a tool for comparison. The results indicate that the "c-c" model is associated with a reduced chi-square of 0.17. In contrast, the other three models, namely the "c-cl" model with n = -0.5, the "c-CPL" model with n = 0.5, and the "c-CPL" model with n = -0.5, are associated with reduced chi-square values of 2.50, 3.74, and 1.92, respectively. Based on this numerical comparison, it may be inferred that the "c-c" model exhibits the best consistency with our findings.
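A direct implementation of χ_N reads as follows, where c_rec and c_std denote the reconstructed c(z) and its 1σ error on the redshift grid (the names are ours).

```python
import numpy as np

def reduced_chi2(c_model, mu, sigma):
    """chi_N of the equation above: error-weighted mean squared deviation
    over the N uniformly sampled redshift bins."""
    c_model, mu, sigma = map(np.asarray, (c_model, mu, sigma))
    return np.mean((c_model - mu) ** 2 / sigma ** 2)

# e.g. reduced_chi2(c_constant(z_grid), c_rec, c_std) for the "c-c" model
```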
In addition, we can also use the reconstructed speed of light c(z) to constrain the parameters in the three models, so as to apply the scope of the model. However, it should be noted that our speed of light c(z) is dependent on our method and data, and other methods and data may give different results for the applicable range of the model. We assume the priors c_0∈[0,5× 10^5] and n∈[-5,5] to constrain parameters in Markov chain Monte Carlo (MCMC) respectively. Here, we use the Python implementation of the affine-invariant ensemble sampler for Markov chain Monte Carlo (emcee) to obtain the estimated posterior <cit.>. The posteriors of mock data and reconstructed data are shown in Fig. <ref>. We find that (1) for the “c-c" model, c_0=29492.6 ±^6.2_5.3 km s^-1. (2) For the “c-cl" model, c_0=29665.5 ±^11.2_11.4 km s^-1 and n=0.05535 ±^0.00008_0.00007. (3) For the “c-CPL" model, c_0=29555.7 ±^13.3_13.2 km s^-1 and n=-0.0607 ± 0.0001. It is worth noting that, unlike GPs, the parameter constraints obtained through likelihood functions and least squares methods describe the overall information of the data and are influenced by the global data. GPs, on the other hand, focus on reflecting the local relationships between data points. This characteristic of GPs can be observed from Equation <ref>, highlighting how data varies with respect to a particular variable, such as the speed of light varying with redshift in this study. In order to compare the significance of the three models, we utilize two selection model criteria: Akaike Information Criterion (AIC) <cit.> and Bayesian Information Criterion (BIC) <cit.>. Both the AIC and the BIC estimate the quality of a model using a given dataset. AIC and BIC provide a measure of the relative quality between two models, estimate the missing information of a given model and consider both the goodness of fit and the simplicity of the model. A model with smaller values of AIC and BIC indicates less information loss and higher model quality. Both AIC and BIC suffer from overfitting, which they solve by adding a penalty term to the model. The difference is that the penalty term in BIC is larger than in AIC. The definitions of AIC and BIC are AIC=2 k-2 ln (L̂), BIC=k ln (n)-2 ln (L̂). Where L̂ is the maximum value of the likelihood function of the model, k is the number of the estimated parameters of the model, n is the sample size. Combined with the reduced chi-square given in Equation <ref>, we show the reduced chi-square, AIC and BIC of the three models with the change of parameter n in Figure <ref>. Since the parameter n is not included in the “c-c" model, its reduced chi-square does not change with n. It can be seen from Figure <ref>(a) that the parameter n of the “c-CPL" model is lower than the reduced chi-square of the “c-c" model in only a small range, which is more consistent with the data. The “c-cl" model is slightly less consistent with the data than the other two models in its parameter range. The heatmaps in Figure <ref>(b) and (c) can be read like this, from the Y-axis to the X-axis. For example, the first row and second column in the concrete result of each graph should be interpreted as the AIC or BIC of c-c (Y-axis) with respect to c-cl (X-axis) AIC/BIC=Y-X. It can be concluded from both Figure <ref>(b) and (c) that: In the three models, “c-c" model is most consistent with the data, “c-CPL" model is slightly less consistent with the data, and `c-cl" model is least consistent with the data.
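A minimal emcee sketch of this parameter estimation is given below for the "c-CPL" model, followed by the AIC/BIC evaluation; the Gaussian likelihood built from the reconstructed c(z) and its 1σ band, the walker initialization, and the chain length are illustrative assumptions rather than the exact settings behind Figure <ref>.

```python
import numpy as np
import emcee

def log_prior(theta):
    c0, n = theta
    if 0.0 < c0 < 5.0e5 and -5.0 < n < 5.0:      # the priors adopted above
        return 0.0
    return -np.inf

def log_prob(theta, z, c_rec, c_err, model):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    resid = c_rec - model(z, theta[1], c0=theta[0])
    return lp - 0.5 * np.sum(resid ** 2 / c_err ** 2)

ndim, nwalkers = 2, 32
p0 = np.column_stack([np.random.uniform(1e5, 4e5, nwalkers),
                      np.random.uniform(-1.0, 1.0, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(z_grid, c_rec, c_std, c_cpl))
sampler.run_mcmc(p0, 5000)

# Model selection from the best-fit likelihood L_hat:
lnL_hat = np.max(sampler.get_log_prob())
k, n_data = ndim, len(z_grid)
AIC = 2 * k - 2 * lnL_hat
BIC = k * np.log(n_data) - 2 * lnL_hat
```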
It is interesting to study the applicability of the model by discussing cutting out data with high redshifts. We pointed out earlier that the reconstructed results fluctuate downward around redshift 1.5, and one possible explanation is the lack of high redshift data. Therefore, we intercept the data with high redshift, so that the redshift range of the H(z) data becomes [0.07,1.53], and the redshift range of the D_A(z) data becomes [0.009783,1.52]. Repeating the MCMC, reduced chi-square, and AIC/BIC calculations above, we can obtain the parameter constraint results and model selection results for the three models. We find that from Figure <ref>(a), (b), and(c): (1) for the “c-c" model, c_0=29424.1 ±^6.14_6.13 km s^-1. (2) For the “c-cl" model, c_0=29720.9 ±^12.6_12.0 km s^-1 and n=0.37113 ±^0.00009_0.00009. (3) For the “c-CPL" model, c_0=2996954.0 ±^13.5_13.4 km s^-1 and n=-0.4768 ± 0.0001. Since the parameter n is not included in the “c-c" model, its reduced chi-square does not change with n. It can be seen from Figure <ref>(d) that the parameter n of the “c-CPL" model is lower than the reduced chi-square of the “c-c" model in only a small range, which is more consistent with the data. The “c-cl" model is slightly less consistent with the data than the other two models in its parameter range. The heatmaps in Figure <ref>(e) and (f) can be read like Figure <ref>. It can be concluded from both Figure <ref>(e) and (f) that: In the three models, “c-c" model is most consistent with the data, “c-CPL" model is slightly less consistent with the data, and `c-cl" model is least consistent with the data. Compared with the results without high redshift data interception: (1) the speed of light constraint results are reduced. (2) The reduced chi-square of “c-c" model increased. The reduced chi-square of “c-cl" and “c-CPL" models decreased when the parameter n was negative, and the gap between the two narrowed. (3) The AIC/BIC gap between the three models was increased.
The constancy of fundamental physical constants is not always guaranteed, either in the terms of spatial or temporal variations. Despite the apparent simplicity of the aforementioned proposition, it bears profound implications for numerous physical phenomena and interactions, subject to scrutiny through diverse observational methodologies. The rules governing natural phenomena are contingent upon certain fundamental constants, which include but are not limited to Newton's constant, denoted as G, the speed of light, denoted as c, and the elementary charge of an electron, denoted as e. The values of these constants have been obtained by empirical experimentation, but ideally they should be derived directly from the fundamental theory. Therefore, it is unwarranted to make the assumption that the locally established values of the fundamental constants may be directly applied to other regions of the universe or to other time periods in cosmic history <cit.>. The exploration of fundamental constants and their potential spatiotemporal fluctuations holds profound significance within the discipline. Such studies provide valuable insights into physics beyond the standard model, perhaps revealing the existence of supplementary scalar fields and their interactions with the standard sector. The conceptualization not only aids in elucidating the speed of light but also facilitates the determination of several other fundamental physical constants. Leveraging Gaussian processes, alongside artificial neural networks, not only enables the reconstruction of observables but also promises a gradual refinement in the precision of constraints as observational datasets accumulate.
§ CONCLUSION
In this paper, we employ GPR to reconstruct the functions H(z), D_A(z), and D^'_A(z). By doing so, we are able to obtain the values of c(z) at different redshifts. We then proceed to compare these results with several theoretical models and derive the constraints on the model parameters. We find that (1) for the “c-c" model, c_0=29492.6 ±^6.2_5.3 km s^-1. (2) For the “c-cl" model, c_0=29665.5 ±^11.2_11.4 km s^-1 and n=0.05535 ±^0.00008_0.00007. (3) For the “c-CPL" model, c_0=29555.7 ±^13.3_13.2 km s^-1 and n=-0.0607 ± 0.0001. To acquire the outcomes of the speed of light measurement, the approximate Bayesian computation rejection technique is employed. This method facilitates the selection of the Gaussian kernel function suitable for two distinct observables, namely H(z) and D_A(z). Additionally, the likelihood function method is utilized to train the hyperparameters of the GP. After ensuring that each kernel function carries equally sampling weight when considering three different distance functions, we conclude that M32 is the most appropriate kernel function for the two observables. This determination is based on the approximate Bayesian computation rejection posterior distribution and the Bayes factor ℬ_f. Based on the assumption of a constant speed of light c, it may be inferred that the fitted outcome exhibits superior performance compared to the traditional VSL model, as provided by <cit.>. Nevertheless, it is important to consider the theoretical limitations on the parameter n inside the c(z) function of the CPL model before completely dismissing its relevance.
Currently, it can be inferred that it is possible for us to constrain the speed of light roughly based on OHD data and D_A(z) data, and the results are basically consistent with the speed of light being constant and also with some other VSL models (which cannot be ruled out), but it has been possible to rule out some of the VSL model parameters. It is evident that the reconstruction of c(z) does not exhibit the anticipated constant, which may be due to the scarcity of data points and the fact that we do not introduce additional cosmological models and cosmological information in the reconstruction from the beginning of the data to the result. Moreover, the curve of c(z) shows an aberrant decline. This is an unexpected result that is in disagreement with the constancy of c and even runs counter to most of the famous VSL models that are being investigated. This phenomenon arises because of the observed sensitivity of various kernel functions to the reconstruction outcomes of D^'_A(z) during reconstruction. When the redshift z evolves to the redshift z_m's neighborhood of the maximum of D^'_A(z_m), large derivative estimates will cause an obvious shake in the measurement value of the speed of light c(z). It is hypothesized that enhancing the reconstruction of variable c may be achieved by the acquisition of additional data points within the interval z∈[1.52,2.33]. This indicates that our observations of OHD and BAO data at high redshifts are still inadequate. Therefore, we should still enhance the scale and precision of our galaxy surveys to obtain richer and more accurate D_A(z) and OHD observations. In addition to the traditional D_A(z) and OHD data obtained from galactic observations, gravitational waves and fast radio bursts can also provide D_A(z) and OHD from the standard siren and the dispersion measure of the intergalactic medium. These data can be used as a source of new cosmological observations, providing a wider choice of constraints for the speed of light and other cosmological parameters.
In forthcoming research, our intention is to employ artificial neural networks for the purpose of reconstructing the desired observables' functions. Additionally, we aim to investigate the measurement outcomes of various physical constants under a reconstruction hypothesis that deviates from the Gauss process. This endeavor is undertaken with the objective of minimizing the occurrence of peculiar phenomena resulting from the reconstruction methodology. Furthermore, our research aims to develop a versatile observation design capable of accommodating multiple observations at a consistent redshift. This approach will effectively mitigate the intricate systematic errors that arise when comparing datasets from different observations. Additionally, this design will simplify the calculation of covariance and eliminate the need for reconstructing the function to obtain the final result.
§ ACKNOWLEDGEMENTS
We sincerely appreciate Kang Jiao, Jing Niu, and Hao Zhang for their kind help. This work was supported by the National SKA Program of China (2022SKA0110202),China Manned Space Program through its Space Application System and National Science Foundation of China (Grants No. 11929301).
§ DATA AVAILABILITY
The data underlying this article are available in the article from Table 1 and 2.
Monotonicity Formulas for Capillary Surfaces

Guofang Wang, Mathematisches Institut, Universität Freiburg, Ernst-Zermelo-Str. 1, 79104 Freiburg, Germany. Email: [email protected]

Chao Xia, School of Mathematical Sciences, Xiamen University, 361005, Xiamen, P.R. China. Email: [email protected]

Xuwen Zhang, Mathematisches Institut, Universität Freiburg, Ernst-Zermelo-Str. 1, 79104 Freiburg, Germany. Email: [email protected]
§ ABSTRACT
In this paper, we establish monotonicity formulas for capillary surfaces in the half-space ℝ^3_+ and in the unit ball 𝔹^3 and extend the result of Volkmann <cit.> for surfaces with free boundary.
As applications, we obtain Li-Yau-type inequalities for the Willmore energy of capillary surfaces, and extend Fraser-Schoen's optimal area estimate for minimal free boundary surfaces in 𝔹^3 <cit.>
to the capillary setting,
which is different to another optimal area estimate proved by Brendle in <cit.>.
Keywords: monotonicity formula, Li-Yau inequality, Willmore energy, optimal area estimate, minimal surface, capillary surface
MSC 2020: 53C42, 53A10, 49Q15
Received 16 July 2024; accepted 04 September 2024
§ INTRODUCTION
For a 2-dimensional immersed, open surface Σ⊂ℝ^{n+1}, it is proved by Simon <cit.> that for 0<σ<ρ<∞, a∈ℝ^{n+1},
g_a(ρ)-g_a(σ)
=1/π∫_{Σ∩B_ρ(a)∖B_σ(a)} |1/4 H⃗+(x-a)^⊥/|x-a|^2|^2 dℋ^2,
where
g_a(r)
≔ ℋ^2(Σ∩B_r(a))/πr^2+1/16π∫_{Σ∩B_r(a)}|H⃗|^2 dℋ^2
+1/2πr^2∫_{Σ∩B_r(a)}H⃗·(x-a) dℋ^2.
This is known as Simon's monotonicity identity and is later generalized to hold for integral 2-varifolds by Kuwert-Schätzle <cit.>.
As an interesting application, the monotonicity formula yields an alternative proof (by taking ρ→∞ and σ→0^+) of the Li-Yau inequality (<cit.>):
πΘ_max
≤ 1/16∫_{ℝ^{n+1}}|H⃗|^2 dμ,
where Θ_max denotes the maximal density of an integral 2-varifold μ whose generalized mean curvature in ℝ^{n+1} is square integrable.
Note that for an immersion F:Σ→ℝ^{n+1} of a 2-dimensional compact orientable closed smooth surface Σ,
let μ_g be the induced area measure of Σ with respect to the pull-back metric g=F^∗ g_euc; then its image as an integral 2-varifold is given by
μ ≔ F(μ_g)
=(x↦ℋ^0(F^{-1}(x))) ℋ^2⌞ F(Σ).
Hence
from the Li-Yau inequality, we easily see that the Willmore energy
W(μ)
≔ 1/4∫_{ℝ^{n+1}}|H⃗|^2 dμ ≥ 4π,
and F:Σ→ℝ^{n+1} is an embedding
if
W(μ)<8π.
We refer the interested readers to the monographs KS12,MN14 for an overview of the study of Willmore energy.
Recently, Volkmann <cit.> generalized this theory to free boundary surfaces in the unit ball 𝔹^{n+1}={x∈ℝ^{n+1}:|x|<1} (indeed, he proved it in the context of integral free boundary 2-varifolds).
He obtained a monotonicity identity similar to (<ref>) (see Section <ref>) and consequently a Li-Yau-type inequality.
As an application, he considered the Willmore energy of integral free boundary 2-varifolds in 𝔹^{n+1} and proved that its lower bound is given exactly by 2π.
§.§ Main Result
In this paper, we deal with the capillary counterpart of the above results.
We first prove a Simon-type monotonicity formula for capillary hypersurfaces in the half-space
ℝ^3_+:={x∈ℝ^3:x_3>0}
that are formulated in the weak sense, see Section <ref> for the precise definition.
Given θ∈(0,π), let V be a
rectifiable 2-varifold supported on the closure of ℝ^3_+ with its weight measure denoted by μ, and let W be a rectifiable 2-varifold supported on ∂ℝ^3_+ with its weight measure denoted by η, satisfying a contact angle condition as in Definition <ref> (we adopt the notations of Section <ref> below).
For any a∈ℝ^3, consider the functions g_a(r), ĝ_a(r) defined by
g_a(r) ≔ μ(B_r(a))/πr^2 - cosθ η(B_r(a))/πr^2 + 1/16π∫_{B_r(a)}|H⃗|^2 dμ + 1/2πr^2∫_{B_r(a)}H⃗·(x-a) dμ,
ĝ_a(r) ≔ μ(B̂_r(a))/πr^2 - cosθ η(B̂_r(a))/πr^2 + 1/16π∫_{B̂_r(a)}|H⃗|^2 dμ + 1/2πr^2∫_{B̂_r(a)}H⃗·(x-ã) dμ.
Then for any 0<σ<ρ<∞, we have
(g_a(ρ)+ĝ_a(ρ))-(g_a(σ)+ĝ_a(σ))
=1/π∫_{B_ρ(a)∖B_σ(a)}|1/4 H⃗+(x-a)^⊥/|x-a|^2|^2 dμ + 1/π∫_{B̂_ρ(a)∖B̂_σ(a)}|1/4 H⃗+(x-ã)^⊥/|x-ã|^2|^2 dμ
-2cosθ∫_{B_ρ(a)∖B_σ(a)} a_3^2/|x-a|^4 dη - 2cosθ∫_{B̂_ρ(a)∖B̂_σ(a)} ã_3^2/|x-ã|^4 dη,
where (x-a)^⊥ ≔ (ν·(x-a))ν denotes the part of x-a orthogonal to the approximate tangent plane of μ at x, ν being a unit normal of that plane.
As an application, a Li-Yau-type inequality together with the characterization of the equality case is obtained for capillary immersions in the half-space.
For the precise definition of capillary immersions, see Section <ref>. We will adopt the notations that are used in Section <ref>.
The Willmore energy of capillary immersion F is defined in the usual sense, that is,
Given θ∈(0,π) and a θ-capillary immersion F:Σ→ℝ^3_+.
The Willmore energy of the capillary immersion F is defined as
W(F)
≔ 1/4∫|H⃗|^2 dμ.
W(F) is a conformal invariant with respect to conformal diffeomorphisms. Detailed discussions regarding the definition of the Willmore functional for capillary surfaces are provided in Section <ref>.
Given θ∈(0,π), let F:Σ→ℝ^3_+ be a compact θ-capillary immersion, satisfying (<ref>).
Then we have
* the Li-Yau-type inequality
W(F)
≥ 4(1-cosθ) πΘ^2(μ,x_0)
holds for every x_0∈ F(∂Σ), where Θ^2(μ,x_0) denotes the density of μ at x_0.
*
the sharp estimate on the Willmore energy:
W(F) ≥ 2(1-cosθ)π,
and if
W(F) < 4(1-cosθ)π,
then F:∂Σ→∂ℝ^3_+ must be an embedding.
* For θ∈[π/2,π), if
W(F)<4π,
then F:Σ→ℝ^3_+ is an embedding.
Moreover,
equality in (<ref>) holds if and only if F(Σ) is a θ-spherical cap.
Inequality (<ref>) was proved using a flow method by the first and the second author in <cit.>*Corollary 1.3 for convex capillary surfaces in ^3_+, while the contact angle θ is restricted within the range (0,π/2].
Therefore Theorem
<ref> complements the full range of contact angle and moreover removes the assumption of the convexity.
Due to the conformal invariance of the Willmore functional, Theorem <ref>
implies the same inequality as (<ref>) for capillary surfaces in the unit ball, which interestingly implies a new optimal area estimate for minimal capillary surfaces in ^3.
Given θ∈(0,π), let F:Σ→𝔹^3 be a θ-capillary minimal immersion, satisfying (<ref>).
Then there holds
2|Σ| - cosθ|T|
≥ 2(1-cosθ)π.
Equality holds if and only if F(Σ) is a totally geodesic disk in 𝔹^3. Here |Σ| denotes the area of Σ and |T| is the area of the “wetting” part defined in Section <ref>.
As a special case of the isoperimetric inequality for minimal submanifolds proved by Brendle <cit.>, any capillary minimal surface Σ in the unit ball satisfies the following optimal estimate
|Σ| ≥ sin^2θ π,
with equality holding if and only if Σ is a flat disk.
In fact, the higher dimensional counterparts also hold, see <cit.>*Theorem 5.5.
Restricting θ=π/2, i.e., in the free boundary case, this area lower bound was obtained in <cit.> and <cit.>.
(<ref>) now provides a new optimal area estimate for minimal capillary surfaces in the unit ball. It seems that both area estimates do not imply each other.
In view of <cit.>, it would be interesting to seek for a higher dimensional generalization of (<ref>).
In the last part of the paper, Section <ref>, we establish a monotonicity formula (<ref>) for capillary
surfaces in ^3.
This monotonicity formula is a generalization of Volkmann's theorem for surfaces with free boundary in the unit ball <cit.>*Theorem 3.1.
Due to the extra 2-varifold W supported on the unit sphere (as in Theorem <ref>), our monotonicity formula (<ref>) becomes more involved than his.
We refer the readers to Theorem <ref> below.
Theorem <ref> can also be proved directly by
this monotonicity formula.
We end the introduction with a short summary of the fruitful results of the study of
capillary surfaces. The existence, regularity, and geometric properties of capillary surfaces have attracted more and more attention from
differential geometers and geometric analysts, see the nice book of Finn <cit.> for a through introduction.
Here we mention some recent progress on this topic.
In ^3, optimal boundary regularity for solutions to the capillarity problems (volume-constrained local minimizers of the free energy functional) was obtained by Taylor <cit.>, while the partial regularity was obtained recently by De Philippis-Maggi DePM15,DePM17 in ^n (n≥4), even in the anisotropic setting.
Quite recently, using the Min-Max method, the existence of capillary minimal or CMC hypersurfaces in compact 3-manifolds with boundary has been shown independently by De Masi-De Philippis <cit.> and Li-Zhou-Zhu <cit.>.
In terms of the classical differential geometry, Hong-Saturnino <cit.> carried out curvature and index estimates for compact and non-compact capillary surfaces.
In <cit.>, we proved a Heintze-Karcher-type inequality and give the characterization of smooth CMC capillary hypersurfaces in the half-space or in a wedge, namely, the Alexandrov-type theorem.
The anisotropic counterpart was tackled in JWXZ23,JWXZ23b as well.
See also <cit.> for a non-smooth generalization.
§.§ Organization of the Paper
In Section <ref>
we first introduce the capillary surface in the half-space in the setting of varifolds and then
provide the proof of Theorem <ref>,
In Section <ref> we use the monotonicity formula to estimate the Willmore energy for capillary surfaces in the half-space and prove
the Li-Yau inequality (<ref>) and Theorem <ref>. The Willmore energy for capillary surfaces in the unit ball is discussed in Section <ref>, together with one proof of Theorem <ref>.
Finally, in Section <ref>, we prove the Simon-type monotonicity formulas in the unit ball, and then provide an alternative proof of Theorem <ref>.
§ MONOTONICITY FORMULA IN THE HALF-SPACE
§.§ Set-ups
We will be working in the Euclidean space ℝ^3, with the Euclidean metric denoted by g_euc and the corresponding Levi-Civita connection denoted by ∇. ℝ^3_+={x:x_3>0} is the open upper half-space and E_3 ≔ (0,0,1).
Let ξ:ℝ^3→∂ℝ^3_+ denote the unique point projection onto the hyperplane ∂ℝ^3_+.
It is easy to see that for any x∈ℝ^3, ξ(x)=x-x_3E_3, and hence ξ is a smooth map.
Define
x̃ ≔ 2ξ(x)-x
=x-2x_3E_3
to be the reflection of x across ∂ℝ^3_+.
Given a point a∈ℝ^3, define
r=|x-a|, r̃=|x̃-a|;
it is clear that r̃=|x̃-a|
=|x-ã|.
We refer to <cit.> for the background materials in Geometric Measure Theory and consider the following weak formulation of capillary surfaces.
Given θ∈(0,π), let V be a rectifiable 2-varifold supported on the closure of ℝ^3_+ with its weight measure denoted by μ, and let W be a rectifiable 2-varifold supported on ∂ℝ^3_+ with its weight measure denoted by η.
(V,W) is said to satisfy a contact angle condition θ if there exists a μ-measurable vector field H⃗∈L^1(ℝ^3,μ) with H⃗(x)∈ T_x∂ℝ^3_+ for μ-a.e. x∈∂ℝ^3_+,
such that for every X∈ C_c^1(ℝ^3;ℝ^3) with X
tangent to ∂ℝ^3_+,
∫_{ℝ^3} div_Σ X dμ - cosθ∫_{∂ℝ^3_+} div_{∂ℝ^3_+} X dη = -∫_{ℝ^3} H⃗·X dμ,
where Σ ≔ spt μ is countably 2-rectifiable; for simplicity we also set T ≔ spt η.
In particular, if X vanishes along ∂ℝ^3_+, there holds that
∫_{ℝ^3} div_Σ X dμ = -∫_{ℝ^3} H⃗·X dμ.
This definition was introduced in <cit.>, which follows from the one initiated in <cit.>,
with the boundary part weakened to be just a 2-varifold.
Notice that the first variation formula (<ref>) is valid if X is merely a Lipschitz vector field.
In this section, we consider those V, W that are integral 2-varifolds with μ(∂ℝ^3_+)=0
and H⃗∈L^2(ℝ^3,μ).
It follows that H⃗(x)⊥T_xΣ for μ-a.e. x∈Σ thanks to the well known Brakke's perpendicularity theorem (<cit.>*Sect. 5.8), so that for any vector v∈ℝ^3, one has
2|1/4 H⃗+v^⊥|^2
=1/8|H⃗|^2+2|v^⊥|^2+H⃗·v,
where v^⊥ denotes the normal part of v with respect to the approximate tangent space T_xΣ.
§.§ A Monotonicity Formula
A general monotonicity formula for the pairs of varifolds satisfying a contact condition has been established in <cit.> (see also <cit.>) with none sharp constants, which generalizes a previous result in <cit.> for rectifiable free boundary varifolds.
We are interested in the Simon-type monotonicity formula (<ref>) for the capillary case with optimal constants.
We follow closely the reflection idea in <cit.>. See also <cit.>.
Define l(s)((1/s_σ)^2-1/ρ^2)_+,
where s_σmax{s,σ}.
l(s) is then a Lipschitz cut-off function and it is easy to see that
l(s)
=
1/σ^2-1/ρ^2, 0<s≤σ,
1/s^2-1/ρ^2, σ<s≤ρ,
0, ρ<s.
We wish to find a suitable vector field to test (<ref>). The construction is inspired by <cit.> and is nowadays quiet standard.
Precisely,
we define
X_1(x)=((1/x-a_σ)^2-1/ρ^2)_+(x-a),
X_2(x)=((1/x-ã_σ)^2-1/ρ^2)_+(x-ã),
X(x)X_1(x)+X_2(x).
For x∈^3_+, since x=x̃ we simply have
x-a_σ=x̃-a_σ=x-ã_σ,
and hence
X(x)=((1/x-a_σ)^2-1/ρ^2)_+(2x-(a+ã))∈^3_+.
Let B̂_r(a)={x:x-ã<r}
and consider the partitions of ^3:
_1{B_σ(a),B_ρ(a)∖B_σ(a),^3∖B_ρ(a)},
_2{B̂_σ(a),B̂_ρ(a)∖B̂_σ(a),^3∖B̂_ρ(a)}.
We wish to test (<ref>) by X(x). To this end, we compute
∫_Adiv_X_iμ-cosθ∫_Adiv_^3_+X_iη, and ∫_AH⃗·X_iμ
for all sets A∈_i, i=1,2, separately.
For X_2: on 0≤x-ã≤σ, direct computation shows that
X_2(x)=(1/σ^2-1/ρ^2)Id,
and hence
∫_B̂_σ(a)div_X_2μ-cosθ∫_B̂_σ(a)div_^3_+X_2η =(2/σ^2-2/ρ^2)(μ(B̂_σ(a))-cosθη(B̂_σ(a))),
∫_B̂_σ(a)H⃗·X_2μ =(1/σ^2-1/ρ^2)∫_B̂_σ(a)H⃗·(x-ã)μ.
On σ<x-ã≤ρ, we have X_2(x)=(1/x-ã^2-1/ρ^2)(x-ã). By noticing that
(1/x-ã^2)
=-21/x-ã^4(x-ã),
we have
X_2(x)
=(1/x-ã^2-1/ρ^2)Id-21/x-ã^4(x-ã)⊗(x-ã),
divX_2(x)
=3(1/x-ã^2-1/ρ^2)-2/x-ã^2
=1/x-ã^2-3/ρ^2,
div_X_2(x)
=divX_2(x)-X_2[ν_]·ν_=-2/ρ^2+2(x-ã)^⊥/x-ã^2^2,
div_^3_+X_2(x) =divX_2(x)-X_2[E_3]·E_3
=-2/ρ^2+2ã_3^2/x-ã^4.
It follows that
∫_B̂_ρ(a)∖B̂_σ(a)div_X_2μ-cosθ∫_B̂_ρ(a)∖B̂_σ(a)div_^3_+X_2η
= -2/ρ^2(μ(B̂_ρ(a)∖B̂_σ(a))-cosθη(B̂_ρ(a)∖B̂_σ(a)))
+2∫_B̂_ρ(a)∖B̂_σ(a)(x-ã)^⊥/x-ã^2^2μ
-2cosθ∫_B̂_ρ(a)∖B̂_σ(a)ã_3^2/x-ã^4η,
∫_B̂_ρ(a)∖B̂_σ(a)H⃗·X_2μ=-1/ρ^2∫_B̂_ρ(a)∖B̂_σ(a)H⃗·(x-ã)μ+∫_B̂_ρ(a)∖B̂_σ(a)H⃗·x-ã/x-ã^2μ.
Combining these computations, we obtain
∫_^3div_X_2μ-cosθ∫_^3div_^3_+X_2η
= 2/σ^2(μ(B̂_σ(a))-cosθη(B̂_σ(a)))-2/ρ^2(μ(B̂_ρ(a))-cosθη(B̂_ρ(a)))
+2∫_B̂_ρ(a)∖B̂_σ(a)(x-ã)^⊥/x-ã^2^2μ-2cosθ∫_B̂_ρ(a)∖B̂_σ(a)a_3^2/x-ã^4η,
and
∫_^3H⃗·X_2μ= 1/σ^2∫_B̂_σ(a)H⃗·(x-ã)μ-1/ρ^2∫_B̂_ρ(a)H⃗·(x-ã)μ
+∫_B̂_ρ(a)∖B̂_σ(a)H⃗·x-ã/x-ã^2μ.
Similar computations hold for X_1 and we obtain
∫_^3div_X_1μ-cosθ∫_^3div_^3_+X_1η
= 2/σ^2(μ(B_σ(a))-cosθη( B_σ(a)))-2/ρ^2(μ( B_ρ(a))-cosθη( B_ρ(a)))
+2∫_ B_ρ(a)∖B_σ(a)(x- a)^⊥/x- a^2^2μ-2cosθ∫_ B_ρ(a)∖B_σ(a)a_3^2/x- a^4η,
and
∫_^3H⃗·X_1μ= 1/σ^2∫_B_σ(a)H⃗·(x- a)μ-1/ρ^2∫_B_ρ(a)H⃗·(x-a)μ
+∫_ B_ρ(a)∖B_σ(a)H⃗·x-a/x-a^2μ.
On the other hand, thanks to (<ref>), we have
∫_ B_ρ(a)∖B_σ(a)H⃗·x-a/x-a^2μ=2∫_ B_ρ(a)∖B_σ(a)1/4H⃗+(x-a)^⊥/x-a^2^2μ
-1/8∫_ B_ρ(a)∖B_σ(a)H⃗^2μ-2∫_ B_ρ(a)∖B_σ(a)(x-a)^⊥/x-a^2^2μ,
∫_ B̂_ρ(a)∖B̂_σ(a)H⃗·x-ã/x-ã^2μ=2∫_ B̂_ρ(a)∖B̂_σ(a)1/4H⃗+(x-ã)^⊥/x-ã^2^2μ
-1/8∫_ B̂_ρ(a)∖B̂_σ(a)H⃗^2μ-2∫_ B̂_ρ(a)∖B̂_σ(a)(x-ã)^⊥/x-ã^2^2μ.
Putting these facts into the first variation formula (<ref>) and recalling the definition of g_a(r),ĝ_a(r), we deduce (<ref>).
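As a sanity check of the pointwise identities for div X_2 and div_{∂ℝ^3_+}X_2 on the annulus σ<|x-ã|≤ρ used above, the following sympy computation (with our own choice of symbols, writing ã=(a_1,a_2,-a_3)) verifies both expressions symbolically.

```python
import sympy as sp

x1, x2, x3, a1, a2, a3, rho = sp.symbols('x1 x2 x3 a1 a2 a3 rho', real=True)
x = sp.Matrix([x1, x2, x3])
a_tilde = sp.Matrix([a1, a2, -a3])          # reflection of a = (a1, a2, a3)
r2 = (x - a_tilde).dot(x - a_tilde)         # |x - a~|^2

X2 = (1 / r2 - 1 / rho ** 2) * (x - a_tilde)    # the vector field on sigma < |x - a~| <= rho

div_X2 = sum(sp.diff(X2[i], x[i]) for i in range(3))
print(sp.simplify(div_X2 - (1 / r2 - 3 / rho ** 2)))                 # expect 0

# tangential divergence along the boundary plane {x3 = 0}
div_bdry = sp.simplify((div_X2 - sp.diff(X2[2], x3)).subs(x3, 0))
r2_bdry = r2.subs(x3, 0)
print(sp.simplify(div_bdry - (-2 / rho ** 2 + 2 * a3 ** 2 / r2_bdry ** 2)))   # expect 0
```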
We remark that in Theorem <ref>:
* B_r (a)∩^3_+=B̂_r(a)∩^3_+, and hence η (B_r(a))=η(B̂_r(a)).
* a_3^2=ã_3^2 and hence the corresponding terms involving them are the same.
Under the assumptions of Theorem <ref>, the following statements hold:
* Given θ∈[π/2,π). For every a∈^3, the tilde-density
Θ̃^2(μ-cosθη,a)
lim_r↘0((μ-cosθη)(B_r(a))/πr^2+(μ-cosθη)(B̂_r(a))/πr^2)
exists.
Moreover, the function x↦Θ̃(μ-cosθη,x) is upper semi-continuous in ^3;
* Given θ∈(0,π).
For every a∈^3, the tilde-density Θ̃^2(μ-cosθη,a) exists, and the function x↦Θ̃(μ-cosθη,x) is upper semi-continuous in ^3_+.
Define
R(r)
1/2πr^2∫_ B_r(a)H⃗·(x- a)μ+1/2πr^2∫_B̂_r(a)H⃗·(x-ã)μ,
and
G_θ(r)
(μ-cosθη)(B_r(a))/πr^2+(μ-cosθη)(B̂_r(a))/πr^2
+1/16π∫_B_r(a)H⃗^2μ+1/16π∫_B̂_r(a)H⃗^2μ+R(r),
then from (<ref>) and thanks to θ∈[π/2,π), we know that G_θ(r) is monotonically nondecreasing, so that
lim_r0^+G_θ(r)
exists.
For R(r), we estimate with Hölder inequality:
R(r)
≤ (μ(B_r(a))/πr^2)^1/2(1/4π∫_B_r(a)H⃗^2μ)^1/2
+(μ(B̂_r(a))/πr^2)^1/2(1/4π∫_B̂_r(a)H⃗^2μ)^1/2.
Moreover, for 1/4<<1/2, Young's inequality gives
R(r)
≤ μ(B_r(a))/πr^2+1/16π∫_B_r(a)H⃗^2μ
+μ(B̂_r(a))/πr^2+1/16π∫_B̂_r(a)H⃗^2μ.
By virtue of the monotonicity of G_θ(r), we obtain
(μ-cosθη)(B_σ(a))/πσ^2+(μ-cosθη)(B̂_σ(a))/πσ^2
≤ (μ-cosθη)(B_ρ(a))/πρ^2+(μ-cosθη)(B̂_ρ(a))/πρ^2
+1/16π∫_B_ρ(a)H⃗^2μ+1/16π∫_B̂_ρ(a)H⃗^2μ+R(ρ)+R(σ)
≤ (1+)((μ-cosθη)(B_ρ(a))/πρ^2+(μ-cosθη)(B̂_ρ(a))/πρ^2)
+1+2^-1/16π(∫_B_ρ(a)H⃗^2μ+∫_B̂_ρ(a)H⃗^2μ)
+((μ-cosθη)(B_σ(a))/πσ^2+(μ-cosθη)(B̂_σ(a))/πσ^2).
In particular, since 1/4<<1/2, this yields for 0<r<R<∞
(μ-cosθη)(B_r(a))/πr^2+(μ-cosθη)(B̂_r(a))/πr^2
≤ 3((μ-cosθη)(B_R(a))/πR^2+(μ-cosθη)(B̂_R(a))/πR^2)+9/8π∫_^3H⃗^2μ<∞,
which, together with (<ref>), gives
lim_r0^+R(r)=0,
implying that the tilde-density Θ̃(μ-cosθη,a) exists, and also
Θ̃(μ-cosθη,a)
≤((μ-cosθη)(B_ρ(a))/πρ^2+(μ-cosθη)(B̂_ρ(a))/πρ^2)
+1/16π(∫_B_ρ(a)H⃗^2μ+∫_B̂_ρ(a)H⃗^2μ)+R(ρ).
Thus for a fixed R>0 and a sequence of points in ^3 such that x_j a, we have: for 0<ρ<R, it holds that
(μ-cosθη)(B_ρ(a))/πρ^2+(μ-cosθη)(B̂_ρ(a))/πρ^2
≥lim sup_j∞((μ-cosθη)(B_ρ(x_j))/πρ^2+(μ-cosθη)(B̂_ρ(x_j))/πρ^2)
≥lim sup_j∞(Θ̃(μ-cosθη,x_j)-1/16π(∫_B_ρ(x_j)H⃗^2μ+∫_B̂_ρ(x_j)H⃗^2μ)-R(ρ))
(<ref>)≥lim sup_j∞Θ̃(μ-cosθη,x_j)
-C((μ-cosθη)(B_R(a))+(μ-cosθη)(B̂_R(a))/πR^2+(μ))^1/2H⃗_L^2(B_2ρ(a)).
Letting ρ0^+, this gives
Θ̃^2(μ-cosθη,a)
≥lim sup_j∞Θ̃(μ-cosθη,x_j),
which completes the proof of (1).
To prove (2), notice that for any a∈^3_+, a_3=ã_3=0, and hence as θ∈(0,π), the function G_θ(r) is also monotonically nondecreasing for any such a.
Following the proof of (1), we conclude (2).
Suppose that μ,η are compactly supported, we may thus use (<ref>) to find that: for θ∈[π/2,π), a∈^3,
lim_r∞R(r)=0,
and obtain
lim_r∞(g_a(r)+ĝ_a(r))
=1/8π∫_^3H⃗^2μ.
Thus by letting σ0^+ and ρ∞ in (<ref>), we obtain
1/π∫_^31/4H⃗+(x-a)^⊥/x-a^2^2μ+1/π∫_^31/4H⃗+(x-ã)^⊥/x-ã^2^2μ
-2cosθ∫_^3a_3^2/x-a^4η-2cosθ∫_^3ã_3^2/x-ã^4η
= 1/8π∫_^3H⃗^2μ-Θ̃^2(μ-cosθη,a).
This is also true for any a∈^3 and any θ∈(0,π/2), provided that Θ̃^2(μ-cosθη,a) exists.
Similarly, for θ∈(0,π) and a∈^3_+, we have
2/π∫_^31/4H⃗+(x-a)^⊥/x-a^2^2μ=1/8π∫_^3H⃗^2μ-Θ̃^2(μ-cosθη,a).
§ WILLMORE ENERGY AND LI-YAU-TYPE INEQUALITY FOR CAPILLARY IMMERSIONS
§.§ Capillary Immersions
Given a compact orientable smooth surface , with non-empty boundary .
Let F:^3_+ be an orientation preserving proper immersion, that is, F is smooth on the interior of and C^2 up to the boundary, such that F( int)⊂^3_+ and F()⊂^3_+.
In this way, F induces an immersion of into ^3_+.
Fix a global unit normal field ν to along F, which determines an orientation on and an induced orientation on given by a tangential vector field τ along .
Denote by μ the unit conormal to in so that {τ,ν, μ} is compatible with {E_1, E_2, E_3} of ^3. Let ν̅ be the unit normal to in ^3_+ so that {ν̅, -E_3} compatible with {ν,μ}.
Given θ∈(0,π), we say that the proper immersion F:^3_+ is a capillary immersion (or θ-capillary immersion) with contact angle θ if the angle determined by μ and ν̅ is constant and equals to θ along , that is, for any x∈,
μ(x)=sinθ(-E_3)+cosθν̅(x).
Abuse of terminology, for the immersion F: →^3_+, we use T to denote the “enclosed domain" by F(), which means the topological boundary T is given by F() and it has induced orientation by τ. We use T to denote the oriented area of T, which is defined to be
T=∑_i sgn(T_i)|T_i|,
where each T_i is a bounded domain such that T_i is a simply closed curve and sgn(T_i)=+1 if ν̅ points outward T_i, while
sgn(T_i)=-1 if ν̅ points inward T_i. |T| is the signed area of the so-called “wetting part”.
Now given a θ-capillary immersion F:^3_+ for θ∈(0,π)∖{π/2}.
Let g=F^∗ g_ euc be the pull-back metric and μ_g be the induced area measure on .
The induced varifold of is given by V_g=μ_g⊗_T_p, where T_p is the tangent space and
its image as varifold is given by V=F_♯(V_g), which is an integral 2-varifold in ^3 with weight measure μ(x↦^0(F^-1(x)))^2⌞ F(), see <cit.>*Section 15.
Define the following measure:
η wind(x)^2,
where wind(x) is the winding number of F() about x∈^3.
Throughout the paper, in terms of capillary immersion, we always assume that
wind(x)≥0 for any x∈^3_+.
With this assumption, η is a positive Radon measure,
and we may relate to it naturally the 2-varifold:
W=η⊗_T_x^3_+.
Let T= sptη⊂^3_+.
By definition of winding number, the mass (W) of W is exactly the oriented area of T (In fact, sgn(T_i) will be always +1 thanks to (<ref>)).
The pair (V,W) above satisfies the contact angle condition θ with square integrable generalized mean curvature as in Definition <ref>.
In the case that is an open surface and F is a properly immersion, the fact that V has square integrable generalized mean curvature in ^3 has already been observed in <cit.>*Section 2.1.
Here we prove the capillary version.
Let us consider any X∈ C_c^1(^3;^3) which is tangent to ^3_+, and let {f_t:^3^3}_t< be the induced variation of X, that is,
f_0=id, f_t(^3_+)⊂^3_+,
and /t|_t=0f_t=X,
which induces a family of immersions: {ψ_t}_t<, where ψ_t=f_t∘ F:^3_+.
Ψ:(-,)×^3_+, given by Ψ(t,p)=ψ_t(p), is called an admissible variation.
Let ξ(p)=Ψ/ t(0,p) for p∈, then ξ(p)=X(F(p)).
For this family of immersions, let μ_g_t be the induced area measure of with respect to the pull-back metric g_t=ψ^*_tg_ euc, the area functional is then given by
:(-,), (t)=∫_μ_g_t.
The wetted area functional (t):(-,) is defined by
(t)
=∫_[0,t]×Ψ^∗,
where is the volume form of ^3_+.
We define the free energy functional by
:(-,), (t)=(t)-cosθ(t),
from the well-known first variation formula (see e.g., <cit.>*(2.1)) and the capillary condition, we find (recall that ξ(p)=X(F(p)))
'(0)
= -∫_𝐇_(p)<ξ(p),ν(p)>μ_g
= -∫_F()∑_p∈F^-1(x)𝐇_(p)<ξ(p),ν(p)>^2(x)
-∫<X,𝐇⃗>μ.
On the other hand, it is not difficult to see that
A(t)
=((f_t)_♯V),
and because (W) is the oriented area,
'(0)
=/t|_t=0((f_t)_♯W).
By virtue of the well-known first variation formula for varifolds <cit.>, we get
(V-cosθW)[X]
='(0)
=-∫<X,𝐇⃗>μ,
as desired.
We point out that condition (<ref>) is irredundant, because without this constraint, one could simply construct an example as Fig. <ref>, so that we could find some point x∈ F() at which we no longer expect the upper-semi continuity of the tilde density (Proposition <ref>(1)) to be valid.
Precisely, given θ∈(π/2,π), let F:^3_+ be a θ-capillary immersion with F() as in Fig. <ref>.
At the point x, because of the orientation of F(), we have
lim_r0^+η(B_r(x))/πr^2
=-1/2,
and hence
Θ̃^2(μ-cosθη,x)
=1+cosθ<1,
implying that the function ·↦Θ̃(μ-cosθη,·) is not upper semi-continuous at x.
§.§ Willmore Energy in the Half-Space
The Willmore energy (F) of a smooth immersed compact connected orientable surface F:^3 with boundary , which may have several connected components, is usually proposed by
_0(F)
1/4∫_ H_^2μ_g+∫_κ_g,
where κ_g denotes the geodesic curvature of as a submanifold of .
The reason for proposing (<ref>) is to keep the peculiarity of the Willmore functional—the invariance under the conformal transformation of the ambient space.
Thanks to the well-known Gauß-Bonnet theorem, the Willmore energy may be rewritten as
_0(F)=1/4 ∫_ΣA
^2 +2πχ(),
where A is the traceless second fundamental form and χ() is the Euler characteristic,
it is clear that the first term in (<ref>) is conformal invariant, while the second term is topological invariant. See for example <cit.>.
The point in the above argument is that, in order to keep the conformal invariance, one may indeed add or subtract a topological quantity to the Willmore energy functional.
This insight motivates us to drop the term ∫_κ_g when considering the Willmore energy of the surface that is capillary immersed into ^3_+, since this term is just a multiple of some topological quantity.
Indeed,
for any θ-capillary immersion F:^3_+, it holds locally
κ_g
=_τμ_·τ=_τ(cosθν̅-sinθE_3)·τ=cosθκ̃_g,
where κ̃_g is the geodesic curvature of F() as an immersion in ^3_+,
using the Gauß-Bonnet theorem, one sees
∫_κ_g
=
cosθ∫_κ̃_g
=2πcosθ ind(F()),
where ind(F()) is rotation index of immersed plane curve F(), which is again a topological invariant.
Therefore, in this paper, we regard the Willmore functional for capillary hypersurfaces in ^n+1_+ as
(F) = _0(F)-2πcosθ ind(F())
=1/4∫_ H_^2μ_g.
W(F) is again a conformal invariant with respect to conformal diffeomorphisms of ^3.
It matches the Willmore functional for capillary hypersurfaces in the unit ball as well, which will be discussed in the next subsection.
We may rewrite the Willmore energy for the θ-capillary immersion as:
(F)
=1/4∫_H_^2μ_g
=1/4∫H⃗^2μ.
The first part of Theorem <ref> amounts to be a direct application of the monotonicity formula we obtained above.
Let us consider the point x_0∈⊂^3_+.
In this case, x̃_0=x_0 and hence B_r(x_0)=B̂_r(x_0). It follows that
lim_r↘0^+η(B_r(x_0))+ η( B̂_r(x_0))/πr^2=N(x_0),
for some positive integer N(x_0) =^0(F^-1(x_0)),
since F:^3_+ is an immersion.
Thus we find
Θ̃^2(μ-cosθη,x_0)
=2Θ^2(μ,x_0)-cosθ·N(x_0).
Moreover, since μ is supported on the upper half-space, Θ^2(μ,x_0)=N(x_0)/2.
Hence
Θ̃^2(μ-cosθη,x_0)
=(1-cosθ) N(x_0).
By virtue of the above observation, we infer easily from the monotonicity identity (<ref>) that
(F)
=4∫_^31/4H⃗+(x-x_0)^⊥/x-x_0^2^2μ+4πΘ^2(μ,x_0)-2cosθπN(x_0).
Thus, we obtain the Li-Yau-type inequality:
(F)
≥4(1-cosθ)πΘ^2(μ,x_0)=
2(1-cosθ)πN(x_0),
which implies easily that
(F)
≥2(1-cosθ) π,
the sharp estimate on the lower bound of Willmore energy.
If W(F)<4(1-cosθ)π, for any x_0∈, the Li-Yau-type inequality (<ref>) implies that N(x_0)=1, and hence F:^3_+ must be an embedding.
Let us now consider the point y_0∈∖.
Observe that
Θ̃^2(μ-cosθη,y_0)
=Θ^2(μ,y_0).
Letting a=y_0 in (<ref>) and recalling the definition of (F), since θ∈[π/2,π), we obtain readily
2πΘ^2(μ,y_0)
≤(F),
and hence Θ^2(μ,y_0)=1 if (F)<4π, in which case F:^3_+ must be an embedding.
This completes the proof.
When F:Σ^3_+ is an embedding, one can infer easily from (<ref>) that
Θ̃^2(μ-cosθη,x_0) =1-cosθ,
for x_0∈.
Therefore, it may be convenient to define
Θ̃^2(μ-cosθη,x_0)/1-cosθ
as the capillary density of at a boundary point x_0∈.
A further investigation gives the following characterization of the situation when (F) attains its minima, which also reveals the sharpness of (<ref>).
Given θ∈(0,π), let F:^3_+ be a capillary immersion.
If (F)=2π(1-cosθ),
then F() must be a θ-cap in ^3_+, i.e., a part of the sphere interesting
^n+1_+ at the angle θ.
We begin by noticing that F:^3_+ is an embedding thanks to Theorem <ref>.
In particular we learn from (<ref>)
that
1/4H⃗(x)+(x-x_0)^⊥/x-x_0^2=0
for μ-a.e. x∈ F() and any x_0∈ F().
It is not difficult to observe that there must be some y∈ F(∖) such that H⃗(y)≠0, otherwise for μ-a.e. x∈ F(), one has (up to a translation, we assume that x_0=0)
x·ν(x)=0,
implying that the enclosing region of F() with ^3_+ is a cone (see e.g. <cit.>*Proposition 28.8), which is not possible.
Consequently, by letting x=y in (<ref>), we obtain for any x_0∈ F() that
x_0-(y-2ν(y)/H⃗(y))^2
=4/H⃗(y)^2,
which shows that F() is indeed a 1-dimensional sphere on ^3_+, say 𝔰.
To proceed, we consider the unique spherical cap in ^3_-, say C_π-θ, which intersects ^3_+ along 𝔰 with the constant angle (π-θ).
Let V_C_π-θ denote the naturally induced varifold of C_π-θ, with weight measure denoted by μ_C_π-θ, which is exactly given by μ_C_π-θ=^2⌞ C_π-θ.
Write μ̃=μ+μ_C_π-θ, = spt(μ̃)=F()∪ C_π-θ.
Since F is a θ-capillary immersion with F:^3_+ an embedding, and because of the construction of C_π-θ, we see that the integral varifold μ̃ has the following variational structure:
for any X∈ C_c^1(^3;^3),
∫div_ Xμ̃=-∫_X·H⃗μ̃,
where H⃗ is the generalized mean curvature vector of , which is square integrable on .
Moreover, a direct computation shows that for the spherical cap C_π-θ,
(C_π-θ)
=1/4∫_C_π-θ H⃗^2^2
=2(1-cos(π-θ)
)π=2(1+cosθ)π,
which, together
with (F)=2(1-cosθ)π, yields
(μ̃)
=1/4∫H⃗^2μ̃=4π.
Here (μ̃) is the Willmore energy of μ̃ in ^3, see <cit.>*Appendix A.
It is worth noting that a simple modification of <cit.>*Proposition 4.3 in conjunction with <cit.>*Proposition 2.1.1 shows that for any compactly supported integral 2-varifold with square integrable generalized mean curvature, if its Willmore energy is 4π, then its support is exactly a closed round 2-sphere.
Therefore we conclude that must be a closed round 2-sphere, and hence F() is a θ-capillary spherical cap as desired.
The last part of the assertion in Theorem <ref>
is simply given by Theorem <ref>.
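The value 2(1-cosθ)π attained by a θ-spherical cap can also be checked numerically. In the sketch below the cap is parametrized as the portion of the unit sphere centered at (0,0,-cosθ) lying above {x_3=0}; this parametrization (and hence the identification of the contact angle) is our assumption, and since H≡2 on a unit sphere the Willmore energy reduces to the cap area.

```python
import numpy as np
from scipy.integrate import quad

def willmore_theta_cap(theta):
    """(1/4) * integral of H^2 over the cap of the unit sphere centered at
    (0, 0, -cos(theta)) lying above {x3 = 0}; equals the cap area since H = 2."""
    # polar angle phi measured from the top of the sphere; the cap above the
    # plane corresponds to 0 <= phi <= theta
    area, _ = quad(lambda phi: 2.0 * np.pi * np.sin(phi), 0.0, theta)
    return area

for theta in (np.pi / 3, np.pi / 2, 2 * np.pi / 3):
    print(theta, willmore_theta_cap(theta), 2.0 * np.pi * (1.0 - np.cos(theta)))
```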
§.§ Willmore Energy in the Unit Ball
Due to the conformal invariance of the Willmore functional, the results above can be transferred directly to hold for capillary surfaces in the unit ball.
It is well-known that _0, and hence defined in (<ref>), are invariant under conformal transformations.
Therefore the Willmore functional for capillary surfaces in the unit ball is the same as ,
i.e.,
(F)
1/4∫_ H⃗^2μ_g+∫_ κ_g-2πcosθ ind(F()) .
Here ind(F()) is the rotation index of the immersed plane curve ϕ∘ F() where ϕ: ^3∖{p}→^3 is the stereographic projection for some p∉.
Now in this case, the geodesic curvature
is given by
κ_g
=_τμ_·τ=_τ(cosθν̅+sinθN̅(x))·τ=cosθκ̃_g+sinθ,
where N̅(x)=x is the outer unit normal of ^3 and κ̃_g is the geodesic curvature of F:→^2. By the Gauß-Bonnet theorem, we have
2π ind(F())
=∫_κ̃_g +T,
where T is the oriented area of T.
Altogether implying that
(F)
=1/4∫_ H⃗^2μ_g+
sinθΣ -cosθT.
Given θ∈(0,π), let F:^3 be a θ-capillary immersion.
Then, there holds
(F)
≥2(1-cosθ)π.
Equality holds if and only if F() is a spherical cap or a totally geodesic disk in ^3.
As explained above, the inequality holds thanks to Theorem <ref> and it suffices to consider the characterization of equality case.
Observe that in the half-space case, equality is achieved if and only if the capillary surfaces are spherical.
Since umbilicity preserves under conformal transformations, we readily see that in the half-ball case, F() should be either a spherical cap or a totally geodesic disk.
If θ=π/2, Corollary <ref> was proved in <cit.>.
Now we show how it simply implies Theorem <ref>.
From Corollary <ref>, we find
sinθ|∂Σ| - cosθ|T| ≥ 2(1-cosθ)π.
From div_Σ x = 2 in Σ we have
2|Σ| = ∫_{∂Σ}⟨x, μ_Σ⟩
= ∫_{∂Σ}⟨cosθ ν̅+sinθ N̅(x), x⟩
= sinθ|∂Σ|.
(<ref>) follows clearly.
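As a consistency check of the identity 2|Σ|=sinθ|∂Σ| and of the equality case in Theorem <ref>, the two relations can be verified numerically for a flat disk at height cosθ in the unit ball; the choice of this disk, and of the spherical cap above it as the wetting part T, are our assumptions about the equality configuration.

```python
import numpy as np

def check_flat_disk(theta):
    """Flat disk {x3 = cos(theta)} in the unit ball, with the spherical cap
    {x in S^2 : x3 > cos(theta)} taken as the wetting part T (our assumption)."""
    area_sigma = np.pi * np.sin(theta) ** 2         # |Sigma|
    len_bdry = 2.0 * np.pi * np.sin(theta)          # length of the boundary circle
    area_T = 2.0 * np.pi * (1.0 - np.cos(theta))    # |T|
    div_identity = 2.0 * area_sigma - np.sin(theta) * len_bdry
    equality_case = (2.0 * area_sigma - np.cos(theta) * area_T
                     - 2.0 * (1.0 - np.cos(theta)) * np.pi)
    return div_identity, equality_case              # both vanish up to rounding

print(check_flat_disk(np.pi / 3))
```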
§ MONOTONICITY IDENTITIES IN THE UNIT BALL
§.§ Set-ups
Let ^3⊂^3 be the Euclidean unit ball centered at the origin, ^2^3 denotes the corresponding unit sphere.
In this section we adopt the following notations:
let ξ:^3∖{0}^3 denote the spherical inversion with respect to ^2.
Fix x_0∈^3 and r>0, B_r(x_0) denotes the open ball of radius r centered at x_0, and
B̂_r(x_0)
=B_r/x_0(ξ(x_0))
denotes the ball of radius r/x_0 centered at ξ(x_0).
A direct computation then shows that: for x∈^2,
x_0x-ξ(x_0)
=x-x_0,
in other words, B_r(x_0)∩^2=B̂_r(x_0)∩^2.
Similar with Definition <ref>, we consider the following weak formulation of capillary surfaces in the unit ball.
Given θ∈(0,π), let V be a rectifiable 2-varifold supported on ^3 with its weight measure denoted by μ, let W be a rectifiable 2-varifold supported on ^2 with its weight measure denoted by η.
(V,W) is said to satisfy the contact angle condition θ if there exists a μ-measurable vector field H⃗∈^1(^3,μ) with H⃗(x)∈ T_x^2 for μ-a.e. x∈^2, such that for every X∈ C_c^1(^3;^3) with X tangent to ^2, it holds that
∫_^3div_Xμ-cosθ∫_^2div_^2Xη=-∫_^3H⃗·Xμ,
where spt(μ) is countably 2-rectifiable.
An important proposition for the pairs of varifolds satisfying contact angle condition is that they have
bounded first variation and satisfy the following first variation formula (see <cit.>*Proposition 3.1).
Given θ∈[π/2,π), let (V,W) be as in Definition <ref>.
Then V-cosθ W has bounded first variation.
More precisely, there exists a positive Radon measure σ_V on ^2 such that
∫_^3div_Xμ-cosθ∫_^2div_^2Xη=-∫_^3H⃗·Xμ
+2∫_^2 X(x)·x(μ-cosθη)+∫_^2X(x)·xσ_V
for every X∈ C_c^1(^3;^3).
If is in fact a smooth capillary surface embedded in ^3, then we may use a classical computation to see that the above formula is true with σ_V=sinθ^2⌞.
This motivates us to define γ1/sinθσ_V as the generalized boundary measure of V and sptγ as the generalized boundary of V (which are different from the ones obtained from Lebesgue's decomposition theorem).
In all follows, we consider only those V that are integral 2-varifolds with μ(^2)=0 and H⃗∈^2(^3,μ),
it follows that (<ref>) holds.
Moreover, we may rewrite (<ref>) as
∫_^3div_Xμ-cosθ∫_^2div_^2Xη=-∫_^3H⃗·Xμ
-2cosθ∫_^2X(x)·xη+sinθ∫_^2X(x)·xγ
for every X∈ C_c^1(^3;^3).
An immediate consequence is that we may use a compactly supported vector field which coincides with the position vector field in a neighborhood of ^3 to test (<ref>) and obtain
2μ(^3)
=-∫_^3H⃗·xμ+sinθγ(^2).
§.§ Simon-Type Monotonicity Formulas
Let us first recall that in <cit.>, a Simon-type monotonicity formula is proved for integral 2-varifold μ with μ(^2)=0, that has free boundary in ^3.
Precisely, it is proved that for x_0≠0,
1/π∫_B_ρ(x_0)∖B_σ(x_0)1/4H⃗+(x-x_0)^⊥/x-x_0^2^2μ+1/π∫_B̂_ρ(x_0)∖B̂_σ(x_0)1/4H⃗+(x-ξ(x_0))^⊥/x-ξ(x_0)^2^2μ
= (g_x_0(ρ)+ĝ_x_0(ρ))-(g_x_0(σ)+ĝ_x_0(σ)),
where
g_x_0(r)
μ(B_r(x_0))/πr^2+1/16π∫_B_r(x_0)H⃗^2μ+1/2πr^2∫_B_r(x_0)H⃗·(x-x_0)μ,
ĝ_x_0(r)
g_ξ(x_0)(r/x_0)-x_0^2/πr^2∫_B̂_r(x_0)(x-ξ(x_0)^2+(x-ξ(x_0))^T·x)μ
-x_0^2/2πr^2∫_B̂_r(x_0)H⃗·(x-ξ(x_0)^2x)μ+1/2π∫_B̂_r(x_0)H⃗·xμ+μ(B̂_r(x_0))/π,
and (·)^T denotes the orthogonal projection of a vector onto the approximate tangent space T_x.
For x_0=0, he obtained
1/π∫_B_ρ(0)∖B_σ(0)1/4H⃗+1/πx^⊥/x^2^2μ=(g_0(ρ)+ĝ_0(ρ))-(g_0(σ)+ĝ_0(σ)),
where
g_0(r)
μ(B_r(0))/πr^2+1/16π∫_B_r(0)H⃗^2μ+1/2πr^2∫_B_r(0)H⃗·xμ,
ĝ_0(r) -min(r^-2,1)/2π(2μ(^3)+∫_^3H⃗·xμ).
Volkmann's proof is based on using a properly chosen vector field to test the first variation of a free boundary varifold. This particular choice can be dated back to <cit.>.
We follow to use this vector field to test (<ref>) and obtain the following result.
Given θ∈[π/2,π), let (V,W) be as in Definition <ref>.
Suppose in addition that V is an integral 2-varifold with μ(^2)=0, then it holds that: for x_0≠0, and for every 0<σ<ρ<∞,
1/π∫_B_ρ(x_0)∖B_σ(x_0)1/4H⃗+(x-x_0)^⊥/x-x_0^2^2μ+1/π∫_B̂_ρ(x_0)∖B̂_σ(x_0)1/4H⃗+(x-ξ(x_0))^⊥/x-ξ(x_0)^2^2μ
-cosθ/π∫_B_ρ(x_0)∖B_σ(x_0)(x-x_0/x-x_0^2·x)^2η-cosθ/π∫_B̂_ρ(x_0)∖B̂_σ(x_0)(x-ξ(x_0)/x-ξ(x_0)^2·x)^2η
= (g_x_0,θ(ρ)+ĝ_x_0,θ(ρ))-(g_x_0,θ(σ)+ĝ_x_0,θ(σ)),
where
g_x_0,θ(r)
g_x_0(r)-cosθη(B_r(x_0))/πr^2,
ĝ_x_,θ(r)
ĝ_x_0(r)-cosθx_0^2η(B̂_r(x_0))/πr^2
+cosθx_0^2/πr^2∫_B̂_r(x_0)x-ξ(x_0)^2η-cosθη(B̂_r(x_0))/π.
And for x_0=0,
1/π∫_B_ρ(0)∖B_σ(0)1/4H⃗+1/πx^⊥/x^2^2μ
=(g_0(ρ)+ĝ_0,θ(ρ))-(g_0(σ)+ĝ_0,θ(σ)),
where
ĝ_0,θ(r)
-min(r^-2,1)/2πsinθγ(^2).
For every 0<σ<ρ<∞, we continue to use the Lipschitz cut-off function l defined in the proof of Theorem <ref>.
For x_0∈^3, let
X_1(x)
l(x-x_0)(x-x_0).
Case 1. x_0≠0.
Define
X_2(x)
((1/x-ξ(x_0)_σ/x_0)^2-x_0^2/ρ^2)_+(x-ξ(x_0))
+(min(x_0x-ξ(x_0),ρ)^2/ρ^2-min(x_0x-ξ(x_0),σ)^2/σ^2)x,
and
XX_1+X_2.
As verified in <cit.>*(7), X is an admissible vector field to test (<ref>) for a.e. 0<σ<ρ<∞.
Moreover, the terms ∫_^3 div_ Xμ and -∫_^3H⃗· Xμ are explicitly computed. That is, if θ=π/2, testing (<ref>) with such X, one gets exactly (<ref>).
Therefore for the case θ∈(π/2,π), it suffices to compute
-cosθ∫_^2div_^2Xη.
To this end, we consider the following decomposition of ^3:
_1{B_σ(x_0),B_ρ(x_0)∖B_σ(x_0),^3∖B_ρ(x_0)},
_2{B̂_σ(x_0),B̂_ρ(x_0)∖B̂_σ(x_0),^3∖B̂_ρ(x_0)},
and we shall compute
-cosθ∫_Adiv_^2X_iη
for all sets A∈_i, i=1,2, separately.
For X_1, a direct computation shows that
X_1
=
(1/σ^2-1/ρ^2)Id, 0≤x-x_0≤σ,
(1/x-x_0^2-1/ρ^2)Id-2x-x_0/x-x_0^2⊗x-x_0/x-x_0^2, σ<x-x_0≤ρ
0, ρ<x-x_0,
so that on S^2,
div_^2X_1(x)
= divX_1(x)-X_1[x]·x
=
2/σ^2-2/ρ^2, 0≤x-x_0≤σ,
-2/ρ^2+2(x-x_0/x-x_0^2·x)^2, σ<x-x_0≤ρ,
0 ρ<x-x_0,
and it is easy to deduce that
-cosθ∫_^2div_^2X_1η
= -cosθ(2/σ^2η(B_σ(x_0))-2/ρ^2η(B_ρ(x_0))+2∫_B_ρ(x_0)∖B_σ(x_0)(x-x_0/x-x_0^2·x)^2η).
For X_2, we may compute directly to find:
As 0≤x-ξ(x_0)≤σ/x_0,
X_2
=
x_0^2(1/σ^2-1/ρ^2)(x-ξ(x_0))-x_0^2x-ξ(x_0)^2(1/σ^2-1/ρ^2)x;
as σ/x_0<x-ξ(x_0)≤ρ/x_0,
X_2
=(1/x-ξ(x_0)^2-x_0^2/ρ^2)(x-ξ(x_0))-x+x_0^2x-ξ(x_0)^2/ρ^2x;
and X_2≡0 as ρ/x_0<x-ξ(x_0).
Thus X_2 vanishes on ^3∖B̂_ρ(x_0); as 0<x-ξ(x_0)≤σ/x_0,
X_2
=
(1/σ^2-1/ρ^2)x_0^2{(1-x-ξ(x_0)^2)Id-2x⊗(x-ξ(x_0))},
and as σ/x_0≤x-ξ(x_0)≤ρ/x_0,
X_2
= (1/x-ξ(x_0)^2-x_0^2/ρ^2)Id-2x-ξ(x_0)/x-ξ(x_0)^2⊗x-ξ(x_0)/x-ξ(x_0)^2
-(1-x_0^2x-ξ(x_0)^2/ρ^2)Id
+2x_0^2/ρ^2x⊗(x-ξ(x_0).
Therefore it is not difficult to compute that: as 0≤x-ξ(x_0)≤σ/x_0,
div_^2X_2(x)
=
2x_0^2(1-x-ξ(x_0)^2)(1/σ^2-1/ρ^2),
and as σ/x_0<x-ξ(x_0)<ρ/x_0,
div_^2X_2(x)
=-2x_0^2(1-x-ξ(x_0)^2)/ρ^2-2+2(x-ξ(x_0)/x-ξ(x_0)^2·x)^2.
It follows that
-cosθ∫_^2div_^2X_2η
= -cosθ[(2x_0^2/σ^2η(B̂_σ(x_0))-2x_0^2/σ^2∫_B̂_σ(x_0)x-ξ(x_0)^2η+2η(B̂_σ(x_0)))
-(2x_0^2/ρ^2η(B̂_ρ(x_0))-2x_0^2/ρ^2∫_B̂_ρ(x_0)x-ξ(x_0)^2η+2η(B̂_ρ(x_0)))
+2∫_B̂_ρ(x_0)∖B̂_σ(x_0)(x-ξ(x_0)/x-ξ(x_0)^2·x)^2η].
Recall that if θ=π/2, testing (<ref>) with such X, one gets exactly (<ref>).
Now for θ∈(π/2,π), taking (<ref>) and (<ref>) into consideration, we thus obtain (<ref>) for a.e. σ and ρ, and an approximation argument shows that this indeed holds for every σ and ρ.
Case 2. x_0=0.
We continue to use X_1 defined in Case 1 (with x_0=0) and define
X_2(x)
(min(1,ρ)^2/ρ^2-min(1,σ)^2/σ^2)x,
X
X_1+X_2.
A direct computation shows that X is an admissible vector field to test (<ref>).
Indeed, we have X=X_1+X_2≡0 on ^2 for every 0<σ<ρ<∞.
Thus
-cosθ∫_^2div_^2Xη=0.
Recall that as θ=π/2, testing (<ref>) with such X one gets exactly (<ref>), therefore for θ∈(π/2,π), we shall get the same identity.
Moreover,
thanks to (<ref>) and invoking the definition of ĝ_0, we may rearrange this and deduce (<ref>) as desired.
Given θ∈[π/2,π). For every x_0∈^3, the tilde-density
Θ̃^2(μ-cosθη,x_0)
lim_r↘0((μ-cosθη(B_r(x_0))/πr^2+(μ-cosθη)(B̂_r(x_0))/πx_0^-2r^2), x_0≠0,
lim_r↘0μ(B_r(0))/πr^2, x_0=0,
exists.
Moreover, the function x↦Θ̃(μ-cosθη,x) is upper semi-continuous in ^3.
We prove for the first assertion:
Case 1. x_0≠0.
Define
R_x_0(r)
1/2πr^2∫_B_r(x_0)H⃗·(x-x_0)μ+x_0^2/2πr^2∫_B̂_r(x_0)H⃗·(x-ξ(x_0))μ
-x_0^2/πr^2∫_B̂_r(x_0)(x-ξ(x_0)^2+(x-ξ(x_0))^T·x)μ
-x_0^2/2πr^2∫_B̂_r(x_0)H⃗·(x-ξ(x_0)^2x)μ,
R_x_0,θ(r)
R_x_0(r)+cosθx_0^2/πr^2∫_B̂_r(x_0)x-ξ(x_0)^2η-cosθη(B̂_r(x_0))/π
+1/2π∫_B̂_r(x_0)H⃗·xμ+μ(B̂_r(x_0))/π,
then
G_x_0,θ(r)
(μ-cosθη)(B_r(x_0))/πr^2+x_0^2(μ-cosθη)(B̂_r(x_0))/πr^2
+1/16π∫_B_r(x_0)H⃗^2μ+1/16π∫_B̂_r(x_0)H⃗^2μ+R_x_0,θ(r)
is monotonically non-decreasing
thanks to (<ref>) and the fact that θ∈[π/2,π).
Therefore
lim_r0^+G_x_0,θ(r)
exists.
Now we estimate with Hölder inequality:
R_x_0,θ(r)
≤ (μ(B_r(x_0))/πr^2)^1/2
(1/4π∫_B_r(x_0)H⃗^2μ)^1/2
+(μ(B̂_r(x_0))/πx_0^-2r^2)^1/2
(1/4π∫_B̂_r(x_0)H⃗^2μ)^1/2+μ(B̂_r(x_0))/π
+(μ(B̂_r(x_0))/πx_0^-2r^2)^1/2(μ(B̂_r(x_0))/π)^1/2+(μ(B̂_r(x_0))/π)^1/2(1/4π∫_B̂_r(x_0)H⃗^2μ)^1/2
-2cosθη(B̂_r(x_0))/π+1/2π∫_B̂_r(x_0)H⃗μ+μ(B̂_r(x_0))/π,
where we have used the fact that sptμ⊂^3 so that x≤1.
Moreover, for 1/8<<1/4, Young's inequality gives
R_x_0,θ(r)
≤ μ(B_r(x_0))/πr^2+1/16π∫_B_r(x_0)H⃗^2μ
+μ(B̂_r(x_0))/πx_0^-2 r^2+1/16π∫_B̂_r(x_0)H⃗^2μ
+μ(B̂_r(x_0))/πx_0^-2r^2+1/4μ(B̂_r(x_0))/π+μ(B̂_r(x_0))/4π+1/4π∫_B̂_r(x_0)H⃗^2μ
+2(μ-cosθη)(B̂_r(x_0))/π+1/2π∫_B̂_r(x_0)H⃗μ
≤ 2(μ(B_r(x_0))/πr^2+μ(B̂_r(x_0))/πx_0^-2r^2)+^-1/16π(∫_B_r(x_0)∪B̂_r(x_0)H⃗^2μ)
+11+^-1/4π(μ-cosθη)(B̂_r(x_0))+3/8π∫_B̂_r(x_0)H⃗^2μ,
where we have used again θ∈[π/2,π) and the fact that
1/2π∫_B̂_r(x_0)H⃗μ≤μ(B̂_r(x_0))/2π+1/8π∫_B̂_r(x_0)H⃗^2μ.
By virtue of the monotonicity of G_x_0,θ(r), we obtain for 0<σ<ρ<∞
(μ-cosθη)(B_σ(x_0))/πσ^2+(μ-cosθη)(B̂_σ(x_0))/πx_0^-2σ^2
≤ (μ-cosθη)(B_σ(x_0))/πσ^2+(μ-cosθη)(B̂_σ(x_0))/πx_0^-2σ^2
+1/16π∫_B_ρ(x_0)H⃗^2μ+1/16π∫_B̂_ρ(x_0)H⃗^2μ+R_x_0,θ(ρ)+R_x_0,θ(σ)
≤ (1+2)((μ-cosθη)(B_ρ(x_0))/πρ^2+(μ-cosθη)(B̂_ρ(x_0))/πx_0^-2ρ^2)
+1+2^-1/16π∫_B_ρ(x_0)∪B̂_r(x_0)H⃗^2μ+11+^-1/2π(μ-cosθη)(^3)
+3/4π∫_^3H⃗^2μ+2((μ-cosθη)(B_σ(x_0))/πσ^2+(μ-cosθη)(B̂_σ(x_0))/πx_0^-2σ^2).
In particular, since 1/8≤≤1/4, this yields for a fixed R>0 that
(μ-cosθη)(B_r(x_0))/πr^2+(μ-cosθη)(B̂_r(x_0))/πx_0^-2r^2
≤ 3((μ-cosθη)(B_R(x_0))/πR^2+(μ-cosθη)(B̂_R(x_0))/πx_0^-2R^2)
+23/4π∫_^3H⃗^2μ+19/π(μ-cosθη)(^3)<∞
for every 0<r<R.
This, in conjunction with (<ref>), yields
lim_r0^+R_x_0,θ(r)=0,
and also implies that the tilde-density Θ̃(μ-cosθη,x_0) exists. Consequently
Θ̃(μ-cosθη,x_0)
≤((μ-cosθη)(B_r(x_0))/πr^2+(μ-cosθη)(B̂_r(x_0))/πx_0^-2r^2)
+1/16π∫_B_r(x_0)H⃗^2μ+1/16π∫_B̂_r(x_0)H⃗^2μ+R_θ,x_0(ρ).
Case 2. x_0=0.
Define
R_0,θ(r)
1/2πr^2∫_B_r(0)H⃗·xμ,
then
G_0,θ(r)
μ(B_r(0))/πr^2+1/16π∫_B_r(0)H⃗^2μ-min(r^-2,1)/2πsinθγ(^2)
+R_0,θ(r)
is monotonically non-decreasing thanks to (<ref>), thus
lim_r0^+G_0,θ(r)
exists.
Using Hölder inequality and Young's inequality, we obtain for 1/4≤≤1/2,
R_0,θ(r)
≤ (μ(B_r(0))/πr^2)^1/2(1/4π∫_B_r(0)H⃗^2μ)^1/2
≤ μ(B_r(0))/πr^2+1/16π∫_B_r(0)H⃗^2μ.
By virtue of the monotonicity of G_0,θ(r), we obtain for 0<σ<ρ≤1
μ(B_σ(0))/πσ^2
≤ μ(B_ρ(0))/πρ^2+1/16π∫_B_ρ(0)H⃗^2μ+R_0,θ(ρ)+R_0,θ(σ)
≤ (1+)μ(B_ρ(0))/πρ^2+1+2^-1/16π∫_B_ρ(0)H⃗^2μ+μ(B_σ(0))/πσ^2,
where we have used that ĝ_0,θ(r)≡-sinθ/2πγ(^2) for every 0<r≤1 in the first inequality.
In particular, since 1/4≤≤1/2 this yields for any 0<r<1:
μ(B_r(0))/πr^2
≤3μ(B_1(0))/π+9/8π∫_^3H⃗^2μ<∞,
which, together with (<ref>), yields
lim_r0^+R_0,θ(r)=0,
and also implies that the tilde-density Θ̃(μ-cosθη,0) exists.
Consequently for 0<r<1,
Θ̃(μ,0)
≤μ(B_r(0))/πr^2+1/16π∫_B_r(0)H⃗^2μ+R_0,θ(r).
Now we verify the upper semi-continuity by definition,
for a sequence of points x_j x_0 and for 0<ρ<1/2, we have
(μ-cosθ)(B_ρ(x_0))/πρ^2+(μ-cosθ)(B̂_ρ(x_0))/πx_0^-2ρ^2
≥ lim sup_j∞((μ-cosθ)(B_ρ(x_j))/πρ^2+(μ-cosθ)(B̂_ρ(x_j))/πx_j^-2ρ^2)
≥ lim sup_j∞(Θ̃(μ-cosθη,x_j)-1/16π(∫_B_ρ(x_j)H⃗^2μ+∫_B̂_ρ(x_j)H⃗^2μ)-R_x_j,θ(ρ))
≥ lim sup_j∞Θ̃(μ-cosθη,x_j)-C((μ-cosθη)(B_1/2(x_0)∪B̂_1/2(x_0))/π(1/2)^2
+∫_^3H⃗^2μ+(μ-cosθη)(^3))
·(H⃗_L^2(B_2ρ(x_0))+(μ(B̂_ρ(x_0))/π)^1/2)
-2(μ-cosθη)(B̂_ρ(x_0))/π,
where we have used (<ref>), (<ref>) for the last inequality, here we interpret B̂_r(0)=∅ and (μ-cosθ)(B̂_ρ(0))/π0^-2ρ^2=0.
Letting ρ0^+, this gives
Θ̃(μ-cosθη,x_0)≥lim sup_j∞Θ̃(μ-cosθη,x_j),
which completes the proof of the second assertion.
Since sptμ and sptη are compact, we see from the definition that
lim_r∞R_x_0,θ(r)=1/2π∫_^3H⃗·xμ+(μ-cosθη)(^3)/π, x_0≠0,
lim_r∞R_0,θ(r)=0,
and obtain from the Simon-type monotonicity formulas (letting σ0^+,ρ∞)
1/π∫_^31/4H⃗+(x-x_0)^⊥/x-x_0^2^2+1/4H⃗+(x-ξ(x_0))^⊥/x-ξ(x_0)^2^2μ
-cosθ/π∫_^3(x-x_0/x-x_0^2·x)^2+(x-ξ(x_0)/x-ξ(x_0)^2·x)^2η
= lim_ρ∞G_x_0,θ(ρ)-lim_σ0^+G_x_0,θ(σ)
= 1/8π∫_^3H⃗^2μ+1/2π∫_^3H⃗·xμ+(μ-cosθη)(^3)/π-Θ̃(μ-cosθη,x_0)
= 1/8π∫_^3H⃗^2μ+sinθγ(^2)-2cosθη(^3)/2π-Θ̃(μ-cosθη,x_0)
for x_0≠0, and for x_0=0:
1/π∫_^31/4H⃗+x^⊥/x^2^2μ
=1/16π∫_^3H⃗^2μ+sinθγ(^2)/2π-Θ̃(μ,0).
§.§ Applications of Eqn. <ref>
As a by-product of the monotonicity formula, we may establish the lower bound for the Willmore functional, which is conformal invariant and was proved in the previous Section. Moreover, we may obtain the optimal area estimate for minimal capillary surfaces in the unit ball in a rather direct way, which needs not go through the
Willmore functional.
In the case that x_0∈𝕊^2, we clearly have ξ(x_0)=x_0.
Besides, for any x∈𝕊^2, there holds |x-x_0|^4=4(1-(x· x_0))^2, and hence a direct computation shows that
(x-x_0/|x-x_0|^2·x)^2+(x-ξ(x_0)/|x-ξ(x_0)|^2·x)^2 = 1/2 on 𝕊^2.
Taking this into account, the Simon-type monotonicity formula (<ref>) then reads
1/π∫_^31/4H⃗+(x-x_0)^⊥/x-x_0^2^2+1/4H⃗+(x-ξ(x_0))^⊥/x-ξ(x_0)^2^2μ
=1/8π∫_^3H⃗^2μ+sinθγ(^2)-cosθη(^3)/2π-Θ̃(μ-cosθη,x_0).
Now, since F: Σ→𝔹^3 is a capillary minimal immersion, and also recalling (<ref>), we obtain
1/π∫_𝔹^3|(x-x_0)^⊥/|x-x_0|^2|^2+|(x-ξ(x_0))^⊥/|x-ξ(x_0)|^2|^2 dμ = (2|Σ|-cosθ|T|)/2π - Θ̃(μ-cosθη,x_0).
As in (<ref>), we have Θ̃(μ-cosθη,x_0)=(1-cosθ) N(x_0), where N(x_0) = ℋ^0(F^-1(x_0)).
Theorem <ref> then follows.
alpha
|
http://arxiv.org/abs/2409.02588v1 | 20240904101417 | Multiview Random Vector Functional Link Network for Predicting DNA-Binding Proteins | [
"A. Quadir",
"M. Sajid",
"M. Tanveer"
] | cs.LG | [
"cs.LG",
"q-bio.BM"
] |
Multiview Random Vector Functional Link Network for Predicting DNA-Binding Proteins
A. Quadir, M. Sajid, M. Tanveer
September 4, 2024
====================================================================================
§ ABSTRACT
The identification of DNA-binding proteins (DBPs) is a critical task due to their significant impact on various biological activities. Understanding the mechanisms underlying protein-DNA interactions is essential for elucidating various life activities. In recent years, machine learning-based models have been prominently utilized for DBP prediction.
In this paper, to predict DBPs, we propose a novel framework termed a multiview random vector functional link (MvRVFL) network, which fuses neural network architecture with multiview learning.
The proposed MvRVFL model combines the benefits of late and early fusion, allowing for distinct regularization parameters across different views while leveraging a closed-form solution to determine unknown parameters efficiently.
The primal objective function incorporates a coupling term aimed at minimizing a composite of errors stemming from all views.
From each of the three protein views of the DBP datasets, we extract five features. These features are then fused together by incorporating a hidden feature during the model training process. The performance of the proposed MvRVFL model on the DBP dataset surpasses that of baseline models, demonstrating its superior effectiveness. Furthermore, we extend our assessment to the UCI, KEEL, AwA, and Corel5k datasets, to establish the practicality of the proposed models. The consistency error bound, the generalization error bound, and empirical findings, coupled with rigorous statistical analyses, confirm the superior generalization capabilities of the MvRVFL model compared to the baseline models.
Keywords: Multiview learning, Support vector machine, Random vector functional link network, Extreme learning machine, DNA-binding protein.
§ INTRODUCTION
The protein capable of interacting with DNA is known as a DNA-binding protein (DBP) <cit.>. The DBP plays a crucial role in numerous vital biological processes, including transcriptional regulation, DNA replication and repair, cellular development, and chromatin organization <cit.>. Therefore, precise prediction of DBPs is of considerable significance for proteome annotation as well as endeavors in synthetic biology. To identify DBPs, extensive work has been carried out in wet lab settings, including techniques such as X-ray crystallography <cit.>, genetic analysis <cit.>, and chromatin immunoprecipitation on microarrays <cit.>. While results obtained from wet-lab methods are likely the most reliable, it's important to note that this approach is also associated with significantly high time and labor costs. In contrast to wet-lab methods, computational approaches can substantially decrease resource requirements and expedite the identification of DBPs. Furthermore, in the postgenomic era, there's an escalating demand for the development of efficient and swift computational methods for accurately identifying DBPs, thereby underscoring its increasing significance in the field of bioinformatics.
Currently, computational methods are extensively employed for predicting DBPs <cit.>. These methods are typically categorized into two groups: structure-based and sequence-based methods. Structure-based methods necessitate information obtained from the three-dimensional structure of the protein under examination. In the early stages, structure-based methods excelled in the domain of DBP prediction, exemplified by models like LBi-DBP <cit.>, StackDPP <cit.>, DBPboost <cit.>, and ULDNA <cit.>. Accurately representing a protein sequence as a vector is widely recognized as one of the sequence-based methods' most essential and challenging tasks <cit.>. DNABinder <cit.> uses evolutionary information encoded in a position-specific scoring matrix (PSSM), generated via the PSI-BLAST multiple sequence alignment tool <cit.>. A support vector machine (SVM) is employed for classification, using these features as input, marking the first instance of this approach being used to identify DNA-binding proteins (DBPs). In iDNA-Prot <cit.>, pseudo amino acid composition (PseAAC) features <cit.>, extracted from protein sequences using the grey model, are amalgamated with random forest (RF) for the identification of DBPs. iDNAProt-ES <cit.> utilizes evolutionary data like PSSM alongside structural information predicted by SPIDER2 <cit.> to characterize the features of a given protein. These features are trained to identify DBPs using SVM with a linear kernel. DBP datasets, derived from various sources or views, offer complementary information essential for improving task performance. Each view contributes unique perspectives and features that, when combined, enhance the overall understanding and efficiency of the model. Nevertheless, existing multiview learning approaches often necessitate complex architectures or extensive preprocessing to manage these diverse views effectively <cit.>. The growing complexity of data from different sources poses significant challenges in extracting meaningful insights and achieving high performance. Multiview information fusion has gained widespread use in the field of bioinformatics in recent years <cit.>. The fuzzy kernel ridge regression model based on multi-view sequence features (FKRR-MVSF) <cit.> is proposed for identifying DNA-binding proteins. In FKRR-MVSF, the initial step is to extract multi-view sequence features from the protein sequences. Then, a multiple kernel learning (MKL) algorithm is used to combine these multiple features. In MSFBinder <cit.>, a stacking framework is proposed to predict DBPs by combining features from multiple views. The multi-view hypergraph restricted kernel machines (MV-H-RKM) <cit.> model is proposed to extract multiple features from protein sequences. These features are connected via a common hidden feature, and multi-hypergraph regularization is applied to merge the multi-view features, maintaining structural consistency between the original and hidden features.
Despite the extensive use of hyperplane-based classifiers like SVMs and their variants in predicting DNA-binding proteins, a significant gap remains in exploring the potential of shallow neural networks for this task. Shallow neural networks, with their ability to learn complex data representations and capture non-linear relationships through hidden layers, present a compelling alternative to traditional machine learning models. However, conventional neural network models often face challenges like slow convergence, difficulties with local minima, and sensitivity to learning rates and initialization points, resulting in suboptimal outcomes. This challenge underscores the need for innovative yet simple approaches that can effectively manage diverse data types and leverage their full potential. At this juncture, the random vector functional link (RVFL) network emerges as a promising solution. Known for its simplicity and effectiveness across various machine learning tasks, RVFL can bridge the gap left by traditional neural networks.
Building on the previous discussion of MVL and the advantages of RVFL over conventional ANNs and ML models, we see a clear path forward in fusing these methods. The primary reason for this fusion is that the resulting model will be simple, scalable, efficient, and highly effective for DNA-binding protein prediction. Hence, we propose a novel multiview random vector functional link network (MvRVFL). MvRVFL adopts the framework of artificial neural networks combined with MVL, mirroring the structure of RVFL. To ensure the proposed model remains simple and effective, we incorporate two views at a time in the MvRVFL model. This approach strikes a balance between complexity and performance, leveraging complementary information from two views to enhance effectiveness while maintaining simplicity.
The following are the key features of the proposed MvRVFL model:
* Integrating features from diverse perspectives enhances the proposed MvRVFL model's capability to capture intricate patterns and relationships within the data, consequently enhancing the overall generalization performance of the model.
* In the primary objective function of the proposed MvRVFL model, we encompass a coupling term, aiming to minimize a composite of errors originating from all views. By amalgamating the advantages of both early and late fusion, the model can assimilate information from all views during the training phase, yet still permits some flexibility to model the views distinctly.
* The integration of multiple views in the proposed MvRVFL model helps to mitigate the impact of missing or noisy data in any single view (a common problem in biomedical datasets <cit.>), thereby enhancing the model's robustness and reliability.
The paper's main highlights are as follows:
* We propose a novel multiview random vector functional link (MvRVFL) network. The proposed MvRVFL leverages multiple distinct feature sets using the MVL and the simplicity cum effectiveness of RVFL to predict DBP.
* We provide rigorous mathematical frameworks for MvRVFL, leveraging the RVFL topology. We utilized the Rademacher complexity theory to examine the consistency error bound and the generalization error bound of the proposed MvRVFL.
* Following <cit.>, in this work, we partition each protein sequence into five equal-length segments, creating a total of 14 continuous or discontinuous regions. Each region is characterized using 63-dimensional features (7 + 21 + 35), leading to a comprehensive 882-dimensional (63 x 14) feature representation of the entire protein sequence. We extract five types of features from the sequences: NMBAC, MCD, PSSM-AB, PSSM-DWT, and PsePSSM.
Experiments showcased that our method surpasses traditional models in effectively identifying DNA-binding proteins.
* Moreover, to test the generalization performance of the proposed model, we further tested it over the UCI, KEEL, AwA, and Corel5k datasets from various domains. The empirical results illustrate that the proposed MvRVFL model exhibits superior performance compared to numerous baseline models.
The remaining structure of the paper is organized as follows. We discuss the literature on the baseline model in Section <ref>. Section <ref>, provides an overview of related work. Section <ref> covers the method for extracting features from DNA-binding protein sequences. We present the detailed mathematical formulation of the proposed MvRVFL model in Section <ref>. Experimental results and analyses of proposed and existing models are discussed in Section <ref>. Finally, Section <ref> presents the conclusions and potential future research directions.
§ BACKGROUND
In this section, we discuss the background of DNA-binding proteins, artificial neural networks, and multi-view learning in detail.
First, we explore the importance and function of DNA-binding proteins, which play a crucial role in various biological processes by interacting with DNA sequences. Next, we delve into the fundamentals of artificial neural networks (ANNs), a type of machine-learning model inspired by the human brain, and examine their architecture, learning mechanisms, and applications. Finally, we cover the concept of multi-view learning, a technique that integrates multiple sets of features or perspectives to improve the performance of the predictive model.
§.§ DNA binding protein
A DBP is a protein that can physically interact with DNA through its internal binding domain. As one of the most common intracellular proteins, DBPs play a crucial role in influencing genome function by participating in transcription, DNA replication, and DNA repair <cit.>. The active role of DBPs in cellular processes underscores their importance. As new proteins continue to be discovered in the post-genomic era, the identification of DBPs across various sequences has gained considerable attention. How can DBPs be identified among the multitude of newly discovered proteins? Experimental detection methods, which are resource-intensive, are not as cost-effective as computational approaches. As a result, recent years have seen the development of numerous computational predictive models for identifying DBPs. <cit.> introduced iDNA-Prot, which utilizes the pseudo amino acid composition (PseACC) <cit.> for feature extraction and applies a random forest (RF) classifier. Subsequently, Liu et al. developed three successive predictors: iDNA-Prot|dis <cit.>, PseDNA-Pro <cit.>, and iDNAPro-PseAAC <cit.>. These predictors combined features from various extraction algorithms and used the integrated features as input for SVM to make predictions. Similarly, StackPDB predicts DNA-binding proteins through a three-step process: feature extraction, feature selection, and model construction are the key stages. StackPDB extracts features from protein sequences based on amino acid composition and evolutionary information. Evolutionary information is captured using the position-specific scoring matrix (PSSM), which is generated by the PSI-BLAST <cit.> program. In the StackPDB method, PsePSSM, PSSM-TPC, EDT, and RPT are employed for extracting features from PSSM. They subsequently used extreme gradient boosting combined with recursive feature elimination to select the most effective features. Finally, the selected optimal feature subset is input into a stacked ensemble classifier comprising XGBoost, SVM, and LightGBM. Previous studies <cit.> demonstrate that protein sequences can be characterized through different representations, including amino acid composition and PSSM. As fusion methods can combine information from multiple representations to improve model performance, various fusion techniques are employed in the identification of DBPs.
Multiple kernel learning (MKL) is a popular early fusion technique that focuses on learning the optimal weights for kernels. The optimal kernel is created by linearly combining multiple base kernels according to their respective weights. CKA-MKL <cit.> seeks to maximize the cosine similarity between the optimal kernel and the ideal kernel. Additionally, CKA-MKL includes a Laplacian term associated with the weights in the objective function to avoid extreme scenarios. However, CKA-MKL primarily emphasizes global kernel alignment and does not account for the differences between local samples. Therefore, HKAM-MKL <cit.> strives to maximize both local and global kernel alignment scores. Both CKA-MKL and HKAM-MKL utilize SVM as their classifier. In contrast, HSIC-MKL <cit.> aims to maximize the independence between trained samples and labels within the Reproducing Kernel Hilbert Space (RKHS). The optimal kernel is then used as input for a hypergraph-based Laplacian SVM, an extension of the standard SVM. In contrast, CKA-MKL focuses solely on global alignment. Moreover, HKAM-MKL takes into account both global and local aspects. As a result, HKAM-MKL outperforms CKA-MKL in predicting DNA-binding proteins. Unlike the previously mentioned MKL methods, MLapSVM-LBS <cit.> integrates multiple types of information throughout the training process. It uses the multiple local behavior similarity graph as a regularization term. Given that the objective function of MLapSVM-LBS is non-convex, an alternating algorithm is applied. The key advantage of MLapSVM-LBS is its ability to fuse various sources of information during training while providing flexibility to model different views distinctly.
§.§ Artificial neural networks
Artificial neural networks (ANNs) are a category of machine learning models inspired by the neural structure of the human brain. ANNs comprise interconnected nodes, or neurons, which employ mathematical operations to process and transmit information. ANNs are engineered to identify patterns and correlations within data, leveraging this acquired knowledge to make predictions. ANNs have showcased efficacy across diverse domains, including brain age prediction <cit.>, fault diagnosis of drilling pumps <cit.>, detection of sickle cell disease <cit.>, rainfall forecasting <cit.>, diagnosis of Alzheimer’s disease <cit.>, and so on.
Conventional ANNs rely on gradient descent (GD) based iterative methods, which present several inherent challenges in parameter calculation. These include a tendency to converge to local rather than global optima, heightened sensitivity to the choice of learning rate and initial parameters, and a sluggish convergence rate.
To circumvent the limitations of GD-based neural networks, randomized neural network (RNN) <cit.> is proposed. In RNNs, certain network parameters remain fixed, with only the parameters of the output layer being computed using a closed-form solution throughout the training phase <cit.>. The random vector functional link (RVFL) neural network <cit.> is a shallow feed-forward RNN distinguished by randomly initialized hidden layer parameters, which remain constant during the training process. RVFL stands out among other RNNs because of its direct connections between output and input layers. These direct links serve as a type of implicit regularization <cit.> within RVFL, contributing to enhanced learning capabilities. Through methods such as the least-squares technique or Pseudo-inverse, RVFL provides a closed-form solution for optimizing output parameters. This characteristic leads to efficient learning with fewer adjustable parameters. For more details, interested readers can refer to the comprehensive review paper of RVFL <cit.>.
§.§ Multiview learning
Multiview learning (MVL), as a prominent research field, showcases the ability to substantially improve generalization performance across diverse learning tasks by integrating multiple feature sets that encompass complementary information <cit.>. MVL emerges in response to the common occurrence of diverse types of data in practical scenarios. Consider an image, which can be characterized by its color or texture features, and a person, who can be identified through facial characteristics or fingerprints. In real-world situations, samples from different perspectives might exist in separate spaces or display notably different distributions because of the significant variation between views. However, the naive methods address this data type by employing a cascade strategy <cit.>. This entails transforming the multiview data into new single-view data by consolidating the heterogeneous feature space into a homogeneous feature space. However, the cascading strategy overlooks the unique statistical properties of each view and is plagued by the curse of dimensionality. MVL techniques <cit.> are applied across various tasks, including transfer learning <cit.>, clustering <cit.>, dimensionality reduction <cit.>, and classification <cit.>. SVM-2K, a two-view SVM learning model that combines SVMs with the kernel canonical correlation analysis (KCCA) distance minimization version, was first introduced by <cit.>. Using the consensus principle, this method makes use of two points of view. Multiview twin SVM (MvTSVM) <cit.> represents the initial endeavor to integrate a best-fitting hyperplane classifier with MVL. In recent years, various variants of MvTSVM have been introduced such as multiview one-class SVM method with LUPI (MOCPIL) <cit.>, multiview large margin distribution machine (MVLDM) <cit.>, multiview restricted kernel machine (MVRKM) <cit.>, and multiview robust double-sided TSVM (MvRDTSVM) <cit.>.
§ RELATED WORK
This section begins with establishing notations and then reviews the mathematical formulation along with the solution of the RVFL network.
§.§ Notations
Consider the sample space denoted as 𝒯, which is a product of two distinct feature views, A and B, expressed as 𝒯 = 𝒯^A ×𝒯^B, where 𝒯^A ⊆ℝ^m_1, 𝒯^B ⊆ℝ^m_2, and 𝒴 = {-1, +1} denotes the label space. Here n represents the number of samples, and m_1 and m_2 denote the number of features corresponding to view A (Vw-A) and view B (Vw-B), respectively. Suppose ℋ = {(x_i^A, x_i^B, y_i) | x_i^A ∈𝒯^A, x_i^B ∈𝒯^B, y_i ∈𝒴}_i=1^n represents a two-view dataset. Let X_1 ∈ℝ^n × m_1 and X_2 ∈ℝ^n × m_2 be the input matrices of Vw-A and Vw-B, respectively, and let the one-hot encoding matrix of the labels be denoted by Y ∈ℝ^n × 2. Z_1 ∈ℝ^n × (m_1 + h_l) and Z_2 ∈ℝ^n × (m_2 + h_l) are the enhanced feature matrices, obtained by concatenating X_1 and X_2 with their hidden layer representations; the latter are computed by applying a nonlinear activation function, denoted as ϕ, to the input matrices X_1 and X_2 after transforming them with randomly initialized weights and biases. Here h_l represents the number of hidden layer nodes and (.)^t represents the transpose operator.
§.§ Random Vector Functional Link (RVFL) Network
The RVFL <cit.> is a feed-forward neural network with a single hidden layer, composed of three layers: the input layer, the hidden layer, and the output layer. The weights connecting the input and hidden layers, as well as the biases of the hidden layer, are randomly initialized and remain unchanged during the training process. The original features of the input samples are also directly connected to the output layer. The output layer weights are determined analytically, e.g., via the least squares method or the Moore-Penrose inverse. The architecture of the RVFL model is shown in Fig. <ref>.
Let T={(x_i, y_i), i=1, 2, …, n } be the training dataset, where y_i ∈{+1,-1} represents the label of x_i ∈ℝ^1 × m. Let X=(x_1^t, x_2^t, …, x_n^t)^t ∈ℝ^n × m and Y=(y_1^t, y_2^t, …, y_n^t)^t ∈ℝ^n × 2 denote the collection of all input vectors and the one-hot encoded targets, respectively. H_1 is the hidden layer matrix, obtained by projecting the input matrix with randomly initialized weights and biases and then applying the non-linear activation function ϕ. It is defined as:
H_1 = ϕ(XW_1 + b_1) ∈ℝ^n × h_l,
where W_1 ∈ℝ^m × h_l represents the weights vector which is initialized randomly, and drawn from a uniform distribution spanning [-1, 1], and b_1 ∈ℝ^n × h_l is the bias matrix. Thus, H_1 is given as:
H_1= [ ϕ(x_1w_1+b^(1)) … ϕ(x_1w_h_l+b^(h_l)); ⋮ ⋱ ⋮; ϕ(x_nw_1+b^(1)) … ϕ(x_nw_h_l+b^(h_l)) ],
where w_k ∈ℝ^m × 1 represents the k^th column vector of the weight matrix W_1, x_i ∈ℝ^1 × m denotes the i^th sample of matrix X, and b^(j) signifies the bias term of the j^th hidden node. The weights of the output layer are determined through the following matrix equation:
[ X H_1 ]W_2 = Ŷ.
Here, W_2 ∈ℝ^(m + h_l) × 2 denotes the weights matrix, which connects the input with the concatenation of the hidden nodes to the output nodes, while Ŷ represents the predicted output. The resulting optimization problem of Eq. (<ref>) is formulated as:
(W_2)_min = arg min_W_2 𝒞/2‖H_2W_2 - Y‖^2 + 1/2‖W_2‖^2,
where H_2 = [ X H_1 ]. The optimal solution of Eq. (<ref>) is given as follows:
(W_2)_min = (H_2^t H_2+1/𝒞 I)^-1H_2^tY if (m+h_l) ≤ n, and (W_2)_min = H_2^t(H_2H_2^t+1/𝒞 I)^-1Y if n<(m+h_l),
where 𝒞>0 is a tunable parameter and I represents the identity matrix of conformal dimensions.
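For concreteness, the closed-form training procedure above can be sketched in a few lines of NumPy; the sigmoid activation, the uniform [-1, 1] initialization, and the hyperparameter values below are illustrative choices rather than the exact experimental settings:

```python
import numpy as np

def train_rvfl(X, Y, h_l=128, C=1.0, seed=0):
    """Train an RVFL: fixed random hidden layer + ridge solution for W_2 (Eq. above)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W1 = rng.uniform(-1.0, 1.0, size=(m, h_l))        # fixed random input weights
    b1 = rng.uniform(-1.0, 1.0, size=(1, h_l))        # fixed random biases
    H1 = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))         # sigmoid hidden features
    H2 = np.hstack([X, H1])                           # direct link: [X  H_1]
    d = H2.shape[1]
    if d <= n:   # primal form of the closed-form solution
        W2 = np.linalg.solve(H2.T @ H2 + np.eye(d) / C, H2.T @ Y)
    else:        # dual form, cheaper when n < m + h_l
        W2 = H2.T @ np.linalg.solve(H2 @ H2.T + np.eye(n) / C, Y)
    return W1, b1, W2

def predict_rvfl(X, W1, b1, W2):
    H1 = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    scores = np.hstack([X, H1]) @ W2
    return scores.argmax(axis=1)                      # index of the one-hot output

# toy usage
X = np.random.randn(200, 10)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]                                      # one-hot targets
params = train_rvfl(X, Y)
acc = (predict_rvfl(X, *params) == y).mean()
```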
§ DATA PREPROCESSING AND FEATURE EXTRACTIONS
In this section, we begin by outlining the DBP sequence from three distinct perspectives. Next, we utilize five different algorithms to extract features from these perspectives <cit.>. Finally, we use MvRVFL to combine these features and develop the predictor for identifying DBPs. Figure <ref> illustrates the flowchart of the model construction process.
§.§ Features of DBP
In this subsection, we detail the DBP sequence from three different perspectives: physicochemical properties, evolutionary information, and amino acid composition. These views are transformed into feature matrices using various extraction algorithms: Multi-scale Continuous and Discontinuous (MCD) <cit.> for amino acid composition; Normalized Moreau-Broto Autocorrelation (NMBAC) <cit.> for physicochemical properties; and Pseudo Position-Specific Scoring Matrix (PsePSSM) <cit.>, PSSM-based Discrete Wavelet Transform (PSSM-DWT) <cit.>, and PSSM-based Average Blocks (PSSM-AB) <cit.>, for evolutionary information.
§.§ Multi-Scale Continuous and Discontinuous for Protein Sequences
Multi-scale Continuous and Discontinuous (MCD) uses multi-scale decomposition techniques to first segment the protein sequence into equal-length sections, followed by the characterization of each continuous and discontinuous region. Three descriptors are used for each region: distribution (D), composition (C), and transition (T). The following outlines the detailed process of feature extraction for each descriptor.
For distribution (D), the protein sequence is categorized into 7 different types of amino acids. The total count of amino acid occurrences in each category is represented by M_i, i ∈{1, 2, …, 7}. Subsequently, within each amino acid category, we determine the positions of the 1^st, 25^th percentile, 50^th percentile, 75^th percentile, and last amino acids of that category in the entire protein sequence. These positions are then normalized by dividing by the total sequence length L. Therefore, each amino acid category can be represented by a 5-dimensional feature vector. For the entire protein sequence, this results in a 35-dimensional feature vector.
For composition (C), the 20 standard amino acids are grouped into seven categories based on the dipoles and volumes of their side chains, with each group sharing similar characteristics. Table <ref> provides a detailed list of the amino acids in each category. Let S = {s_1, s_2, …, s_L} denote a protein sequence, where s_i represents the i^th residue and L is the total length of the sequence. Based on Table <ref>, we can recode S as a category sequence {c_1, c_2, …, c_L}, where each c_i ∈{1, 2, …, 7} is the category of residue s_i. We then calculate the proportion of residues in each category throughout the entire protein sequence. Thus, the composition (C) is represented as a 7-dimensional vector, where each dimension corresponds to one of the seven amino acid categories.
For transition (T), the entire protein sequence is divided into seven groups of amino acids according to the previously defined classification. We can calculate the frequency of transitions between different amino acid categories, which will be used to characterize the transition (T) properties of the protein. For transition (T), it can be represented as a 21-dimensional vector, accounting for all possible transformations between the seven amino acid categories.
In this study, each protein sequence is partitioned into 5 uniform-length segments, creating a total of 14 continuous or discontinuous regions. Each region is characterized by the 63-dimensional features mentioned earlier (7 for composition, 21 for transition, and 35 for distribution). Consequently, the entire protein sequence is described using an 882-dimensional feature vector (63 × 14).
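The per-region descriptor described above can be sketched as follows; note that the seven-group partition used here is only a placeholder (the actual grouping of the 20 amino acids is given in Table <ref>), and the quartile-position convention is one reasonable reading of the distribution descriptor:

```python
import numpy as np

# Illustrative 7-group partition of the 20 amino acids; the paper's grouping is in Table <ref>,
# so this dictionary is an assumption used only for demonstration.
GROUPS = {1: "AGV", 2: "ILFP", 3: "YMTS", 4: "HNQW", 5: "RK", 6: "DE", 7: "C"}
AA2GRP = {aa: g for g, aas in GROUPS.items() for aa in aas}

def ctd_descriptor(region):
    """63-dim composition/transition/distribution descriptor of one sequence region."""
    cats = [AA2GRP.get(aa, 1) for aa in region]       # residue -> group index (unknowns -> 1)
    L = len(cats)
    # Composition (7): fraction of residues in each group.
    comp = [cats.count(g) / L for g in range(1, 8)]
    # Transition (21): frequency of adjacent residue pairs from two different groups.
    trans = []
    for a in range(1, 8):
        for b in range(a + 1, 8):
            t = sum(1 for i in range(L - 1) if {cats[i], cats[i + 1]} == {a, b})
            trans.append(t / max(L - 1, 1))
    # Distribution (35): normalized positions of the 1st, 25%, 50%, 75%, and last residue per group.
    dist = []
    for g in range(1, 8):
        pos = [i + 1 for i, c in enumerate(cats) if c == g]
        if not pos:
            dist.extend([0.0] * 5)
        else:
            picks = [pos[0], pos[len(pos)//4], pos[len(pos)//2], pos[3*len(pos)//4], pos[-1]]
            dist.extend(p / L for p in picks)
    return np.array(comp + trans + dist)              # 7 + 21 + 35 = 63

# The full MCD feature concatenates this descriptor over the 14 continuous/discontinuous
# regions built from 5 equal-length segments, giving 63 x 14 = 882 dimensions.
feat63 = ctd_descriptor("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```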
§.§ Normalized Moreau-Broto Autocorrelation for Protein Sequences
Normalized Moreau-Broto Autocorrelation (NMBAC) is a feature based on the physicochemical properties of amino acids. We use the same approach <cit.> to extract features from protein sequences. In this study, six physicochemical properties of amino acids are taken into account: hydrophobicity (H), solvent accessible surface area (SASA), polarizability (P1), polarity (P2), volume of side chains (VSC), and net charge index of side chains (NCISC). The values for these six physicochemical properties for each amino acid are listed in Table <ref>. We normalize these values for each physicochemical property using the following formula:
M̂_i, j = (M_i, j - M_j)/S_j,
where M_i,j represents the value of the i^th amino acid for physicochemical property j, and M_j is the mean value of the 20 amino acids for property j. S_j is the standard deviation of the physicochemical property j among the 20 amino acids. Hence, NMBAC can be computed using the following formula:
P(lg, j) = 1/(L-lg)∑_i=1^L-lg(M̂_i, j×M̂_i+lg, j),
where j ∈{1, 2, …, 6} represents one of the six physicochemical properties, i ∈{1, 2, …, L-lg} indexes the residue position in the protein sequence, L is the length of the protein sequence, and lg is the distance between residues. We use the same lg values as those in <cit.>, ranging from 1 to 30. Consequently, this results in a 180-dimensional vector (30 × 6). Furthermore, we incorporate the frequencies of the 20 amino acids found in the protein sequence into the NMBAC feature. As a result, the final NMBAC feature vector is 200-dimensional (180 + 20).
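A minimal sketch of the NMBAC computation above is given below; the six property values per amino acid are taken from Table <ref> in practice, so the random placeholder table used here is only for illustration:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"   # 20 standard amino acids (ordering here is arbitrary)

def nmbac_features(seq, prop_table, max_lag=30):
    """NMBAC: prop_table is a (20 x 6) array of the six physicochemical property values
    (Table <ref>); placeholder values can be supplied for testing."""
    M = np.asarray(prop_table, dtype=float)
    M_hat = (M - M.mean(axis=0)) / M.std(axis=0)           # normalization of Eq. above
    idx = np.array([AA.index(a) for a in seq if a in AA])
    V = M_hat[idx]                                          # (L x 6) per-residue property values
    L = len(idx)
    feats = []
    for lag in range(1, max_lag + 1):                       # lg = 1, ..., 30
        feats.extend((V[:L - lag] * V[lag:]).sum(axis=0) / (L - lag))
    aac = np.array([(idx == k).mean() for k in range(20)])  # 20 amino-acid frequencies
    return np.concatenate([np.array(feats), aac])           # 180 + 20 = 200 dims

# toy usage with placeholder property values
rng = np.random.default_rng(0)
feat200 = nmbac_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ" * 3, rng.normal(size=(20, 6)))
```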
§.§ PSSM-Based Average Blocks for Protein Sequences
As PSSM includes evolutionary information about proteins, it has become a popular tool for various protein function prediction tasks in recent years. In this study, we extracted three features from PSSM: Pseudo Position-Specific Scoring Matrix (PsePSSM), PSSM-based Discrete Wavelet Transform (PSSM-DWT), and PSSM-based Average Blocks (PSSM-AB). The original PSSM profile, denoted as Q_PSSM, is represented as:
Q_PSSM =
[ Q_1, 1 Q_1, 2 ⋯ Q_1, 20; ⋮ ⋮ ⋱ ⋮; Q_L, 1 Q_L, 2 ⋯ Q_L, 20; ]_L × 20,
where Q_i,j represents the score for the transition from the i^th residue to the j^th residue class throughout the evolutionary process. For every protein, the PSSM feature's dimensionality is L × 20.
The PSSM is initially divided into segments by PSSM-AB, where each block covers 5% of the protein sequence. Consequently, the PSSM is divided into 20 fundamental blocks, each of which consists of 20 columns, independent of the length of the protein. The formula for extraction is as follows:
E_j = 1/A_j∑_i=1^A_jQ_i^(j),
where E_j represents the feature vector of the j^th block, with a dimension of 1 × 20. The sequence length of the j^th block is represented by A_j, and the PSSM value for the i^th residue in the j^th block, which also has a dimension of 1 × 20, is indicated by Q_i^(j). The feature vector derived from each protein sequence has a dimension of 400 (20 × 20), with a total of 20 blocks.
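The block-averaging step can be sketched as follows; splitting the rows into near-equal consecutive blocks with array_split is an assumption about how the 5% blocks are formed:

```python
import numpy as np

def pssm_avg_blocks(pssm, n_blocks=20):
    """PSSM-AB: split the (L x 20) PSSM into 20 row blocks (about 5% of the sequence each)
    and average the rows of every block, giving 20 x 20 = 400 features."""
    pssm = np.asarray(pssm, dtype=float)
    blocks = np.array_split(pssm, n_blocks, axis=0)   # near-equal consecutive blocks
    return np.concatenate([b.mean(axis=0) for b in blocks])

# toy usage with a random PSSM of a length-137 protein
feat400 = pssm_avg_blocks(np.random.randn(137, 20))
assert feat400.shape == (400,)
```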
§.§ PSSM-Based Discrete Wavelet Transform for Protein Sequences
We apply the Discrete Wavelet Transform (DWT) to extract features from the PSSM, referred to as PSSM-based DWT (PSSM-DWT). In this feature extraction method, each column of the PSSM profile of the protein serves as the input signal. Following the approach in <cit.>, we apply a 4-level discrete wavelet transform for analyzing the PSSM, as illustrated in Fig <ref>.
We use high-pass and low-pass filters to break down the approximation function at each step of the process, followed by downsampling. For every level, this method yields low-frequency coefficients L_i and high-frequency coefficients H_i, where i denotes the i^th layer. We calculate the mean, median, maximum, and minimum values for both the low-frequency and high-frequency coefficients at each layer. Furthermore, we extract additional specific information from each layer of the low-frequency component by extracting the first five discrete cosine coefficients. Thus, for a protein's PSSM profile, we obtain a 1040-dimensional feature vector ((4 + 4 + 5) × 4 × 20).
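A sketch of this per-column decomposition is given below; the wavelet family is not specified above, so "db1" is an assumption, and PyWavelets/SciPy are used only for illustration:

```python
import numpy as np
import pywt
from scipy.fft import dct

def pssm_dwt(pssm, wavelet="db1", levels=4):
    """PSSM-DWT sketch: 4-level DWT of every PSSM column; per level keep mean/median/max/min
    of the low- and high-frequency coefficients plus the first five discrete cosine
    coefficients of the low-frequency part, giving (4 + 4 + 5) * 4 * 20 = 1040 dims."""
    pssm = np.asarray(pssm, dtype=float)
    feats = []
    for col in pssm.T:                        # one input signal per residue class (20 columns)
        signal = col
        for _ in range(levels):
            cA, cD = pywt.dwt(signal, wavelet)          # low- and high-frequency coefficients
            for c in (cA, cD):
                feats.extend([c.mean(), np.median(c), c.max(), c.min()])
            feats.extend(dct(cA, norm="ortho")[:5])     # first 5 discrete cosine coefficients
            signal = cA                        # decompose the approximation at the next level
    return np.array(feats)

feat1040 = pssm_dwt(np.random.randn(137, 20))
assert feat1040.shape == (1040,)
```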
§.§ Pseudo PSSM for Protein Sequences
The dimensions of the PSSM features vary depending on the length of the protein sequence. When extracting features from the PSSM profile, the PsePSSM is often used to ensure that PSSM features have uniform dimensions and include sequence order information. The features derived from PsePSSM are organized as follows:
Q_PsePSSM = [Q_1 Q_2 … Q_20 J_1^ζ_1 J_2^ζ_1 … J_20^ζ_1 … J_1^ζ_N J_2^ζ_N … J_20^ζ_N]^T,
where
Q_j = 1/L∑_i=1^L Q_i, j, j = 1, 2, …, 20,
and
J_j^ζ_l = 1/(L - ζ_l)∑_i=1^L - ζ_l [Q_i, j - Q_(i+ζ_l),j]^2, ζ_l < L; 1 ≤ l ≤ N,
where L denotes the length of the protein sequence, l represents the distance between residues, Q_j indicates the average score of residues in the protein sequence that evolve into the j^th residue, and J_j^ζ_l reflects the PSSM scores for two residues that are adjacent with a distance of l. We adopt the same range of l values as those used in <cit.>, where l varies from 1 to 15. Thus, for a protein sequence, we generate a 320-dimensional vector (20 + 20 × 15). We can extract PSSM features with a fixed length using this technique while preserving sequence order information.
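The fixed-length PsePSSM vector can be computed as in the following short sketch:

```python
import numpy as np

def pse_pssm(pssm, n_lags=15):
    """PsePSSM: 20 column means plus, for each lag l = 1..15 and each residue class j,
    the mean squared difference of PSSM scores at distance l, giving 20 + 20*15 = 320 features."""
    Q = np.asarray(pssm, dtype=float)
    L = Q.shape[0]
    means = Q.mean(axis=0)                                   # Q_j, j = 1..20
    lagged = []
    for l in range(1, n_lags + 1):
        diff = Q[:L - l] - Q[l:]                             # Q_{i,j} - Q_{i+l,j}
        lagged.append((diff ** 2).sum(axis=0) / (L - l))     # J_j^{zeta_l}
    return np.concatenate([means] + lagged)

feat320 = pse_pssm(np.random.randn(137, 20))
assert feat320.shape == (320,)
```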
§.§ Feature Selection for Protein Sequences
In this section, we discuss feature selection and our use of the Minimum Redundancy–Maximum Relevancy (mRMR) method <cit.>. This approach ranks the importance of input features by minimizing redundancy among them and maximizing their relevance to the target. In mRMR, mutual information is utilized to compute the redundancy and relevance mentioned above. The calculations are performed as follows:
J(g, h) = ∬ p(g, h)log[p(g, h)/(p(g) p(h))] dg dh,
where g and h are two vectors, p(g) and p(h) represent the marginal probability densities of g and h respectively, and p(g, h) denotes the joint probability density of g and h. We use T to denote the full set of features, T_t to represent a sorted subset with m features, and T_n to indicate an unsorted subset with n features. Thus, the relevance L and redundancy D of features k within the subset T_n are calculated as follows:
L(k) = J(k, s),
D(k) = 1/m∑_k_i ∈ T_t J(k, k_i),
where s represents the target associated with feature k. To maximize the relevance L and minimize the redundancy D, we derive:
min_k_i ∈ T_n[D(k_i) - L(k_i)], i = 1, 2, 3, …, n.
By applying mRMR, we generate a reordered set of features, with each feature ranked according to its importance. For each feature type, we subsequently select the optimal subset to be used in further experiments. Among the five feature types used in this paper, we first ranked the importance of features within each type using mRMR. We then selected the top 1/2, 3/4, 7/8, and 15/16 of the features from the ranked list to form new feature subsets, respectively.
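A greedy reading of the criterion above can be sketched as follows; using scikit-learn's nearest-neighbor mutual-information estimators (rather than the density-based definition in the equation above) and starting from the most relevant feature are implementation assumptions:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_rank(X, y, n_select=None, seed=0):
    """Greedy mRMR ranking: at each step pick the unselected feature minimizing
    (mean MI with already-selected features) - (MI with the target)."""
    n, d = X.shape
    n_select = d if n_select is None else n_select
    relevance = mutual_info_classif(X, y, random_state=seed)        # L(k) = I(feature; target)
    selected, remaining = [], list(range(d))
    while remaining and len(selected) < n_select:
        if not selected:
            best = remaining[int(np.argmax(relevance[remaining]))]  # start from max relevance
        else:
            scores = []
            for k in remaining:
                red = mutual_info_regression(X[:, selected], X[:, k],
                                             random_state=seed).mean()  # D(k)
                scores.append(red - relevance[k])                   # minimize D - L
            best = remaining[int(np.argmin(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected                                                 # feature indices, best first

# toy usage: rank 12 synthetic features, then keep the top half as one candidate subset
X = np.random.randn(300, 12)
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)
order = mrmr_rank(X, y)
top_half = order[: len(order) // 2]
```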
§ PROPOSED MULTIVIEW RANDOM VECTOR FUNCTIONAL LINK (MVRVFL) NETWORK
This section offers an in-depth explanation of the proposed MvRVFL model. Initially, we outline the generic mathematical framework of the proposed MvRVFL model, specifically tailored to handle data originating from two distinct views. During training on one view, the influence of other views is incorporated by introducing a coupling term into the primal form of the proposed optimization problem. An intuitive illustration of the MvRVFL model is shown in Fig. <ref>. Let Z_1 and Z_2 represent the nonlinear projection of the class samples corresponding to Vw-A and Vw-B, as defined by: Z_1=[X_1 H_1] and Z_2=[X_2 H_2]. Here X_1 and X_2 represent the input matrix corresponding to Vw-A and Vw-B, and H_1 and H_2 denote the hidden layer matrix corresponding to Vw-A and Vw-B, respectively. These matrices are obtained by transforming X_1 and X_2 using randomly initialized weights and biases and then applying a nonlinear activation function. The target matrix is denoted by Y. The proposed optimization problem of MvRVFL model is articulated as follows:
min_β_1, β_2 1/2‖β_1‖^2 + θ/2‖β_2‖^2 + 𝒞_1/2‖ξ_1‖^2 + 𝒞_2/2‖ξ_2‖^2 + ρξ_1^tξ_2
s.t. Z_1β_1 - Y = ξ_1,
Z_2β_2 - Y = ξ_2.
Here, β_1 and β_2 are the output weight matrices corresponding to Vw-A and Vw-B, and ξ_1 and ξ_2 represent the error matrices corresponding to Vw-A and Vw-B, respectively. 𝒞_1, 𝒞_2, θ and ρ are the regularization parameters.
Each component of the optimization problem of MvRVFL has the following significance.
* The terms ‖β_1‖^2 and ‖β_2‖^2 are regularization components for Vw-A and Vw-B, respectively. These terms are employed to mitigate overfitting by constraining the capacities of the classifier sets for both views.
* The error variables ξ_1 and ξ_2 are pertinent to both views, enabling tolerance for misclassifications in situations of overlapping distributions.
* The primal optimization function comprises two distinct classification objectives for each view, linked by the coupling term ρξ_1^t ξ_2. Here, ρ represents an additional regularization constant, referred to as the coupling parameter. This term serves to minimize the product of the error variables for both views as well as trade-off parameters between both views.
The Lagrangian corresponding to the problem Eq. (<ref>) is given by
L = 1/2‖β_1‖^2 + θ/2‖β_2‖^2 + 𝒞_1/2‖ξ_1‖^2 + 𝒞_2/2‖ξ_2‖^2 + ρξ_1^tξ_2 - α_1^t(Z_1β_1 - Y - ξ_1) - α_2^t(Z_2β_2 - Y - ξ_2),
where α_1 ∈ℝ^n × 1 and α_2 ∈ℝ^n × 1 are the vectors of Lagrangian multipliers.
Using the Karush-Kuhn-Tucker (K.K.T.) conditions, we have
β_1 -Z_1^tα_1 = 0,
θβ_2 - Z_2^tα_2 = 0,
𝒞_1ξ_1 + ρξ_2 + α_1 = 0,
𝒞_2ξ_2 + ρξ_1 + α_2 = 0,
Z_1β_1 - Y - ξ_1 = 0,
Z_2β_2 - Y - ξ_2 = 0.
Using Eqs. (<ref>), (<ref>) and ( <ref>) in Eq. (<ref>), we get
β_1 = Z_1^tα_1,
β_1 = -Z_1^t(𝒞_1ξ_1 + ρξ_2 ),
β_1 = -Z_1^t(𝒞_1(Z_1β_1 - Y) + ρ(Z_2β_2 - Y)),
(I + 𝒞_1 Z_1^tZ_1)β_1 + ρ Z_1^tZ_2β_2 = Z_1^t(𝒞_1+ρ)Y.
Using Eqs. (<ref>), (<ref>) and (<ref>) in Eq. (<ref>), we get
ρ Z_2^tZ_1β_1 + (θ I + 𝒞_2 Z_2^t Z_2)β_2 = Z_2^t(𝒞_2+ρ)Y.
Using Eqs. (<ref>) and (<ref>), the solution of Eq. (<ref>) is given by
[ β_1; β_2 ] = [ (I + 𝒞_1 Z_1^tZ_1) ρ Z_1^tZ_2; ρ Z_2^tZ_1 (θ I + 𝒞_2 Z_2^t Z_2); ]^-1[ Z_1^t(𝒞_1+ρ); Z_2^t(𝒞_2+ρ) ] Y.
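The closed-form solution above amounts to solving one block linear system; a minimal sketch is given below, where the sigmoid activation and the hyperparameter defaults are illustrative assumptions:

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def train_mvrvfl(X1, X2, Y, h_l=128, C1=1.0, C2=1.0, theta=1.0, rho=0.1, seed=0):
    """Sketch of the closed-form MvRVFL solution: build Z_1 = [X_1 H_1], Z_2 = [X_2 H_2]
    with fixed random hidden layers, then solve the coupled block system for (beta_1, beta_2)."""
    rng = np.random.default_rng(seed)
    W_A = rng.uniform(-1, 1, (X1.shape[1], h_l)); b_A = rng.uniform(-1, 1, (1, h_l))
    W_B = rng.uniform(-1, 1, (X2.shape[1], h_l)); b_B = rng.uniform(-1, 1, (1, h_l))
    Z1 = np.hstack([X1, sigmoid(X1 @ W_A + b_A)])
    Z2 = np.hstack([X2, sigmoid(X2 @ W_B + b_B)])
    d1, d2 = Z1.shape[1], Z2.shape[1]
    A = np.block([[np.eye(d1) + C1 * Z1.T @ Z1, rho * Z1.T @ Z2],
                  [rho * Z2.T @ Z1,             theta * np.eye(d2) + C2 * Z2.T @ Z2]])
    rhs = np.vstack([(C1 + rho) * Z1.T @ Y, (C2 + rho) * Z2.T @ Y])
    beta = np.linalg.solve(A, rhs)
    beta1, beta2 = beta[:d1], beta[d1:]
    return (W_A, b_A, beta1), (W_B, b_B, beta2)
```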
After computing the optimal values of β_1 and β_2, the classification of a new input data point x into either the +1 or -1 class can be determined as follows:
* Firstly, the decision function of Vw-A and Vw-B can be articulated as follows:
class^A(x^A) = arg max_i∈{1,2}{ y_A_i},
where y_A = [x^A ϕ(x^AW^A + b^A)]β_1 and y_A = (y_A_1, y_A_2)
and
class^B(x^B) = arg max_i∈{1,2}{ y_B_i},
where y_B = [x^B ϕ(x^BW^B + b^B)]β_2 and y_B = (y_B_1, y_B_2).
* The decision function, which combines two views, can be articulated in the following manner:
class(x) = arg max_i∈{1,2}{ y_c_i},
where
y_c = 1/2( [x^A ϕ(x^AW^A + b^A)]β_1 + [x^B ϕ(x^BW^B + b^B)]β_2 ) and y_c = (y_c_1, y_c_2).
Here, W^A (W^B) and b^A (b^B) are the randomly generated weights and biases corresponding to Vw-A (Vw-B), respectively.
Remark: The proposed MvRVFL model is referred to as MvRVFL-1 if the classification of the test sample is determined by the functions outlined in Eq. (<ref>). Also, we employ majority voting to determine the final anticipated output of the proposed MvRVFL model by aggregating predictions from Eqs. (<ref>), (<ref>) and (<ref>), and we refer to it as MvRVFL-2 model. In the majority voting technique, the class label with the highest number of votes is selected as the final prediction.
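Pairing with the training sketch above, the two decision rules can be sketched as follows; view_A and view_B are the parameter tuples returned by train_mvrvfl, which is an interface assumption of these sketches:

```python
import numpy as np

def mvrvfl_predict(x1, x2, view_A, view_B, mode="combined"):
    """Prediction sketch: per-view scores y_A and y_B, their average y_c (MvRVFL-1),
    or a majority vote over the three rules (MvRVFL-2). x1, x2 are (n x m_1), (n x m_2)."""
    (W_A, b_A, beta1), (W_B, b_B, beta2) = view_A, view_B
    sig = lambda Z: 1.0 / (1.0 + np.exp(-Z))
    yA = np.hstack([x1, sig(x1 @ W_A + b_A)]) @ beta1
    yB = np.hstack([x2, sig(x2 @ W_B + b_B)]) @ beta2
    yC = 0.5 * (yA + yB)
    if mode == "combined":                        # MvRVFL-1: fused score of the combined rule
        return yC.argmax(axis=1)
    votes = np.stack([yA.argmax(axis=1), yB.argmax(axis=1), yC.argmax(axis=1)])
    return (votes.sum(axis=0) >= 2).astype(int)   # MvRVFL-2: majority vote of the three rules
```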
The algorithm of the proposed MvRVFL model is briefly described in Algorithm <ref>. Furthermore, evaluating the generalization performance of MvRVFL requires a theoretical examination of its generalization error. To achieve this, we apply Rademacher complexity theory. In the Appendix (Section <ref>), we delve into the consistency error bound and the generalization error bound of the MvRVFL model.
§.§ Complexity Analysis of the Proposed MvRVFL Model
Let (X_1, X_2, Y) denote the training set, where X_1 ∈ℝ^n × m_1, X_2 ∈ℝ^n × m_2 and Y ∈ℝ^n × 2. Here, m_1 and m_2 denote the number of features corresponding to Vw-A and Vw-B, and n represents the number of samples. In RVFL-based models, the complexity is governed by the need to compute matrix inverses when optimizing the output layer weights. Thus, the size of the matrix requiring inversion dictates the model's complexity. Therefore, the time complexity of the RVFL model is 𝒪(n^3) or 𝒪((m+h_l)^3), where h_l represents the number of hidden nodes. The proposed MvRVFL model requires inverting a matrix of dimension (m_1 + m_2 + 2h_l)× (m_1 + m_2 + 2h_l). Therefore, the time complexity of the MvRVFL model is 𝒪((m_1 + m_2 + 2h_l)^3).
§ EXPERIMENTS AND RESULTS
To evaluate the effectiveness of the proposed MvRVFL model, we conduct a comparison with baseline models on the DNA-binding proteins dataset <cit.>. Furthermore, we evaluate our proposed model using the publicly available AwA[<http://attributes.kyb.tuebingen.mpg.de>] and Corel5k[<https://wang.ist.psu.edu/docs/related/>] benchmark datasets. We compare our proposed MvRVFL models with SVM2K <cit.>, MvTSVM <cit.>, ELM (extreme learning machine, also known as RVFL without direct link (RVFLwoDL)) <cit.>, RVFL <cit.>, and MVLDM <cit.>. We denote the ELM model as ELM-Vw-A and ELM-Vw-B if it is trained on Vw-A and Vw-B of the datasets, respectively. Similar nomenclature is followed for RVFL-Vw-A and RVFL-Vw-B.
§.§ Experimental Setup
The experimental hardware setup comprises a PC equipped with an Intel(R) Xeon(R) Gold 6226R CPU operating at 2.90GHz and 128 GB of RAM. The system runs on Windows 11 and utilizes Python 3.11. The dataset is randomly split into training and testing sets, with a distribution of 70% for training and 30% for testing. We employ a five-fold cross-validation and grid search approach to optimize the models' hyperparameters, utilizing the following ranges: 𝒞_i, θ, ρ ∈{10^-5, 10^-4, …, 10^5} for i=1,2. The number of hidden nodes is chosen from the range 3:20:203 (i.e., from 3 to 203 in steps of 20). The generalization performance of the proposed MvRVFL model has been evaluated by comparing it with baseline models across various metrics including accuracy (Acc.), sensitivity, precision, and specificity rates. Mathematically,
Accuracy (Acc.) = (𝒯𝒫 + 𝒯𝒩)/(𝒯𝒫 + 𝒯𝒩 + ℱ𝒫 + ℱ𝒩),
Sensitivity (Seny.) = 𝒯𝒫/(𝒯𝒫 + ℱ𝒩),
Precision (Pren.) = 𝒯𝒫/(𝒯𝒫 + ℱ𝒫),
Specificity (Spey.) = 𝒯𝒩/(𝒯𝒩 + ℱ𝒫),
where true positive (𝒯𝒫) represents the count of patterns belonging to the positive class that are accurately classified, false negative (ℱ𝒩) signifies the count of patterns belonging to the positive class that are inaccurately classified, false positive (ℱ𝒫) denotes the count of patterns belonging to the negative class that are inaccurately classified, and true negative (𝒯𝒩) denotes the number of data points of the negative class that are correctly classified.
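These four measures can be computed directly from the confusion-matrix counts, as in the short sketch below (labels are assumed to be encoded as 1 for the positive class and 0 for the negative class):

```python
import numpy as np

def metrics(y_true, y_pred):
    """Acc., Seny., Pren., and Spey. exactly as defined above."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    TP = np.sum((y_true == 1) & (y_pred == 1))
    TN = np.sum((y_true == 0) & (y_pred == 0))
    FP = np.sum((y_true == 0) & (y_pred == 1))
    FN = np.sum((y_true == 1) & (y_pred == 0))
    return {"Acc.":  (TP + TN) / (TP + TN + FP + FN),
            "Seny.": TP / (TP + FN),
            "Pren.": TP / (TP + FP),
            "Spey.": TN / (TN + FP)}
```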
§.§ Evaluation on DNA-binding Proteins Dataset
We conduct comparison of the proposed MvRVFL model by utilizing two benchmark datasets, namely PDB186 and PDB1075. The training dataset, consisting of 1075 protein samples, is derived from the PDB1075 <cit.> dataset. Within this dataset, 550 sequences are labeled as negative (non-DBPs), while 525 sequences are categorized as positive (DBPs). The test set, comprising 186 protein samples, is derived from the PDB186 dataset <cit.>). It consists of an equal number of negative and positive sequences. The features partially dictate the upper-performance limit of the model. To evaluate the impact of various features and their combination on DBP prediction, we evaluate each individual feature utilized in the proposed MvRVFL model. The DBP sequence is represented through three distinct views: physicochemical property, evolutionary information, and amino acid composition. These views are translated into feature matrices using extraction algorithms. Specifically, the amino acid composition is processed through Multi-scale Continuous and Discontinuous (MCD) <cit.>, the physicochemical property undergoes Normalized Moreau-Broto Autocorrelation (NMBAC), while evolutionary information is transformed using PSSM-based Discrete Wavelet Transform (PSSM-DWT), PSSM-based Average Blocks (PSSM-AB) and Pseudo Position-Specific Scoring Matrix (PsePSSM) methods <cit.>. Five distinct types of features extracted from the sequences are utilized, encompassing PsePSSM, PSSM-DWT, NMBAC, MCD, and PSSM-AB.
§.§.§ Comparison of the proposed model with the existing state-of-the-art models (non-DNA binding prediction models)
The performance for the proposed MvRVFL model, compared to baseline models for DBPs prediction, are outlined in Table <ref>. The proposed MvRVFL-1 and MvRVFL-2 models achieved the first and second positions, respectively, with average Acc. of 76.32% and 74%, respectively. In contrast, the average Acc. of the baseline SVM2K, MvTSVM, ELM-Vw-A, ELM-Vw-B, RVFL-Vw-A, RVFL-Vw-B, and MVLDM models are 68.45%, 69.04%, 68.06%, 68.12%, 73.49%, 70.06% and 63.82%, respectively. Compared to the third-top model, RVFL-Vw-A, the proposed models (MvRVFL-1 and MvRVFL-2) exhibit average Acc. of approximately 2.83% and 0.51% higher, respectively. For the MCD & PsePSSM, NMBAC & PsePSSM, PsePSSM & PSSM-AB and PsePSSM & PSSM-DWT cases, the proposed MvRVFL-1 model archives the Acc. of 80.11%, 80.80%, 75.65%, and 74.41%, respectively, emerging as the top performers. The results suggest the significance of PsePSSM as an important feature for predicting DBPs. Hence, the proposed MvRVFL consistently exhibits superior performance by achieving high Acc. across various cases, establishing its prominence among the models. Moreover, to visually compare the proposed MvRVFL models in terms of Spey., Seny., and Pren., we depicted bar graphs as shown in Fig. <ref>. From Table <ref> and Fig. <ref>, we can see that our proposed MvRVFL-1 model has Seny. 89.25 which is the second highest among all the datasets. The MvRVFL-1 model is highly effective at correctly detecting true positive cases in the MCD & NMBAC dataset, thus demonstrating its superior performance in identifying relevant instances compared to other models. Also, the Seny. for the MCD & PSSM-DWT dataset reaches a peak value of 91.40, indicating that our proposed model excels in predicting MCD features, achieving the best performance among all compared models. The Spey. of our proposed MvRVFL-2 model achieves the highest value of 97.85 on the MCD & NMBAC datasets, indicating its exceptional ability to correctly identify negative instances and avoid false positives. This high Spey. demonstrates that the MvRVFL-2 model is highly effective at accurately distinguishing true negative cases, outperforming baseline models. Furthermore, the MvRVFL-2 model employs a majority voting mechanism, combining predictions from multiple classifiers to enhance its efficiency and robustness. This approach leads to more reliable and accurate predictions, underscoring the superior performance of the MvRVFL-2 model in predicting MCD features compared to baseline models. The Pren. of the proposed MvRVFL-1 model achieves 91.40 on the MCD & PSSM-DWT dataset, which is the highest among all datasets. Pren. measures the model's accuracy in identifying true positive instances among the predicted positive cases, highlighting its effectiveness in correctly classifying DNA-binding proteins. The high precision value indicates that the MvRVFL-1 model excels in minimizing false positives, ensuring that most of the identified DNA-binding proteins are indeed true positives. This is particularly important for predicting DNA-binding proteins, where accurate identification is crucial for understanding protein-DNA interactions. The MCD feature plays a vital role in this context, as it enhances the model's ability to discriminate between DNA-binding proteins, thereby improving the overall prediction performance.
§.§.§ Comparison of the proposed model with the existing DNA binding prediction models
We compare our proposed MvRVFL model with existing DNA-binding protein prediction models, including MvLSSVM via HSIC <cit.>, HKAM-MKM <cit.>, MV-H-RKM <cit.>, MLapSVM-LBS <cit.>, and LapLKA-RKM <cit.>. Table <ref> demonstrates that all five features (MCD, NMBAC, PSSM-DWT, PsePSSM, and PSSM-AB) are beneficial for predicting DBPs using MvRVFL. Among these, MCD & PsePSSM achieves the best result (Acc. = 80.11%), which suggests that PsePSSM is a crucial feature for predicting DBPs. From Table <ref>, the average accuracies of the proposed MvRVFL-1 and MvRVFL-2 models are 76.32% and 74%, respectively, surpassing those of the baseline models. The proposed MvRVFL-1 and MvRVFL-2 exhibit exceptional generalization performance, marked by consistently higher Acc., indicating a high level of confidence in their learning process. The MvRVFL model is found to be comparable to or better than the baseline models in most cases. By incorporating the coupling term, MvRVFL can integrate information from both views: a larger error variable in one view is allowed if it is compensated by the other view, since the product of the error variables is minimized. The Seny., Spey., and Pren. values indicate that the proposed MvRVFL-1 model performs competitively compared to the baseline models, demonstrating strong performance in all these areas and underscoring its robustness and reliability in comparison with existing models.
Figure <ref> illustrates the ROC curve, showcasing the superior performance of the proposed MvRVFL model compared to the baseline models on the DNA-binding protein datasets. The ROC curve provides a comprehensive evaluation of the model's diagnostic capabilities by plotting the true positive rate (Seny.) against the false positive rate (1 - Spey.) across various threshold settings. The area under the ROC curve (AUC) for the proposed MvRVFL model is significantly higher, indicating a better balance between Seny. and Spey.
A higher AUC demonstrates that the MvRVFL model is more effective at distinguishing between positive and negative instances, leading to more accurate predictions. This increased AUC signifies the model's enhanced ability to correctly identify true positives while minimizing false negatives, thereby ensuring more reliable detection.
The superior performance of the MvRVFL model can be attributed to the inclusion of crucial features, particularly PsePSSM. PsePSSM captures essential evolutionary information from the protein sequences, providing a more comprehensive representation that significantly contributes to the model's predictive accuracy. By leveraging PsePSSM, the model benefits from detailed sequence order information, which enhances its ability to correctly classify DBPs.
These results underscore the robustness and effectiveness of the MvRVFL model in classification tasks, surpassing the performance of baseline models. The importance of PsePSSM as a feature is evident, as it plays a pivotal role in improving the model's overall accuracy and reliability in identifying true positives and reducing false negatives.
§.§.§ Ablation study
To validate our claim that the coupling term acts as a bridge among multiple views, enhancing the coordination between different features and thereby improving the training efficiency of the proposed MvRVFL model, we conducted an ablation study. In this study, we set the coupling term to zero (ρ = 0) in the optimization (<ref>) to assess its true impact. The results, depicted in Fig. <ref>, demonstrate that the MvRVFL model consistently outperforms the model without the coupling term across most datasets, confirming the effectiveness of the coupling term.
This outcome verifies the critical role of the coupling term ρ in the model's architecture. The coupling term allows the MvRVFL model to effectively integrate information from both the image views and the caption view. By minimizing the product of the error variables from both views, the model can tolerate a larger error in one view if it is offset by a smaller error in the other view. This balancing mechanism enhances the overall reliability and robustness of the model, leading to more accurate and dependable results.
The ablation study clearly demonstrates that incorporating the coupling term enables the MvRVFL model to better manage the complementary information from different views, resulting in improved performance and more reliable predictions.
§.§.§ Sensitivity of 𝒞_1 and ρ combinations
In this subsection, we evaluate the impact of the regularization parameters 𝒞_1 and ρ of MvRVFL. In Fig. <ref>, the values of 𝒞_1 and ρ are varied from 10^-5 to 10^5, keeping the other parameters fixed at their optimal values, and the corresponding Acc. values are recorded. The performance of MvRVFL is depicted under the different settings of 𝒞_1 and ρ. The parameter ρ serves as a coupling term designed to minimize the product of the error variables between Vw-A and Vw-B. When 𝒞_1 lies between 10^-1 and 10^5 for the same value of ρ, a corresponding improvement is observed in the Acc. values. These findings suggest that, of the two parameters, the model's performance is primarily influenced by 𝒞_1 rather than ρ. Therefore, meticulous selection of the hyperparameters of the proposed MvRVFL model is essential to achieve optimal generalization performance.
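The sweep described above can be reproduced with a log-spaced grid; the sketch below assumes a hypothetical train_and_score helper that trains MvRVFL for a given (𝒞_1, ρ) pair, keeping the remaining hyperparameters at their optimal values, and returns the test Acc.

```python
import numpy as np

GRID = [10.0 ** k for k in range(-5, 6)]   # 10^-5 ... 10^5, as in the sensitivity analysis

def sensitivity_surface(train_and_score):
    """Acc. for every (C1, rho) combination; rows index C1, columns index rho."""
    acc = np.zeros((len(GRID), len(GRID)))
    for i, c1 in enumerate(GRID):
        for j, rho in enumerate(GRID):
            acc[i, j] = train_and_score(C1=c1, rho=rho)
    return acc
```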
§.§.§ Influence of the number of hidden nodes h_l
To fully comprehend the robustness of the proposed MvRVFL model, it is crucial to analyze its sensitivity to the number of hidden nodes h_l.
The influence of the hyperparameter h_l is depicted in Fig. <ref>. Figs. <ref> and <ref> show that the performance peaks at h_l=23 and then gradually declines as h_l increases further. Therefore, to achieve the best performance from MvRVFL, we recommend using h_l=23. In Fig. <ref>, the performance peaks at h_l=23 and h_l=83 and then gradually declines as h_l increases further. From Fig. <ref>, the performance consistently improves with an increase in the number of hidden nodes until reaching a plateau, and optimal performance is typically achieved with higher values of h_l. We recommend fine-tuning the hyperparameters to attain the best performance from the proposed models for specific tasks.
§.§ Evaluation on UCI and KEEL Datasets
In this section, we present a comprehensive analysis comparing the proposed MvRVFL model with the baseline models across 27 UCI <cit.> and KEEL <cit.> benchmark datasets. Since the UCI and KEEL datasets do not inherently possess multiview characteristics, we reduce the original features with a 95% principal component analysis and use the reduced representation as Vw-B, while the original data is referred to as Vw-A <cit.>. The performance of the proposed MvRVFL model along with the baseline models is evaluated using Acc. metrics along with the corresponding optimal hyperparameters, as depicted in Table <ref> of the Appendix. The average Acc. of the proposed MvRVFL-1 and MvRVFL-2 models along with the baseline SVM2K, MvTSVM, ELM-Vw-A, ELM-Vw-B, RVFL-Vw-A, RVFL-Vw-B, and MVLDM models are 85.15%, 83.18%, 76.65%, 67.24%, 82.66%, 81.7%, 83.14%, 81.26%, and 79.9%, respectively. In terms of average Acc., the proposed MvRVFL-1 achieved the top position, while the proposed MvRVFL-2 ranked second. This demonstrates that the proposed MvRVFL-1 and MvRVFL-2 models exhibit a significant level of confidence in their predictive capabilities.

Average Acc. can, however, be misleading because it might obscure a model's superior performance on one dataset by offsetting it against its inferior performance on another. To address this limitation and ascertain the significance of the results, we employed a suite of statistical tests recommended by <cit.>. These tests are tailor-made for comparing classifiers across multiple datasets, especially when the conditions required for parametric tests are not satisfied. We utilized the following tests: ranking test, Friedman test, Nemenyi post hoc test, and win-tie-loss sign test. By integrating statistical tests, our objective is to comprehensively evaluate the performance of the models, enabling us to draw broad and unbiased conclusions regarding their effectiveness.

In the ranking scheme, each model receives a rank according to its performance on individual datasets, allowing for an evaluation of its overall performance: higher ranks are attributed to the worst-performing models, while lower ranks are assigned to the best-performing models. This methodology accounts for the potential compensatory effect wherein superior performance on one dataset offsets inferior performance on others. For the evaluation of q models across N datasets, the rank of the j^th model on the i^th dataset is denoted as ℜ_j^i, and the j^th model's average rank is given by ℜ_j = 1/N∑_i=1^Nℜ_j^i. The average ranks of the proposed MvRVFL-1 and MvRVFL-2 models along with the baseline SVM2K, MvTSVM, ELM-Vw-A, ELM-Vw-B, RVFL-Vw-A, RVFL-Vw-B, and MVLDM models are 2.04, 3.20, 6.35, 8.65, 4.70, 5.26, 4.22, 5.09, and 5.48, respectively. Table <ref> displays the average ranks of the models. The MvRVFL-1 model attained an average rank of 2.04, the lowest among all the models, while the proposed MvRVFL-2 attained the second position with an average rank of 3.20. Given that a lower rank signifies a better-performing model, the proposed MvRVFL-1 and MvRVFL-2 models emerged as the top-performing models.

The Friedman test <cit.> examines whether significant differences exist among the models by comparing their average ranks. The Friedman test, a nonparametric statistical analysis, is utilized to compare the effectiveness of multiple models across diverse datasets. Under the null hypothesis, the models' average ranks are equal, implying that they give equal performance.
The Friedman statistic follows the chi-squared distribution (χ_F^2) with (q-1) degrees of freedom (d.o.f.) and is calculated as: χ^2_F = 12N/(q(q+1))[∑_jℜ_j^2 - q(q+1)^2/4]. The F_F statistic is calculated as: F_F = (N - 1)χ_F^2/(N(q-1) - χ_F^2), where the F-distribution has (q-1) and (N-1)× (q-1) degrees of freedom. For N=27 and q=9, we obtain χ_F^2 = 100.983 and F_F = 22.8276. From the F-distribution table at a significance level of 5%, F_F(8, 208) = 1.9831. As F_F > 1.9831, the null hypothesis is rejected; hence, notable discrepancies are evident among the models. Consequently, we proceed to employ the Nemenyi post hoc test <cit.> to assess the pairwise differences between the models. The critical difference is given by C.D. = q_α×√(q(q+1)/(6N)), where q_α denotes the critical value from the table of the two-tailed Nemenyi test. With q_α = 3.102 at a 5% significance level, the C.D. is computed as 2.3120. The average rank differences of the proposed MvRVFL-1 and MvRVFL-2 models with respect to the baseline SVM2K, MvTSVM, ELM-Vw-A, ELM-Vw-B, RVFL-Vw-A, RVFL-Vw-B, and MVLDM models are (4.31, 3.15), (6.61, 5.45), (2.66, 1.50), (3.22, 2.06), (2.18, 1.02), (3.05, 1.89), and (3.44, 2.28), respectively. The Nemenyi post hoc test confirms that the proposed MvRVFL-1 model exhibits statistically significant superiority over SVM2K, MvTSVM, ELM-Vw-A, ELM-Vw-B, RVFL-Vw-B, and MVLDM, while the proposed MvRVFL-2 model shows a statistically significant difference compared to the SVM2K and MvTSVM models.
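The reported statistics can be reproduced directly from the average ranks; the short check below (plain NumPy, written for this exposition) recovers χ_F^2 ≈ 100.98, F_F ≈ 22.83, and C.D. ≈ 2.31.

```python
import numpy as np

# Average ranks over N = 27 datasets for the q = 9 models, in the order
# MvRVFL-1, MvRVFL-2, SVM2K, MvTSVM, ELM-Vw-A, ELM-Vw-B, RVFL-Vw-A, RVFL-Vw-B, MVLDM.
ranks = np.array([2.04, 3.20, 6.35, 8.65, 4.70, 5.26, 4.22, 5.09, 5.48])
N, q = 27, len(ranks)

chi2_F = 12 * N / (q * (q + 1)) * (np.sum(ranks ** 2) - q * (q + 1) ** 2 / 4)
F_F = (N - 1) * chi2_F / (N * (q - 1) - chi2_F)

q_alpha = 3.102                               # two-tailed Nemenyi critical value at the 5% level
CD = q_alpha * np.sqrt(q * (q + 1) / (6 * N))

print(chi2_F, F_F, CD)                        # ~100.98, ~22.83, ~2.31
# A baseline differs significantly from MvRVFL-1 if its average rank exceeds ranks[0] + CD.
significant_vs_mvrvfl1 = ranks[2:] > ranks[0] + CD
```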
Furthermore, to evaluate the models, we employ the pairwise win-tie-loss sign test. Under the null hypothesis, this test assumes that two models perform equivalently and each wins on N/2 of the datasets, where N represents the dataset count. A model is deemed significantly better if it wins on at least N/2 + 1.96√(N)/2 datasets. If the number of ties between the two models is even, the ties are distributed equally between them; if the number of ties is odd, one tie is excluded and the remaining ties are distributed among the classifiers. In this case, with N = 27, if one of the models records at least 18.59 wins, there is a significant difference between the models. Table <ref> presents a comparative analysis of the proposed MvRVFL-1 and MvRVFL-2 models alongside the baseline models. In Table <ref>, the entry [x, y, z] denotes that the model listed in the row wins x times, ties y times, and loses z times when compared to the model listed in the corresponding column. The proposed MvRVFL-2 model attains a statistically significant difference from the baseline models, except for RVFL-Vw-A and RVFL-Vw-B; its winning percentage nevertheless demonstrates its effectiveness over the RVFL-Vw-A and RVFL-Vw-B models. The evidence demonstrates that the proposed MvRVFL-1 and MvRVFL-2 models exhibit significant superiority when compared to the baseline models.
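The win threshold quoted above follows directly from the sign-test approximation; a quick check (assuming NumPy) is shown below.

```python
import numpy as np

N = 27                                        # number of UCI/KEEL datasets
win_threshold = N / 2 + 1.96 * np.sqrt(N) / 2
print(round(win_threshold, 2))                # 18.59: at least this many wins is significant
```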
§.§ Evaluation on AwA and Corel5k Dataset
In this subsection, we perform an in-depth analysis comparing the proposed MvRVFL model with the baseline models on the AwA dataset, which comprises 30,475 images of 50 animal classes, each image represented by six pre-extracted feature sets. For our evaluation, we utilize ten test classes: Persian cat, leopard, raccoon, chimpanzee, humpback whale, giant panda, pig, hippopotamus, seal, and rat, totaling 6180 images. The 2000-dimensional L_1-normalized speeded-up robust features (SURF) are denoted as Vw-A, while the 252-dimensional histogram of oriented gradients (HOG) feature descriptors are denoted as Vw-B. For each combination of class pairs, we employ the one-against-one strategy to train 45 binary classifiers. The Corel5k dataset consists of 50 categories, each comprising 100 images representing various semantic topics, such as bus, dinosaur, beach, and more. The 512-dimensional GIST features are denoted as Vw-A, while the 100-dimensional DenseHue features are denoted as Vw-B. In the experiments, we utilize a one-versus-rest scenario for each category individually, training 50 binary classifiers accordingly. For each binary dataset, we include 100 images from the target class and randomly choose 100 images from the other classes.
The performance evaluation of the proposed MvRVFL models, alongside the baseline models, is conducted using Acc. metrics and the corresponding optimal hyperparameters, which are reported in Tables <ref> and <ref> of the Appendix for the AwA and Corel5k datasets; the average Acc. is reported in Table <ref>. We can draw the following conclusions. Firstly, the MvRVFL models attain the highest average Acc., the lowest average rank, and the most wins, indicating their superior performance. Secondly, although the MvRVFL models' performance is slightly inferior in some instances, it remains competitive, with results closely approaching the best outcomes. Thirdly, across most datasets, MvRVFL demonstrates higher accuracies compared to SVM2K. This highlights the capability of the MvRVFL models to effectively utilize two views by adhering to the coupling term that minimizes the product of the error variables for both views, leading to enhanced classification performance. In the Appendix, we perform several sensitivity analyses on different aspects of the proposed models: the effect of the parameters 𝒞_1 and 𝒞_2 in subsection <ref>, experiments with varying numbers of training samples on the AwA dataset in subsection <ref>, and the sensitivity of θ and ρ in subsection <ref>.
§ CONCLUSION AND FUTURE WORK
In this paper, we proposed a novel multiview random vector functional link (MvRVFL) network for the prediction of DBPs. The proposed MvRVFL models not only extract rich feature representations through the hidden layers of multiple views but also serve as a weighting network, allocating weights to features from all hidden layers, including the original features acquired via direct links. The coupling of the different views in the MvRVFL models is achieved by incorporating the coupling term in the primal formulation of the model.
The outstanding performance of MvRVFL (compared to the baseline models) in DBP is primarily attributed to the fusion of features extracted from protein sequences, including PsePSSM, PSSM-DWT, NMBAC, MCD, and PSSM-AB.
Furthermore, we conducted experiments on the UCI, KEEL, AwA, and Corel5k datasets. The experimental results, along with the statistical analyses, indicate that the proposed MvRVFL models outperform the baseline models in terms of generalization performance. In future research, we aim to extend the proposed model to tackle class-imbalanced problems with multiple views (more than two views). We also intend to enhance the feature representation method and devise predictive models that synergistically integrate diverse features more effectively.
§ APPENDIX
Here, we begin with a discussion on the generalization capability of the proposed MvRVFL model, followed by sensitivity analyses of the key hyperparameters involved. Lastly, we present detailed tables reporting the accuracy and the best hyperparameters.
§.§ Generalization Capability Analysis
Here, we discuss the generalization error bound for the MvRVFL model. In this analysis, we used the label y_i ∈{-1, +1} instead of employing one-hot encoding. The optimization problem of the MvRVFL model is reformulated as follows:
min_{β_1, β_2} 1/2β_1^2 + θ/2β_2^2 + 𝒞_1/2∑_i=1^nξ_1_i^Tξ_1_i + 𝒞_2/2∑_i=1^nξ_2_i^Tξ_2_i + ρ∑_i=1^nξ_1_i^Tξ_2_i
s.t. Z_1^(x_i)β_1 - y_i = ξ_1_i,
Z_2^(x_i)β_2 - y_i = ξ_2_i, i=1, 2, …, n.
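Because the constraints above are equalities, ξ_1_i and ξ_2_i can be eliminated by substitution, and setting the gradients of the resulting unconstrained quadratic with respect to β_1 and β_2 to zero yields a single symmetric linear system. The NumPy sketch below illustrates this route; it is our own illustrative derivation for the binary-label reformulation (y_i ∈ {-1, +1}), not necessarily the solver used in the main text.

```python
import numpy as np

def solve_mvrvfl(Z1, Z2, y, C1, C2, theta, rho):
    """Illustrative closed-form solution of the reformulated MvRVFL problem.

    Substituting xi_1 = Z1 @ b1 - y and xi_2 = Z2 @ b2 - y into the objective and
    setting the gradients w.r.t. beta_1 and beta_2 to zero gives the block system:
        (I + C1 Z1'Z1) b1 + rho Z1'Z2 b2       = (C1 + rho) Z1'y
        rho Z2'Z1 b1 + (theta I + C2 Z2'Z2) b2 = (C2 + rho) Z2'y
    Z1, Z2 are the enhanced feature matrices of the two views; y holds labels in {-1, +1}.
    """
    d1, d2 = Z1.shape[1], Z2.shape[1]
    A = np.block([
        [np.eye(d1) + C1 * Z1.T @ Z1, rho * Z1.T @ Z2],
        [rho * Z2.T @ Z1, theta * np.eye(d2) + C2 * Z2.T @ Z2],
    ])
    b = np.concatenate([(C1 + rho) * Z1.T @ y, (C2 + rho) * Z2.T @ y])
    beta = np.linalg.solve(A, b)
    return beta[:d1], beta[d1:]  # beta_1, beta_2
```

A test sample is then scored by combining the two view predictions Z_1^(x)β_1 and Z_2^(x)β_2.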
Multiplying both sides of the constraints of Eq. (<ref>) by y_i and using y_i^2 = 1, we obtain
y_iZ_1^(x_i)β_1 - 1 = y_iξ_1_i,
y_iZ_2^(x_i)β_2 - 1 =y_iξ_2_i, i=1, 2, …, n,
Eqs. (<ref>) and (<ref>) are used in Theorem <ref>.
To start, we define the Rademacher complexity <cit.> as follows:
For the set of samples, S={x_1, …, x_n}, consisting of n independent samples from the distribution 𝒟 and the function set 𝒢 on S, we define the empirical Rademacher complexity on 𝒢 as:
R̂_n(𝒢) = 𝔼_σ[ sup_{g ∈𝒢} | 2/n∑_i=1^n σ_i g(x_i) | : x_1, x_2,…, x_n ],
where σ = (σ_1, …, σ_n) are independent, uniformly distributed {+1,-1}-valued (Rademacher) random variables. The Rademacher complexity of 𝒢 is given by:
R_n(𝒢) = 𝔼_S[R̂_n(𝒢)] = 𝔼_Sσ[ sup_{g ∈𝒢} | 2/n∑_i=1^n σ_i g(x_i) |].
Choose θ from the interval (0, 1) and consider 𝒢 as a class of functions mapping from an input space S to [0, 1]. Suppose {x_i}_i=1^n are drawn independently from a probability distribution 𝒟. Then, with a probability of at least 1 - θ over random samples of size n, every g ∈𝒢 satisfies:
𝔼_𝒟[g(x)] ≤𝔼̂_𝒟[g(x)] + R̂_n(𝒢) + 3√(ln(2/θ)/2n).
Let S = {(x_i, y_i)}_i=1^n be a sample set and Z^(x) be the enhanced feature matrix. For the function class 𝒢_B = {g|g: x →| Z^(x)β|, β≤ B} then, the empirical Rademacher complexity of 𝒢_B satisfies:
R̂_n (𝒢_B) = 2B/n√(∑_i=1^n Z^(x_i)^TZ^(x_i))
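Definition <ref> and the expression above can be illustrated numerically: for the linear class 𝒢_B, the supremum over β≤ B is attained in closed form as 2B/n times the Euclidean norm of ∑_i σ_i Z^(x_i) (Cauchy-Schwarz), so a Monte-Carlo average over Rademacher draws can be compared with the right-hand side above. The snippet below is purely illustrative and uses synthetic features.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, B = 200, 16, 1.0
Z = rng.normal(size=(n, d))                     # synthetic enhanced features Z^(x_i)

# Monte-Carlo estimate of the empirical Rademacher complexity of G_B.
# For ||beta|| <= B the supremum equals (2B/n) * ||sum_i sigma_i Z^(x_i)||.
sigma = rng.choice([-1.0, 1.0], size=(2000, n))  # Rademacher draws
estimate = np.mean(2 * B / n * np.linalg.norm(sigma @ Z, axis=1))

bound = 2 * B / n * np.sqrt(np.sum(np.linalg.norm(Z, axis=1) ** 2))
print(estimate, bound)                           # the estimate stays below the bound
```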
Consider 𝒜 as a Lipschitz function with a Lipschitz constant ℒ, mapping the real numbers to the real numbers, satisfying 𝒜(0) = 0. The Rademacher complexity of the class 𝒜∘𝒢 is:
R̂_n (𝒜∘𝒢) ≤ 2ℒR̂_n (𝒢).
We define the difference between the final predictors of the two views as g_m(x) = | Z_1^(x)β_1 - Z_2^(x)β_2 |. Thus, the true expectation bound of g_m(x) can be derived using the following theorem.
Given N ∈ℝ^+, θ∈ (0, 1), and a training set T = {(x_i, y_i)}_i=1^n drawn independently and identically from probability distribution 𝒟, where y_i ∈{-1, +1} and x_i = (x_i^A; x_i^B). Define the function class 𝒢_N = {g|g: x →| Z^(x)β|, β≤ N} and 𝒢̂_N = {ĝ|ĝ: x → Z^(x)β, β≤ N}, where β = (β_1; β_2), Z^(x) = ( Z_1^(x); -Z_2^(x)) = ( [x^A ϕ(x^AW^A + b^A)]; -[x^B ϕ(x^BW^B + b^B)]) and g_m(x) = | Z_1^(x)β_1 - Z_2^(x)β_2 | = | Z^(x)β|∈𝒢_N. Then, with a probability of at least 1 - θ over T, every g_m(x) ∈𝒢_N satisfies
𝔼_𝒟[g_m(x)] ≤ 2N + 3N 𝒦_m√(ln(2/θ)/2n)
+ 4N/n√(∑_i=1^n(Z_1^(x_i)^2 + Z_2^(x_i)^2)),
where
𝒦_m = max_{x_i ∈ supp(𝒟)}√(Z_1^(x_i)^2 + Z_2^(x_i)^2).
Let's consider a loss function Ω: ℝ→ [0, 1] defined as:
Ω(x) =
-x/(N𝒦_m), if -N𝒦_m ≤ x < 0,
x/(N𝒦_m), if 0 ≤ x ≤ N𝒦_m,
1, otherwise.
For an independently drawn sample (x_i, y_i) from the probability distribution 𝒟, combining β≤ N with Eq. (<ref>), we obtain:
g_m(x_i) = | Z^(x_i)β|≤ NZ^(x_i)
= N √(Z_1^(x_i)^2 + Z_2^(x_i)^2)≤ N𝒦_m.
In that case, g_m(x) ranges from 0 to N 𝒦_m, while ĝ_m(x) ranges from -N 𝒦_m to N 𝒦_m. According to Lemma <ref> and with Ω(ĝ_m(x)) ranging from 0 to 1, the following inequality holds with a probability of at least 1 - θ over T:
𝔼_𝒟[Ω(ĝ_m(x))] ≤𝔼̂_T [Ω(ĝ_m(x))] + R̂_n(Ω∘𝒢̂_N) + 3√(ln(2/θ)/2n).
Given that Ω(x) is a Lipschitz function with a constant of 1/(N𝒦_m), passes through the origin, and is uniformly bounded, we can assert the following inequality based on Lemmas <ref> and <ref>:
R̂_n(Ω∘𝒢̂_N) ≤4/n𝒦_m√(∑_i=1^n Z^(x_i)^2)
= 4/n𝒦_m√(∑_i=1^n(Z_1^(x_i)^2 + Z_2^(x_i)^2)).
Hence,
𝔼_𝒟[Ω(ĝ_m(x))] ≤𝔼̂_T [Ω(ĝ_m(x))] + 3√(ln(2/θ)/2n) + 4/n𝒦_m√(∑_i=1^n(Z_1^(x_i)^2 + Z_2^(x_i)^2)).
Since g_m(x_i) = N 𝒦_m Ω(ĝ_m(x)), we have
𝔼_𝒟[g_m(x)] = N𝒦_m 𝔼_𝒟[Ω(ĝ_m)]
≤𝔼̂_T [g_m(x)] + 3N𝒦_m √(ln(2/θ)/2n) + 4N/n√(∑_i=1^n(Z_1^(x_i)^2 + Z_2^(x_i)^2)).
Also,
| Z^(x_i)β|≤ N| Z^(x_i)| = N| Z_1^(x_i) - Z_2^(x_i)|≤ N(| Z_1^(x_i)| + | Z_2^(x_i)|) ≤ 2N.
We can obtain,
𝔼̂_T [g_m(x)] = 𝔼̂_T [Z^(x)β] ≤ 2N.
From Eqs. (<ref>) and (<ref>), we can conclude the theorem.
Consistency plays a crucial role in MVL, aiming to reduce the discrepancy in predictions across different perspectives or views. A lower consistency error typically results in improved generalization performance. We represent g_m(x) as the consistency error function between the two views. Next, by applying the empirical Rademacher complexity of the function classes 𝒢̂_m along with the empirical expectation of g_m, we obtain a margin-based estimate for the consistency error expectation. MvRVFL demonstrates a tight consensus error bound as n becomes sufficiently large. As the consistency error during training decreases, the generalization error likewise decreases. This theoretical assurance underscores MvRVFL's robust generalization performance concerning consistency.
Next, we examine the generalization error bound. According to (<ref>), we use the weighted combination of predictions from the two views to define the prediction function for MvRVFL. Therefore, the generalization error bound for MvRVFL can be determined using the following theorem.
Given N ∈ℝ^+, θ∈ (0, 1), and a training set T = {(x_i, y_i)}_i=1^n drawn independently and identically from probability distribution 𝒟, where y_i ∈{-1, +1} and x_i = (x_i^A; δ x_i^B). Define the function classes 𝒢 = {g|g: x → Z^(x)β, β≤ N} and 𝒢̂ = {ĝ|ĝ: (x, y) → yg(x), g(x) ∈𝒢}, where β = (β_1; β_2), Z^(x) = ( Z_1^(x); Z_2^(x)) = ( [x^A ϕ(x^AW^A + b^A)]; [x^B ϕ(x^BW^B + b^B)]) and g(x) = ( Z_1^(x)β_1 + δ Z_2^(x)β_2 ) = Z^(x)β∈𝒢. Then, with a probability of at least 1 - θ over T, every g(x) ∈𝒢 satisfies
ℙ_𝒟[yg(x)≤0] ≤1/n(1+δ)∑_i=1^n(ξ_1_i + δξ_2_i) + 3√(ln(2/θ)/2n) + 4N/n(1+δ)√(∑_i=1^n(Z_1^(x_i)^2 + δ^2 Z_2^(x_i)^2)).
Let's consider a loss function Ω: ℝ→ [0, 1] defined as:
Ω(x) =
1, if x < 0,
1 - x/(1+δ), if 0 ≤ x ≤ 1+δ,
0, otherwise.
Then, we have
ℙ_𝒟(yg(x)≤0) ≤𝔼_𝒟[Ω(ĝ(x, y))].
Using Lemma <ref>, we have
𝔼_𝒟[Ω(ĝ(x, y))-1] ≤𝔼̂_T[Ω(ĝ(x, y))-1] + 3√(ln(2/θ)/2n) + R̂_n((Ω-1)∘𝒢).
Therefore,
𝔼_𝒟[Ω(ĝ(x, y))] ≤𝔼̂_T[Ω(ĝ(x, y))] + 3√(ln(2/θ)/2n) + R̂_n((Ω-1)∘𝒢).
By using Eq. <ref> and Eq. <ref>, we deduce:
𝔼̂_T[Ω(ĝ(x, y))] ≤1/n(1+δ)∑_i=1^n [1 + δ - y_ig(x_i)]_+
= 1/n(1+δ)∑_i=1^n [1-y_ig_A(x_i^A) + δ(1 - y_ig_B(x_i^B)) ]_+
≤1/n(1+δ)∑_i=1^n {[1-y_ig_A(x_i^A)]_+ + δ[1 - y_ig_B(x_i^B)]_+ }
≤1/n(1+δ)∑_i=1^n [y_i(ξ_1_i + δξ_2_i)]_+
≤1/n(1+δ)∑_i=1^n (ξ_1_i + δξ_2_i).
Given that (Ω - 1)(x) is a Lipschitz function with a constant of 1/(1+δ), passes through the origin, and is uniformly bounded, we can derive the following inequality based on Lemma <ref>:
R̂_n((Ω-1) ∘𝒢̂) ≤ 2/(1+δ) R̂_n(𝒢̂).
Using the Definition <ref>, we obtain:
R̂_n(𝒢̂) = 𝔼_σ[ sup_{ĝ∈𝒢̂} | 2/n∑_i=1^n σ_i ĝ(x_i, y_i) | ]
= 𝔼_σ[ sup_{g ∈𝒢} | 2/n∑_i=1^n σ_i y_ig(x_i) | ]
= 𝔼_σ[ sup_{g ∈𝒢} | 2/n∑_i=1^n σ_i g(x_i) | ]
= R̂_n(𝒢).
Combining this with Lemma <ref>, we have:
R̂_n((Ω-1) ∘𝒢̂) ≤ 2/(1+δ) R̂_n(𝒢)
= 4N/n(1+δ)√(∑_i=1^n Z^(x_i)^TZ^(x_i))
= 4N/n(1+δ)√(∑_i=1^n (Z_1^(x_i)^TZ_1^(x_i) + δ^2 Z_2^(x_i)^TZ_2^(x_i)))
= 4N/n(1+δ)√(∑_i=1^n(Z_1^(x_i)^2 + δ^2 Z_2^(x_i)^2)).
Moreover, by combining Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), we obtain the stated inequality, which demonstrates the generalization error bound of MvRVFL.
We define the classification error function g(x) by employing the integrated decision function as specified in MvRVFL (<ref>). By integrating the empirical Rademacher complexity of 𝒢 with the empirical expectation of 𝒢̂, we derive a margin-based estimate of the misclassification probability. Clearly, MvRVFL provides a robust generalization error bound for classification as n becomes sufficiently large. As the training error decreases, the generalization error correspondingly reduces. This theoretical result ensures that MvRVFL exhibits improved generalization performance.
§.§ Sensitivity Analysis
In this section, we perform sensitivity analyses of several key hyperparameters of the proposed MvRVFL model. These analyses cover the effect of the hyperparameters 𝒞_1 and 𝒞_2, discussed in subsection <ref>, the performance with different numbers of training samples on the AwA dataset, discussed in subsection <ref>, and finally the sensitivity of θ and ρ, discussed in subsection <ref>.
§.§.§ Effect of the parameters 𝒞_1 and 𝒞_2 on the performance of the proposed MvRVFL model on the AwA dataset
The performance of the proposed MvRVFL model is assessed by adjusting the values of 𝒞_1 and 𝒞_2. This thorough analysis helps us pinpoint the configuration that enhances predictive accuracy and improves the model's robustness against new data samples. Fig. <ref> illustrates significant variations in the model's accuracy across different values of 𝒞_1 and 𝒞_2, underscoring the model's sensitivity to these specific hyperparameters.
According to the findings presented in Fig. <ref>, optimal performance of the proposed model is observed within the 𝒞_1 and 𝒞_2 ranges of 10^-4 to 10^4. These findings indicate that both 𝒞_1 and 𝒞_2 significantly impact the model's performance. Hence, it is advisable to meticulously select the hyperparameters 𝒞_1 and 𝒞_2 in the MvRVFL model to achieve superior generalization performance.
§.§.§ Performance with different number of training samples on AwA dataset
We assess how the proposed MvRVFL model's performance varies with different numbers of training samples. Fig. <ref> shows how the Acc. changes as the number of training samples ranges from 86 to 336. The x-axis displays the number of training samples, and the y-axis shows the corresponding Acc. values. It is observed that the Acc. value generally increases with the rise in the number of training samples. This is because an increase in training samples provides more data for the model to learn from, leading to improved accuracy in the classification results.
§.§ Effect of hyperparameters C1 and sigma
§.§.§ Sensitivity of θ and ρ combinations
We explore the sensitivity of the MvRVFL model to various values of the parameters θ and ρ. The parameter θ regulates the gap between the two views, while ρ is linked to the coupling term ξ_1^Tξ_2. We vary the values of θ and ρ from 10^-5 to 10^5 and record the corresponding Acc. results. With the other parameters held constant at their optimal settings, Fig. <ref> illustrates how the performance of MvRVFL changes as the values of θ and ρ are varied.
From the perspective of the hyperparameters θ and ρ, the parameter θ regulates the gap between view A and view B. For the same value of ρ, when θ > 10^-1, view B plays a more significant role than view A in learning the overall model; otherwise, view A is more important. For instance, on the 1000 and 10000 sub-datasets, the Acc. reaches its highest value when the parameter θ is relatively large (e.g., 10^3 or 10^5), suggesting that view B holds greater importance than view A. Furthermore, on the 103000 and 143000 sub-datasets, the optimal performance is achieved when the parameter θ is small (e.g., 10^-5 or 10^-3), indicating that view A holds more significance than view B.
§.§ Classification accuracy tables of the proposed MvRVFL models along with baseline models on UCI, KEEL, AwA and Corel5k datasets
In this section, we present the performance of the proposed MvRVFL models, along with the baseline models.
|
http://arxiv.org/abs/2409.03356v1 | 20240905085907 | Magnetic field tunable spectral response of kinetic inductance detectors | [
"F. Levy-Bertrand",
"M. Calvo",
"U. Chowdhury",
"A. Gomez",
"J. Goupy",
"A. Monfardini"
] | cond-mat.supr-con | [
"cond-mat.supr-con",
"astro-ph.IM",
"physics.ins-det"
] |
[email protected]
Univ. Grenoble Alpes, CNRS, Grenoble INP, Institut Néel, 38000 Grenoble, France
Groupement d'Intérêt Scientifique KID, Grenoble and Saint Martin d'Hères, France
Univ. Grenoble Alpes, CNRS, Grenoble INP, Institut Néel, 38000 Grenoble, France
Groupement d'Intérêt Scientifique KID, Grenoble and Saint Martin d'Hères, France
Univ. Grenoble Alpes, CNRS, Grenoble INP, Institut Néel, 38000 Grenoble, France
Groupement d'Intérêt Scientifique KID, Grenoble and Saint Martin d'Hères, France
Centro de Astrobiología (CSIC-INTA), Ctra. Torrejon-Ajalvir km.4, 28850 Torrejon de Ardoz, Spain
CEA/DRF/IRIG – Grenoble - France
Univ. Grenoble Alpes, CNRS, Grenoble INP, Institut Néel, 38000 Grenoble, France
Groupement d'Intérêt Scientifique KID, Grenoble and Saint Martin d'Hères, France
§ ABSTRACT
We tune the onset of optical response in aluminium kinetic inductance detectors from a natural cutoff frequency of 90 GHz to 60 GHz by applying an external magnetic field. The change in spectral response is due to the decrease of the superconducting gap, from 90 GHz at zero magnetic field to 60 GHz at a magnetic field of around 3 mT. We characterize the variation of the superconducting gap, the detector frequency shift and the internal quality factor as a function of the applied field. In principle, the magnetic field tunable response could be used to make spectroscopic measurements. In practice, the internal quality factor behaves hysteretically with the magnetic field due to the presence of vortices in the thin superconducting film. We conclude by discussing possible solutions to achieve spectroscopy measurements using kinetic inductance detectors and magnetic field.
Magnetic field tunable spectral response of kinetic inductance detectors
A. Monfardini
September 9, 2024
========================================================================
Kinetic Inductance Detectors (KID), based on planar superconducting resonators <cit.>, are popular detectors for astrophysical observations <cit.> and interesting devices for physics studies <cit.>.
One of the current challenges in millimetre astrophysics observations is to achieve a given degree of spectral resolution without sacrificing the large field-of-view of the current cameras.
Here we explore a solution to achieve that goal by tuning the spectral response of KID with an external magnetic field. We present the first demonstration of the optical response of KID under a variable magnetic field. We also evaluate the effects of the applied magnetic field on the resonators quality factors and conclude by discussing future improvements of our initial KID design.
KID are a particular implementation of superconducting resonators. They are planar LC-resonant circuits made of superconducting thin films deposited on an insulating substrate, optimized for photon detection. The photon detection principle consists in monitoring the resonance frequency shift, which is proportional to the incident power. Incident radiation breaks Cooper pairs, generating quasi-particles and modifying the kinetic inductance, resulting in a shift of the resonance frequency f=1/(2π√(LC)), where C is the capacitance and L=L_K+L_G is the total inductance, i.e. the sum of the kinetic and geometric inductances. The internal quality factor of the resonator decreases with the number of generated quasi-particles.
Figure <ref> shows a schematic view of the experimental set-up. The array of KID is made of a 200 nm aluminum film deposited on a high-resistivity silicon wafer. Four-wire resistivity measurements performed on the same 200 nm aluminum film give a critical temperature T_c∼1.2K and a perpendicular critical field H_c∼4.5mT. Each KID is coupled via its inductor to the readout line. The inductor L has a Hilbert shape, sensitive to all in-plane polarizations <cit.>. The lines are 4 μm wide. A magnetic field is applied perpendicular to the KID using a custom Helmholtz coil. The KID and the coil are cooled to approximately 100 mK in a dilution refrigerator with optical access. Illumination is controlled by a Martin-Puplett spectrometer <cit.> at room temperature and reaches the KID through a suitable series of optical filters and lenses <cit.>.
The bottom panel of figure <ref> displays the measured spectral response of the KID as a function of the incident optical frequency at different magnetic fields. The response extends from the 2Δ superconducting gap up to the low-pass filter frequency (180 GHz). At zero magnetic field the 2Δ superconducting gap equals 90 GHz, in agreement with the BCS value 3.52k_BT_c/h∼88GHz. When increasing the magnetic field, the 2Δ gap decreases, widening the response band from 90-180 GHz to 60-180 GHz at about 3 mT. The level of noise in the spectra, observable outside the response band, below 2Δ and above 180 GHz, increases with the magnetic field. Within the limits of noise and measurement accuracy, about ∼1 GHz resolution for the optical frequency, the common-band response appears identical. So, in principle, by subtracting the response measured at zero magnetic field from that measured at 3 mT, we could obtain the response of the 60-90 GHz band. Extrapolating a little further, subtracting the responses measured at very close magnetic fields would give access to the response of a highly resolved spectral band: at 2 mT, a 0.001 mT step would give a spectral resolution R=ν/Δν of about 8000 at 80 GHz (i.e. a spectral band of about 0.01 GHz).
In practice, it is not so straightforward to access the spectroscopic signal. In a magnetic field, the frequency shift is due both to changes of the optical load, the signal of interest, and to changes of the magnetic field. The latter are due to variations in the kinetic inductance L_K, resulting from the change of the 2Δ superconducting gap as L_K=ħ R_□/(πΔ), where R_□ is the sheet resistance in the normal state.
Figure <ref> illustrates the frequency variations due to both the magnetic field and the change in optical load. The figure shows Vector Network Analyser (VNA) response of a KID under two optical loads and two magnetic fields.
The red and blue curves correspond, respectively, to measurements under a high or low optical load, with the 300 K window of the cryostat either closed by a dark plastic cap or closed by a mirror.
The dark plastic cap acts as blackbody source at about 300 K (T_bb∼300K).
The mirror, which reflects the emission from the coldest stages of the cryostat, acts as a very cold blackbody source at about 0 K (T_bb∼0K).
Varying the magnetic field from 0 to 2.7 mT, the resonance frequency shifts by about 2 MHz (from left to right panel). Due to the optical signal, the resonance frequency shifts by 6 kHz at 0 mT and by 50 kHz at 2.7 mT.
To access the absolute spectroscopic signal preliminary calibrations in magnetic field are required. Figure <ref> shows the variation of the superconducting gap, the frequency and the quality factor of KID with respect to the magnetic field. Within the errors bars both the 2Δ superconducting gap and the δ f/f relative frequency shift remain identical while ramping up and down the magnetic field. The magnetic field dependence of the superconducting gap follows the following formula valid for thin film <cit.> (i.e. for d/λ<<1 where d is the thickness and λ is the magnetic penetration depth):
Δ(H)=Δ_0√(1-(H/H_c)^2)
where Δ_0∼ 90 GHz is the gap at zero magnetic field and H_c∼ 4.5 mT is the critical field. The relative frequency shift under an almost constant small optical load is adjusted with the following formula valid for δ f<< f:
δ f(H)/f=-α_0/2[1-√(1-(H/H_c)^2) ]
where α_0=L_K^0/(L_K^0+L_G)∼ 1.4% is the ratio of the kinetic inductance to the total inductance at zero magnetic field. The expression is the usual one <cit.>, δ f/f=-α/2×δ L_K/L_K, with the magnetic field dependence of L_K inserted. The value of α is in agreement with the one estimated from the actual resonance frequency f_L_K+L_G, the resonance frequency f_L_G simulated with the SONNET software <cit.> without any kinetic inductance, and the formula α=1-(f_L_K+L_G/ f_L_G)^2, resulting in α∼1-4%. The value of α is low because the kinetic inductance of a 200 nm thick film of aluminum is small <cit.>.
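The two expressions above, together with the values quoted in the text (2Δ_0∼90 GHz, H_c∼4.5 mT, α_0∼1.4%), can be evaluated directly; the short script below (our own illustration) also recovers the spectral band of about 0.01 GHz and the resolution R of about 8000 quoted earlier for a 0.001 mT step around 2 mT.

```python
import numpy as np

two_delta_0 = 90.0   # zero-field gap 2*Delta_0 in GHz
H_c = 4.5            # perpendicular critical field in mT
alpha_0 = 0.014      # kinetic inductance fraction alpha_0 at zero field

def two_delta(H_mT):
    """Field-dependent gap 2*Delta(H) in GHz, thin-film expression above."""
    return two_delta_0 * np.sqrt(1.0 - (H_mT / H_c) ** 2)

def rel_freq_shift(H_mT):
    """Relative resonance-frequency shift delta f / f."""
    return -alpha_0 / 2.0 * (1.0 - np.sqrt(1.0 - (H_mT / H_c) ** 2))

H, dH = 2.0, 0.001                            # bias field and field step in mT
band = two_delta(H) - two_delta(H + dH)       # spectral band isolated by the step, ~0.01 GHz
R = two_delta(H) / band                       # resolution nu/delta_nu, ~8e3
print(band, R, rel_freq_shift(2.7))
```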
The internal quality factor Q_i varies strongly with the magnetic field, from about 10^6-10^5 down to 10^3, and its value depends on the history of the magnetic field. This behavior is due to the presence of vortices in the resonator meander. The vortices are in the so-called plastic regime, in which they are alternately pinned and mobile when sweeping the magnetic field <cit.>. Vortices develop in aluminum thin films because, contrary to bulk aluminum, they are type-II superconductors <cit.>.
To achieve spectroscopy using kinetic inductance detectors and a magnetic field, the first point to address is the vortex issue. Theory predicts that in thin superconducting films, vortices develop above a magnetic field perpendicular to the film H_0=πΦ_0/(4w^2), where Φ_0 is the quantum of magnetic flux and w is the width of the superconducting line <cit.>. Experiments on 200 nm thick Nb superconducting lines validate the formula <cit.> and show that the critical field for vortex nucleation, H_0, drops drastically from about 200 mT in bulk Nb to below 1 mT for a 200 nm thick Nb film. For superconducting thin films, the width of the lines must be of the order of 500 nm to avoid the formation of vortices up to 5 mT. Thus, future development of spectroscopy using kinetic inductance detectors and magnetic field requires a new KID design and e-beam lithography.
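For reference, H_0=πΦ_0/(4w^2) can be evaluated for the present 4 μm lines and for the proposed ∼500 nm lines (SI units, Φ_0≈2.07×10^-15 Wb); the quick check below is consistent with the statements above.

```python
import numpy as np

PHI_0 = 2.068e-15          # magnetic flux quantum in Wb

def H0_vortex(width_m):
    """Vortex-nucleation field H_0 = pi * Phi_0 / (4 w^2), returned in mT."""
    return np.pi * PHI_0 / (4.0 * width_m ** 2) * 1e3

print(H0_vortex(4e-6))     # ~0.1 mT for the present 4 um lines: vortices enter at mT fields
print(H0_vortex(500e-9))   # ~6.5 mT for 500 nm lines: vortex-free up to the ~5 mT range
```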
Another point to address is the generation of the magnetic field. The diameter of the magnetic coil must scale up with the diameter of the field of view, which is proportional to the focal plane diameter. One solution is to implement a large standard coil at room temperature, around the cryostat. The magnetic field at the center of the coil is H=μ_0 I N/d, where μ_0 is the vacuum permeability, I is the current, N is the number of turns and d is the diameter of the coil. For a diameter of 30 cm, a current of 1 A, and 3000 turns, the magnetic field equals 6 mT. This solution is the simplest from a cryogenic point of view, but the magnetic field on the KID may not be sufficiently homogeneous.
We tune the spectral response of kinetic inductance detectors with a magnetic field. The change in spectral response is due to the decrease of the superconducting gap when increasing the magnetic field. Our results suggest that it may be possible to achieve high-resolution spectroscopy over a wide field of view using KID and a magnetic field. The main pending limitation is the formation of vortices in the lines of the KID, which can be solved by reducing the line width. The spectral resolution could reach several thousand, depending also on the steepness of the superconducting gap. The spectral bands could be adjusted with the superconducting material and the magnetic field. Aluminum films with a critical temperature of T_c∼1.2 K and H_c∼5 mT give access to the relevant 50-100 GHz band. Tantalum, with a critical temperature of T_c∼4.5 K and H_c∼80 mT, could in principle give access to a 200-400 GHz band.
We acknowledge the contributions of G. Donnier-Valentin and T. Gandit, respectively, for the design and the realization of the Helmholtz superconducting coil. We acknowledge the contribution of O. Bourrion for the electronic acquisition. We thank B. Sacépé for excellent discussions. We acknowledge the overall support of the Cryogenics and Electronics groups at Institut Néel and LPSC. This work has been partially supported by the French National Research Agency through the LabEx FOCUS Grant No. ANR-11-LABX-0013 and the EU's Horizon 2020 research and innovation program under Grant Agreement No. 800923 (SUPERTED). A. G. acknowledges financial support from PID2022-137779OB-C41 funded by the Spanish MCIN/AEI/10.13039/501100011033.
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§ REFERENCES
Day P. K. Day, H. G. LeDuc, B. A. Mazin, A. Vayonakis, and J. Zmuidzinas, https://doi.org/10.1038/nature02037A Broadband Superconducting Detector Suitable for Use in Large Arrays, Nature 425, 817 (2003).
Concerto CONCERTO collaboration, https://doi.org/10.1051/0004-6361/202038456A wide field-of-view low-resolution spectrometer at APEX: Instrument design and scientific forecast, Astronomy & Astrophysics 642, A60 (2021).
NIKA2 NIKA2 collaboration, https://doi.org/10.1051/0004-6361/201936220Calibration and performance of the NIKA2 camera at the IRAM 30-m Telescope , Astronomy & Astrophysics 637, A71 (2020).
axion B. Aja et al., https://doi.org/10.1088/1475-7516/2022/11/044The Canfranc Axion Detection Experiment (CADEx): Search for Axions at 90 GHz with Kinetic Inductance Detectors, J. Cosmol. Astropart. Phys. 2022, 044 (2022).
BULLKID A. Cruciani et al., https://doi.org/10.1063/5.0128723BULLKID: Monolithic Array of Particle Absorbers Sensed by Kinetic Inductance Detectors, Applied Physics Letters 121, 213504 (2022).
sc_coll_modes F. Levy-Bertrand et al., https://doi.org/10.1103/PhysRevB.99.094506Electrodynamics of Granular Aluminum from Superconductor to Insulator: Observation of Collective Superconducting Modes, Phys. Rev. B 99, 094506 (2019).
Visser P. J. de Visser, S. J. C. Yates, T. Guruswamy, D. J. Goldie, S. Withington, A. Neto, N. Llombart, A. M. Baryshev, T. M. Klapwijk, and J. J. A. Baselmans, https://doi.org/10.1063/1.4923097The Non-Equilibrium Response of a Superconductor to Pair-Breaking Radiation Measured over a Broad Frequency Band, Applied Physics Letters 106, 252602 (2015).
Driessen E. F. C. Driessen, P. C. J. J. Coumou, R. R. Tromp, P. J. de Visser, and T. M. Klapwijk, https://doi.org/10.1103/PhysRevLett.109.107003Strongly Disordered TiN and NbTiN S-Wave Superconductors Probed by Microwave Electrodynamics, Phys. Rev. Lett. 109, 107003 (2012).
Lukas L. Grünhaupt, N. Maleeva, S. T. Skacel, M. Calvo, F. Levy-Bertrand, A. V. Ustinov, H. Rotzinger, A. Monfardini, G. Catelani, and I. M. Pop, https://doi.org/10.1103/PhysRevLett.121.117001Loss Mechanisms and Quasiparticle Dynamics in Superconducting Microwave Resonators Made of Thin-Film Granular Aluminum, Phys. Rev. Lett. 121, 117001 (2018).
SONNET1 J. C. Rautio and R. F. Harrington, https://doi.org/10.1109/TMTT.1987.1133738An Electromagnetic Time-Harmonic Analysis of Shielded Microstrip Circuits, IEEE Trans. Microwave Theory Techn. 35, 726 (1987).
SONNET2 SONNET, Sonnet Software, Liverpool, NY, 2005.
Adane A. Adane, C. Boucher, G. Coiffard, S. Leclercq, K. F. Schuster, J. Goupy, M. Calvo, C. Hoarau, and A. Monfardini, https://doi.org/10.1007/s10909-016-1490-3Crosstalk in a KID Array Caused by the Thickness Variation of Superconducting Metal, J Low Temp Phys 184, 137 (2016).
Lopez D. López-Núñez, Q. P. Montserrat, G. Rius, E. Bertoldo, A. Torras-Coloma, M. Martínez, and P. Forn-Díaz, http://arxiv.org/abs/2311.14119Magnetic Penetration Depth of Aluminum Thin Films, arXiv:2311.14119.
Tinkham M. Tinkham, Introduction to Superconductivity.
Monfardini_Hilbert A. Monfardini et al., https://doi.org/10.1007/s10909-013-0985-4Latest NIKA Results and the NIKA-2 Project, J Low Temp Phys 176, 787 (2014).
MP_suppl Details on the Martin-Puplett spectrometer are in the Supplementary information of N. Maleeva et al., https://doi.org/10.1038/s41467-018-06386-9Circuit Quantum Electrodynamics of Granular Aluminum Resonators, Nat Commun 9, 3889 (2018).
NIKA1 A. Monfardini et al., https://doi.org/10.1088/0067-0049/194/2/24A Dual-band Millimeter-wave Kinetic Inductance Camera for the IRAM 30 m Telescope, The Astrophysical Journal Supplement Series 194, Number 2, 24 (2011).
Annunziata A. J. Annunziata, D. F. Santavicca, L. Frunzio, G. Catelani, M. J. Rooks, A. Frydman, and D. E. Prober, https://doi.org/10.1088/0957-4484/21/44/445202Tunable Superconducting Nanoinductors, Nanotechnology 21, 445202 (2010).
Douglas D. H. Douglass, https://doi.org/10.1103/PhysRevLett.6.346Magnetic Field Dependence of the Superconducting Energy Gap, Phys. Rev. Lett. 6, 346 (1961).
Borisov K. Borisov et al., https://doi.org/10.1063/5.0018012Superconducting Granular Aluminum Resonators Resilient to Magnetic Fields up to 1 Tesla, Applied Physics Letters 117, 120502 (2020).
Song C. Song, T. W. Heitmann, M. P. DeFeo, K. Yu, R. McDermott, M. Neeley, J. M. Martinis, and B. L. T. Plourde, https://doi.org/10.1103/PhysRevB.79.174512Microwave Response of Vortices in Superconducting Thin Films of Re and Al, Phys. Rev. B 79, 174512 (2009).
Maksimova G. M. Maksimova, https://doi.org/10.1134/1.1130618Mixed State and Critical Current in Narrow Semiconducting Films, Phys. Solid State 40, 1607 (1998).
Stan G. Stan, S. B. Field, and J. M. Martinis, https://doi.org/10.1103/PhysRevLett.92.097003Critical Field for Complete Vortex Expulsion from Narrow Superconducting Strips, Phys. Rev. Lett. 92, 097003 (2004).
|
http://arxiv.org/abs/2409.03451v1 | 20240905115836 | Automatic occlusion removal from 3D maps for maritime situational awareness | [
"Felix Sattler",
"Borja Carrillo Perez",
"Maurice Stephan",
"Sarah Barnes"
] | cs.CV | [
"cs.CV"
] |
Automatic occlusion removal from 3D maps for maritime situational awareness
Felix Sattler1, Borja Carrillo Perez1, Maurice Stephan1, Sarah Barnes1
1German Aerospace Center (DLR), Institute for the Protection of Maritime Infrastructures,
Bremerhaven, Germany
Received Month dd, yyyy; accepted Month dd, yyyy
===============================================================================================================================================================================================
§ ABSTRACT
We introduce a novel method for updating 3D geospatial models, specifically targeting occlusion removal in large-scale maritime environments.
Traditional 3D reconstruction techniques often face problems with dynamic objects, like cars or vessels, that obscure the true environment, leading to inaccurate models or requiring extensive manual editing.
Our approach leverages deep learning techniques, including instance segmentation and generative inpainting, to directly modify both the texture and geometry of 3D meshes without the need for costly reprocessing.
By selectively targeting occluding objects and preserving static elements, the method enhances both geometric and visual accuracy.
This approach not only preserves structural and textural details of map data but also maintains compatibility with current geospatial standards, ensuring robust performance across diverse datasets.
The results demonstrate significant improvements in 3D model fidelity, making this method highly applicable for maritime situational awareness and the dynamic display of auxiliary information.
3D geospatial models, Occlusion removal, Generative inpainting, Maritime situational awareness, Instance segmentation
§ INTRODUCTION
In the context of maritime security, different stakeholders such as port authorities, law enforcement agencies and research institutions maintain large, geospatial 3D assets (for example, digital surface models, DSM) that are used for situational awareness and on-site monitoring.
Static 3D information is used as a geospatial layer onto which different auxiliary information is displayed dynamically <cit.>.
When performing 3D reconstruction of static maritime environments from remote sensing data, dynamic objects that occlude the environment (occluders) are almost always present.
In port infrastructures this can refer to berthed vessels on water bodies, shipping containers in terminals or parked vehicles along the quay.
Generating a 3D map that incorporates these occluding objects does not reflect the true environment and limits the insertion of auxiliary information into the 3D map.
The generation of large 3D assets is resource intensive and requires specialized processing techniques such as photogrammetry which is capable of producing 3D geometries from remote sensing data.
Simply removing all occluding objects manually from the collected imagery presents two disadvantages:
First, masking out objects during preprocessing by retouching further increases resource demand during model generation.
Additionally, especially in large regions with sparse pattern information (for example water bodies or container depots) occluders provide important features that help to register images more robustly and thus improve the reconstructed geometry.
Therefore, their removal during preprocessing is not always feasible.
In this work, we present a novel method for direct 3D geometry processing to enable the removal of occluders as a postprocessing step.
With this method, existing 3D assets can be reprocessed to enable the insertion of auxiliary information.
Users, such as scientific staff, government authorities or analysts can reuse existing DSMs instead of creating new ones.
Our framework combines state-of-the-art instance segmentation and generative inpainting using deep learning with projection mapping to correct surface textures and remesh geometry information.
The proposed method is robust and allows users to select classes of occluding objects that will be removed automatically without regenerating the whole 3D asset.
In the remainder of this paper an overview of related works will be given (Section <ref>), then the method will be introduced (Section <ref>), followed by an application to a real-world dataset (Section <ref>) and a summary (Section <ref>).
§ RELATED WORKS
To effectively modify a 3D mesh, it is essential to alter both texture and geometry information.
A key technique utilized in this context is mask-aware inpainting, which has become increasingly significant in recent advancements in image processing <cit.>.
Inpainting methods aim to fill or reconstruct missing or occluded regions of an image or 3D model, guided by a mask that specifies the areas to be reconstructed.
Traditionally, inpainting methods relied on patch-based<cit.> or geometric constraints<cit.>, which often struggle with complex details and large occlusions, leading to artifacts or blurring<cit.>.
Recent deep learning methods, such as those based on generative adversarial networks (GANs)<cit.>, diffusion models<cit.>, and image convolutions<cit.> in the frequency domain, have significantly improved inpainting capabilities.
In remote sensing, 2D inpainting has been used successfully in various applications.
For instance, GANs have been employed to fill in cloud-occluded areas in satellite images<cit.>, where traditional methods fall short.
Similarly, inpainting techniques have been applied to remove vehicle occlusions to produce consistent lane markings for semantic analysis<cit.>, and to reconstruct false-color data in sea surface temperature (SST) images<cit.>.
This demonstrates the versatility of inpainting across different imaging modalities.
2D inpainting techniques have also been applied to alter 3D data.
Engels et al. <cit.> combined traditional inpainting with plane-fitting to modify 3D point clouds of building facades in urban scenes, though the method is limited by poor mask detection and inpainting quality.
To improve upon this, recent approaches have proposed using state-of-the-art 2D inpainting methods to remove objects from images and then refit the 3D model with the new data.
Due to the ability of deep-learning inpainting methods to generalize across a variety of imaging data, it is possible to modify non-color data such as depth information or 3D position data.
Mirzaei et al.<cit.> proposed a method called SPIn-NeRF generating a neural radiance field (NERF)<cit.> and reoptimizing it with 2D inpainted depth and color data.
Similarly, Prabhu et al. <cit.> jointly optimized a NERF and a diffusion model for 2D inpainting.
However, these methods encode data as neural representations that are difficult to integrate with modern geospatial data standards like 3DTiles OGC<cit.>.
Also, a reoptimization of neural representations is slow and requires users to convert existing 3D data into a suitable format.
Nevertheless, works like SPIn-NeRF showcase the potential of inpainting for optimizing 3D geometry.
Building on these advances, we introduce a novel method that combines 2D instance segmentation and mask-aware inpainting with 3D reprojection and remeshing.
We perform instance segmentation on an orthogonal bird's eye view of the 3D map and then apply inpainting to color and elevation data.
This approach directly modifies the surface texture and geometry of 3D meshes without the need for expensive retraining or recomputation.
This approach allows users to reuse existing 3D data, such as DSMs and is conceptualized to work robustly with large 3D datasets.
§ PROPOSED ARCHITECTURE
Our method for updating 3D geospatial models integrates mask-aware inpainting and geometric remeshing, optimized for remote sensing data from satellite or aerial applications.
In Figure <ref> the process is illustrated.
It begins by setting up an orthogonal camera in a bird's-eye view (BEV).
This camera setup ensures that the projection is consistent across the entire scene, avoiding perspective distortions.
The camera transformation matrix and projection matrix from BEV to 3D mesh are stored for later use in remeshing and reprojection.
Then, a color (RGB) and a height map are rendered.
The height map is a 16-bit normalized raster map of the elevation of the sampled 3D mesh.
The spatial resolution of color and position samples is user-definable and should be close to the ground sampling distance (GSD) of the source data.
To handle large-scale maps efficiently, the entire pipeline operates in a tiled manner, processing overlapping image patches.
For optimal performance, the sampled area of a patch should correspond to the size of the geospatial 3D map tiles.
After generating a BEV patch, instance segmentation is performed using an appropriate deep learning approach which is explained in detail in Section <ref>.
It is important to note that the general framework does not require a specific instance segmentation model but works with any model that is trained to output masks and works with the image patch size.
We allow the specification of occluder classes depending on the model and training data.
For the maritime domain occluders are mostly vehicles, vessels and port infrastructure such as cranes.
This semantic analysis of the rendered color images, identifies areas in the 3D model that require updating, such as occluded or outdated regions.
The predicated 2D masks are then forwarded to two subsequent inpainting stages: a color and a height pass.
In the color pass, deep learning-based generative inpainting techniques remove occluders from the surface texture outputting predicted color images.
Concurrently, the position pass refines the geometric details by inpainting height information, a process closely related to SPIn-NeRF <cit.>, and outputting predicted height maps.
This approach leverages the ability of deep-learning based inpainting to generalize well to non-color data, ensuring accurate reconstruction of the scene’s elevation and geometry.
When all image patches have been processed and stored, geometric remeshing and color reprojection are performed.
Remeshing is performed by projecting the 3D vertices of the mesh into the reference frame of the orthogonal camera.
The elevation values of the vertices of the original 3D model are replaced with the inpainted ones. After this step, the occluding geometry conforms to the curvature of the background, which reflects the surrounding environment rather than being strictly flat.
To clean up the projected geometry, vertices are merged based on distance, resulting in a consistent topology.
Color data is then reprojected onto the cleaned vertices, ensuring the updated 3D model is visually accurate.
By default, we generate a second set of texture coordinates with a blending mask which is compatible with the 3D Tiles standard and its underlying GLTF format<cit.>.
Alternatively, a resampling of the original texture using rasterization can be performed to generate a new texture.
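The core of this remeshing step can be sketched as follows, under the simplifying assumptions that the mesh is given as a vertex array, that the stored BEV projection reduces to a 2×3 affine map from world x/y to pixel coordinates, and that distance-based vertex merging is approximated by grid snapping; all names are illustrative rather than taken from our implementation.

import numpy as np

def remesh_heights(vertices, height_map, world_to_px, z_scale, merge_dist=0.4):
    """vertices: (N, 3) array; height_map: inpainted BEV raster; world_to_px: 2x3 affine."""
    v = vertices.copy()
    # project vertex x/y into the reference frame of the orthogonal (BEV) camera
    px = (world_to_px @ np.c_[v[:, :2], np.ones(len(v))].T).T
    rows = np.clip(np.round(px[:, 1]).astype(int), 0, height_map.shape[0] - 1)
    cols = np.clip(np.round(px[:, 0]).astype(int), 0, height_map.shape[1] - 1)
    # replace original elevations with the de-normalized inpainted heights
    v[:, 2] = height_map[rows, cols].astype(np.float64) * z_scale
    # merge vertices closer than merge_dist by snapping them to a coarse grid
    keys = np.round(v / merge_dist).astype(np.int64)
    _, index, inverse = np.unique(keys, axis=0, return_index=True, return_inverse=True)
    return v[index], inverse   # merged vertices and an old-to-new index map for the faces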
Our approach improves upon earlier methods like Engels et al. <cit.>, who used a similar concept with 3D point clouds.
However, their approach lacked semantic analysis, resulting in poor segmentation and ineffective reconstruction of geometric structures.
They also required remeshing the 3D point cloud using standard Poisson reconstruction <cit.> and recomputation of the surface textures due to their reliance on point-based plane-fitting for geometry correction.
§ RESULTS
Our proposed method for updating 3D geospatial models produces a clean mesh while preserving texture and geometry fidelity of static regions.
Figure <ref> shows an overview of the technique on a large scale harbor area.
The results presented here illustrate the effectiveness of the mask-aware inpainting and geometric remeshing pipeline on aerial imagery.
§.§ Dataset
All aerial data used for the 3D reconstruction of the DSM shown here was captured using a fixed-wing drone flying at an altitude of approximately 330.
The area of interest is a port in the south of Bremerhaven, Germany, recorded with a ground sampling distance (GSD) of approximately 3.7.
In total, the area covered by the dataset is roughly 1^2 (1030×900).
The generated 3D mesh was sampled from the BEV with a GSD of ∼6 to generate patches with a size of 2048×2048 pixels.
We chose this to match the image resolution on which the inpainting methods were trained.
Additionally, we chose a 50% overlap across all images.
Overlap improves the completeness of detections during instance segmentation, since detections from multiple overlapping patches can be combined.
In total we processed 195 patches to generate the modified 3D map depicted on the right in Figure <ref>.
§.§ Instance Segmentation and Inpainting
Figure <ref> illustrates how we combine instance segmentation and inpainting.
The figure shows the normalized difference between the original (source) elevation and the generated (inpainted) elevation as a heatmap overlaid using the instance masks generated by instance segmentation.
Areas requiring updates are clearly identified by the neural network while all static environments are unaltered.
To accomplish this, we employed YOLOv8 <cit.>, a state-of-the-art real-time instance segmentation algorithm, in its largest configuration (YOLOv8x).
We used weights pretrained on MS COCO <cit.> and fine-tuned the model on the DOTAv2 dataset <cit.>, an aerial instance segmentation dataset that contains the classes of interest: vehicle and vessel.
Instance segmentation was performed at LOD6 (the highest resolution) for 195 patches.
The masks generated for the occluder classes, such as cars and vessels, were dilated using a 5×5 kernel to thicken thin masks, and masks from neighboring patches were merged to account for any missing detections at the image edges.
These masks were then downsampled for application to lower LODs.
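A rough OpenCV sketch of this mask post-processing (5×5 dilation, merging of detections from overlapping patches, nearest-neighbor downsampling for lower LODs); it is an illustration rather than the exact implementation.

import cv2
import numpy as np

def postprocess_masks(aligned_masks, lod_scale=0.5):
    """aligned_masks: binary uint8 masks from overlapping patches, warped to a common frame."""
    merged = np.zeros_like(aligned_masks[0])
    for m in aligned_masks:                          # merge detections from neighboring patches
        merged = cv2.bitwise_or(merged, m)
    kernel = np.ones((5, 5), np.uint8)
    dilated = cv2.dilate(merged, kernel, iterations=1)   # widen thin masks
    low_lod = cv2.resize(dilated, None, fx=lod_scale, fy=lod_scale,
                         interpolation=cv2.INTER_NEAREST)  # downsample for lower LODs
    return dilated, low_lod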
The proposed framework is agnostic to the inpainting method used, so we compared three recent and established architectures: CoModGAN <cit.>, a GAN-based architecture; MAT <cit.>, a transformer-based architecture; and LaMa <cit.>, a neural network architecture using Fourier-based convolutions.
Figure <ref> shows a qualitative comparison of the different inpainting approaches we evaluated for this work.
All three models were trained on the Places dataset <cit.>, which includes a wide variety of landscape, urban, and architectural scenes, making it suitable for our use case.
In Table 1 we also provide quantitative metrics to assess the performance of the inpainting methods.
Since we evaluate on real-world data, no occluder-free ground truth is available.
Therefore, the mean Shannon entropy and the mean earth mover's distance (EMD) <cit.> were used for a comprehensive evaluation.
The removal of occluders effectively means removing high-frequency detail from the image and replacing it with surrounding information.
This directly corresponds to a compression of information which can be measured by a reduction in entropy and change of image statistics.
The EMD measures the cost of transforming the histogram of the inpainted patch to the source histogram, giving a measure of global change.
It is applicable here because the inpainting methods tend to be guided by the image statistics as illustrated in Figure <ref>.
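The two metrics can be computed from grayscale patch histograms roughly as follows; we use SciPy's 1-D Wasserstein distance between intensity histograms as the EMD, which is an assumption about the exact variant used, and the function names are illustrative.

import numpy as np
from scipy.stats import wasserstein_distance

def shannon_entropy(gray_patch):
    hist, _ = np.histogram(gray_patch, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    return float(-(p * np.log2(p)).sum())            # bits per pixel

def histogram_emd(gray_src, gray_inpainted):
    bins = np.arange(256)
    h_src, _ = np.histogram(gray_src, bins=256, range=(0, 256))
    h_inp, _ = np.histogram(gray_inpainted, bins=256, range=(0, 256))
    return float(wasserstein_distance(bins, bins, h_src, h_inp))

# entropy reduction: shannon_entropy(gray_src) - shannon_entropy(gray_inpainted)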
Examining images d) and h) in Figure <ref> as well as Table 1 shows that LaMa outperforms both MAT and CoModGAN for color as well as position inpainting, qualitatively and quantitatively.
It performs particularly well in maintaining consistency with the surrounding geometry and texture patterns, exemplified by the removal of the berthed vessel.
This is reflected by the reduction in entropy as well as the increase in the EMD.
Especially for roads and parking lots, we observed that LaMa also correctly inpainted small details (for example, lane markings).
As shown in images c) and g), MAT struggles to understand the context of the masked surroundings, often duplicating parts of the image when generating new content or failing to remove structures (exemplified by the vessel hull in the position map, Figure <ref> g)).
CoModGAN performed slightly better than MAT for color but not for position data (see Table 1), producing less smearing on the texture; however, it did not respect geometric constraints or color variation (see the edge between the quay wall and the water in Figure <ref>).
Overall, all models perform better on the lower-frequency position data than on the color data.
This can be seen by comparing images f) through h), in particular how the vessel is inpainted.
MAT and LaMa work on full-resolution images, while CoModGAN was limited to 512×512 pixels, requiring an upsampling in the final stage that degrades quality.
§.§ Geometric Remeshing and Projection
Figure <ref> demonstrates how the inpainted position data is used to alter the geometry of the 3D mesh.
The wireframe overlay (Figure <ref>, left) shows the original mesh structure, while the right side shows the remeshed structure after applying our method.
Our remeshing technique samples the elevation from the inpainted position map (shown for reference in the center of Figure <ref>) and effectively addresses overlapping and inconsistent geometric artifacts by merging vertices based on distance, resulting in a consistent topology.
The merge distance is a user-definable parameter and was set to 0.4 for the data shown here.
As seen in the figure, the updated geometry better reflects the curvature and contours of the scene, ensuring a more accurate 3D model.
The visible remnants of the original geometry (for example around the berthed vessel) are not artifacts but necessary to allow the remapping of the original texture coordinates to the new model.
By keeping a subset of the original triangle data, it is possible to raster the inpainted color to the original texture.
Our approach not only improves visual accuracy but also addresses compatibility of the updated 3D models with existing geospatial standards like 3D Tiles, aiming to provide seamless integration into modern geospatial applications.
§ CONCLUSION
Our proposed method for updating 3D geospatial models in the maritime domain successfully addresses the challenges of occlusion removal and texture fidelity in large-scale remote sensing data.
By employing a combination of state-of-the-art instance segmentation and generative inpainting networks, we maintain both the geometric and visual accuracy of the 3D model and allow alteration without the need for recomputation.
The DSM used for validation provided a testbed, showcasing the effectiveness of our approach across various levels of detail.
The results demonstrate that our method selectively targets dynamic parts of the scene while preserving static environments, thereby minimizing unwanted alterations.
Moreover, the remeshing technique effectively resolves inconsistencies in the geometric structure, leading to a more accurate and visually coherent 3D model.
However, there is room for improvement, particularly in handling artifacts like shadows, which all of the inpainting methods fail to remove properly (see Figure <ref>).
Compared to existing methods, our approach offers a unified framework for occlusion removal on 3D mesh data and maintains texture consistency, particularly in complex maritime scenes.
The integration of these techniques ensures that the updated models not only enhance visual fidelity but also maintain compatibility with current geospatial standards, making them suitable for use in maritime situational awareness and the display of auxiliary information.
Future work should focus on refining instance segmentation, possibly integrating shadow detection techniques, such as those proposed by Wang et al.<cit.>, or employing specialized architectures for shadow inpainting.
Additionally, training the inpainting network on aerial datasets like iSAID <cit.> and DOTAv2 <cit.> could enhance its performance in challenging scenarios, such as parking lot cells or lane markings, which would further improve the quality of the final 3D mesh.
|
http://arxiv.org/abs/2409.02682v1 | 20240904131258 | Symmetries and synchronization from whole-neural activity in {\it C. elegans} connectome: Integration of functional and structural networks | [
"Bryant Avila",
"Pedro Augusto",
"David Phillips",
"Tommaso Gili",
"Manuel Zimmer",
"Hernán A. Makse"
] | q-bio.NC | [
"q-bio.NC",
"physics.app-ph"
] |
Symmetries and synchronization from whole-neural activity in C. elegans connectome: Integration of functional and structural networks
Bryant Avila1*,
Pedro Augusto2,3*,
David Phillips 4,
Tommaso Gili 5,
Manuel Zimmer 2†,
Hernán A. Makse1,6,7†
1 Levich Institute and Physics Department, City College of New York, New York, NY 10031, USA
2 Department of Neuroscience and Developmental Biology, University of Vienna, Vienna Biocenter (VBC), Vienna, Austria
3 Vienna Biocenter PhD Program, Doctoral School of the University of Vienna and Medical University of Vienna, Vienna, Austria
4 Mechanical Engineering Department, University of New Mexico, Albuquerque, NM
87131, USA
5 Networks Unit, IMT Scuola Alti Studi Lucca, Piazza San Francesco 15, 55100, Lucca, Italy
6 Department of Radiology, Neuroradiology Service, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
7 CUNY Neuroscience, Graduate Center, City University of New
York, New York, NY 10031, USA
* Equal contribution
† Corresponding author, [email protected]
†Corresponding author, [email protected]
§ ABSTRACT
Understanding the dynamical behavior of complex systems from their
underlying network architectures is a long-standing question in
complexity theory. Therefore, many metrics have been devised to
extract network features like motifs, centrality, and modularity
measures. It has previously been proposed that network symmetries are
of particular importance since they are expected to underly the
synchronization of a system's units, which is ubiquitously observed in
nervous system activity patterns. However, perfectly symmetrical
structures are difficult to assess in noisy measurements of biological
systems, like neuronal connectomes. Here, we devise a principled
method to infer network symmetries from combined connectome and
neuronal activity data. Using nervous system-wide population activity
recordings of the C.elegans backward locomotor system, we
infer structures in the connectome called fibration symmetries, which
can explain which group of neurons synchronize their activity. Our
analysis suggests functional building blocks in the animal's motor
periphery, providing new testable hypotheses on how descending
interneuron circuits communicate with the motor periphery to control behavior. Our approach opens a new door
to exploring the structure-function relations in other complex
systems, like the nervous systems of larger animals.
§ SIGNIFICANCE STATEMENT
Complex biological networks exhibit an intricate relationship between the structural composition of their elements' interactions and the whole system's functionality. This is particularly true in neuroscience, where the multiscale organization of neurons gives rise to a multitude of hierarchical functionalities responsible for the nervous system interacting with the environment in which it is embedded. In this study, we focus on the chemical synaptic network of the backward-crawling locomotion circuitry of the worm C. elegans to correlate structural clusters of neurons with dynamic patterns of their synchronization. Our findings reveal functional building blocks in the animal's motor periphery, providing new testable hypotheses on how descending interneuron circuits communicate with the motor periphery to control behavior.
§ INTRODUCTION
Complex systems depend on an intricate interplay between the dynamical
properties of their building blocks and the network structures that
link them together. For decades, researchers have endeavored to
uncover how the topological attributes of a network could shed light
on its dynamical behavior. While several methods have been proposed to
elucidate this interplay
<cit.>,
the role of the network's symmetries has garnered considerable
attention in recent years
<cit.>. Identifying
and understanding such symmetries has potentially profound
implications: network theory implies that certain symmetrical
structures termed fibration symmetries in a graph permit
synchronizations
<cit.>,
a ubiquitously observed phenomenon in brain networks
<cit.>. Thus, identifying these symmetries might be
crucial for decoding some of the structure-function relationships in a
neuronal network.
Fibration symmetry refers to a specific type of symmetry where nodes
can be grouped into equivalence classes of balanced colorings based on
their input patterns
<cit.>. Nodes in the same
class receive identical 'input trees' (hierarchical mappings of input connectivity patterns), are proposed to share the same dynamics, and thus can
synchronize their behavior. These equivalent nodes are said to belong
to the same 'fiber' and are also 'balanced colorings' of the graph
<cit.>. This is formalized through admissible
ordinary differential equations (ODEs), which describe the evolution
of the system's state <cit.>. Theory ensures that
balanced colored nodes with isomorphic input trees following a set of
admissible ODEs will evolve in synchrony
<cit.>
forming a cluster of synchronization
<cit.>. This synchronization
can be crucial in various biological systems ranging from gene
regulatory networks to brain networks involved in language processing
<cit.>. Thus, identifying
fibration symmetries provide a prediction in which subsets of nodes
potentially exhibit synchronized dynamics.
However, there are technical and conceptual challenges to this
theory. Biological networks like connectomes are inherently
incomplete, and measuring them is limited by annotation errors,
missing links, and noise, yet identifying fiber symmetries is sensitive
to small variations in a connectivity matrix. More fundamentally,
assuming mathematically perfect symmetries and equivalences of nodes
in a biological network is unrealistic. Here, we propose that network
symmetries represent constraints rather than perfect blueprints for
their implementations in individual animals. We aim to address these
challenges by studying the interplay of symmetry and synchronization
in the locomotor system of the nematode C.elegans.
The nematode worm C.elegans <cit.> exhibits a small nervous system
of just 302 neurons that develop from a stereotypic cell lineage into
118 anatomically and genetically defined cell classes, most of which
are comprised of a bilaterally symmetrical pair of neurons (e.g., AVAL
and AVAR) <cit.>. Its connectome has been fully
reconstructed with a synaptic resolution
<cit.>, and it is
tractable for large-scale single-cell resolution neuronal calcium
imaging <cit.>. Both, in
combination, offer unique opportunities to study structure-function
relationships in the nervous system <cit.>. The worm connectome shares many
features with connectivity data from larger nervous systems, e.g.,
rich-club architecture and modularity
<cit.>.
Moreover,
neuronal recordings in C. elegans revealed nervous
system-wide neuronal activity dynamics that exhibit synchronization
patterns among well-defined ensembles of
neurons<cit.>. These
dynamics correspond to motor commands for a set of actions like
forward-crawling, backward-crawling, or
turning<cit.>.
These dynamics are generated in the absence of any known acute
time-varying sensory stimuli
<cit.> and can be
promoted by arousing conditions, such as environmental oxygen or blue
light <cit.>. Moreover, these
dynamics can be observed in immobilized animals; even under such
conditions, they extend to the motor periphery, incorporating motor
neurons, under unconstrained conditions mediating movement execution
<cit.>. These features point toward intrinsic
mechanisms that drive and maintain synchronous activity states. In
accordance, our previous work indicated that rich club architecture
and input similarities are crucial architectural features in the
connectome that permits synchronized neuronal
dynamics<cit.>.
In the present study, we focus on the chemical synaptic network of the
backward-crawling locomotion circuitry of C. elegans to circumvent some of the technical and conceptual constraints posed
by theory and computational limits. This sub-network of just 21 individual
neurons includes three bilateral pairs of interneurons, AVA, AVE, and AVD,
which are situated in the head of the animal and send descending
chemical synapses to two classes of motor neurons, termed DA1-9 and
VA1-12. These motorneurons form neuromuscular junctions with the
dorsal and ventral body wall muscles, respectively, and are required
to generate the backward crawling gait <cit.>. All
chemical synapses in this sub-network are cholinergic and, with few
exceptions, neurons within the same class share similar morphology and
gene expression patterns <cit.>. Hence, assuming equivalence among nodes within
the same class could be a reasonable simplification.
By leveraging advanced calcium imaging
techniques, we first study neuronal dynamics, aiming to identify
patterns of synchronization within its backward locomotion circuitry. We
then use the observed synchronization dynamics to infer underlying symmetries in the
connectome.
Given the potential variability in connectomes across individual worms,
expecting the universal connectome model to predict the exact neural
synchronization across neurons is unrealistic. This variability
underscores the necessity of adopting tailored approaches to
understand the dynamic behavior from the structure of the
connectome. We approach this problem by reconstructing the underlying
connectome of C. elegans guided by the synchronization dynamics
obtained experimentally. This is done by "repairing" the connectivity
structure of a typical connectome <cit.> by
solving an appropriately modeled integer linear program to
find the minimal number of repairs of the available connectome needed
to ensure the observed synchronization dynamics. We aim for a
fibration-symmetric connectome that reflects the observed
synchronization. This method provides a means to idealize connectomes conceptually. According to inter-animal connectome differences reported experimentally in
<cit.>,
the threshold for connectome modifications can be taken not to exceed approximately 50%. However, for our final repair solution we adopt the more stringent threshold of 25%, given by the animal-to-animal differences found in <cit.>.
To ascertain the robustness and validity of our solutions, we engage
in a rigorous testing procedure. We aim to determine the optimality of
our solution by reshuffling the neuronal labels in the connectome and
revisiting the repair process. We accept our method's solution only
if the post-relabeling solutions consistently underperform our primary
findings (in over 95% of instances).
Our study bridges the gap between structural and functional neuronal
data, leveraging the power of symmetries in graph theory to shed light
on synchronization dynamics within C. elegans and the
structure of the connectome underlying this synchronization.
§ RESULTS
In this section, we delineate the pipeline employed throughout our
study in a condensed fashion and details are elaborated in the
Methods section <ref>. Figure <ref> provides an
outline of the method to reconstruct the connectome:
* The
process begins with the experimental setup to record neuronal activity (Fig. <ref>A): a microfluidic device immobilizes C. elegans, enabling nervous system-wide Ca++ imaging with a confocal fluorescence microscope setup.
* Time series data
for activity traces of multiple neurons are recorded simultaneously, as
shown in Fig. <ref>B. This
procedure is performed on multiple worms under similar
conditions.
* To obtain functional pairwise synchrony matrices, various metrics that capture synchrony are applied to the
recorded time series data (Fig. <ref>C).
* The synchrony matrices shown in Fig. <ref>D are
averaged across worms to account for trial-to-trial and/or inter-individual variability.
* A standard thresholding process is then applied to obtain the
functional network (Fig. <ref>E)
<cit.>. Starting from a
disconnected graph, we add links between nodes in decreasing order
of weight of the averaged synchrony matrix. This is done for each
averaged matrix.
* Finally, a consensus matrix is calculated
<cit.> across different methods of synchrony
(Fig. <ref>F).
* A hierarchical clustering algorithm is implemented to find a
partition of synchrony clusters (Fig. <ref>G). Each
neuron is assigned a color according to its cluster of
synchrony. These colors are used as inputs to the repair algorithm
in the next step.
* We formulate and solve a mixed integer linear program (MILP)
that finds the minimum number of edges to add or remove from the
raw connectome (taken from
<cit.>) to produce an ideal
fibration-symmetric network that reproduces the coloring in the
consensus partition obtained in the previous step
(Fig. <ref>H).
* The network produced by the solution to the MILP is a fibration symmetric connectome with cluster synchronization that reproduces the experimental data. This network can be collapsed into a smaller representation, the base
graph, where nodes belonging to the same fiber have isomorphic input
trees (Fig. <ref>I). The base graph suggests functional building blocks in the connectome.
* For each consensus
partition, a permutation p-value test is performed by permuting
node labels and repairing the structural network 1,000 times. The partition with the lowest p-value is
chosen as the optimal solution (Fig. <ref>J).
§.§ Neuronal activity recordings
To simultaneously record calcium activity from interneurons
in the head and motor neurons along the entire ventral cord, we
modify a recently reported whole nervous system imaging pipeline
<cit.>. Briefly, worms expressing the calcium probe
NLS-GCaMP6f in a pan-neuronal fashion, together with NeuroPal
multi-color cell identification labels <cit.>, are
immobilized in a microfluidic device that positions them in the field
of view of a spinning disk confocal microscope. After fast volumetric
GCaMP6f imaging, multi-color stacks are taken for cell class
identification (see Methods section
<ref> for details).
§.§ Whole nervous system recording and characterization of motor neuron activity
We generated eight datasets from different well-fed young adult
worms. Imaging covered almost the entire body spanning from the head
ganglia, the complete ventral cord, and the tail ganglia with
single-cell resolution (Fig. <ref>A-B). Animals were
recorded for 10 minutes at approximately three volumes per
second. Figure <ref>C shows a multi-neuron time series
with discernible calcium activity patterns of the neurons selected for
this study. These include neurons AVAL/R and AVEL/R, which belong to
the major descending interneurons conveying motor commands for
backward crawling, as well as the downstream backward crawling motor
neurons of the DA and VA classes. Consistent with previous studies
<cit.>, AVA and AVE
activity exhibited discrete transitions in their activity patterns,
characterized as low, rise, high, and fall states. We previously
validated that these states correspond to behavioral states in freely
crawling animals, i.e., low corresponds to forward crawling, rise and
high corresponds to backward crawling, and fall corresponds to turning
<cit.>.
Here, we observe that these features are mirrored in the motor
periphery (Fig. <ref>C). The mean pairwise correlations
among motorneurons and interneurons were high, indicating the strong
coupling between them (Fig. <ref>D). During free backward
crawling, animals generate a posterior-to-anterior traveling body wave
of alternating dorsal-ventral body oscillations. We did not observe
any obvious oscillations or activity patterns alternating between the
DA and VA motorneurons, which innervate the dorsal and ventral body
wall muscles, respectively. Neither did we observe any delay or
sequential activation patterns between motor neurons that innervate
more posterior to anterior muscles (which is indicated by the index
number in their name, n=1 most anterior, n=12 most
posterior). However, all motor neurons follow the rise and fall states
of the AVA descending command interneuron
(Fig. <ref>E). In conclusion, under our experimental
immobilization conditions, A-class motorneurons do not generate
gait-related patterns in their Ca++ activity profiles. However, they appear to respond to descending
motor commands from their presynaptic interneurons reliably. This
feature and the absence of movement, hence the lack of proprioceptive
inputs, makes the present setup particularly suitable for studying
neuronal dynamics dependent on internal neuronal circuit interactions
(see also discussion).
§.§ Synchronization measures
Assessing the synchronization of two or more signals recorded
simultaneously involves various methods tailored to the specific
characteristics and similarities of interest. This results in a
diverse array of techniques aiming to capture synchronization through
different approaches <cit.>. These techniques
are generally classified into four primary groups based on their
focus: time domain versus frequency domain and methods that account
for directional dependencies versus those that do not (see Methods
section <ref> and
Table
<ref>).
Correlation is a robust
measurement for coherence in neuronal data obtained via calcium
imaging
<cit.>
(Methods section <ref>).
In addition to correlation, to quantify the amount of synchronicity
between two signals, we implement the Level of Synchronicity (LoS)
measure introduced in <cit.>.
This metric determines how closely two
signals are to perfect synchronicity, meaning that
two signals have the same value at the same time, forming a cluster of synchrony:
V_i(t) = V_j(t) ∀ i,j∈ C_k.
Here, V_i is the dynamic of a neuron, which, in the present case, is
associated with the neuron's calcium activity. C_k is one of the
non-overlapping sets of neurons partitioning the neural system into
Cluster of Synchronicity (CS) <cit.>.
To quantitatively capture this synchronicity, the LoS evaluates the
synchronization of two signals over time by considering their instantaneous differences
and scaling them through a parameter σ defined as:
LoS_ij = (1/T) ∑_t=1^T exp( -[V_i(t) - V_j(t)]^2 / (2σ^2) ),
where T is the total number of time steps over which LoS is measured between the signals of neurons i and j.
The parameter σ serves as a scale to define a benchmark for
closeness between two points in time.
This parameter permits the user to deal with natural variations in
biological systems. No two neurons are perfect copies of each other;
therefore, if identical signals stimulate two similar neurons at rest,
their outputs (membrane potential) will naturally vary in intensity
and phase by small amounts. With this concept in mind, we implement
multiple versions of the LoS, each with a different value for
σ. When the value of σ is zero, each signal will only be
synchronous with its exact copy; as the value of σ increments,
it reaches the state that all signals are synchronous with all
others. This indicates that an ideal value lies between zero and an
upper limit (the largest difference between signals at some time t
should suffice). After many trials, we settle for values ranging from
0.01 up to 0.20
to explore different levels of synchronicity. We proceed with two
classes of synchronization metrics, correlations and LoS, each having
variations through different parameters or relational measurements,
making for a total of 44 metrics.
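For concreteness, the LoS matrix for a set of simultaneously recorded traces can be computed in a few lines; the sketch below assumes the traces are stored as a (neurons × time) array, and the array layout and function name are our own illustrative choices rather than part of the published pipeline.

import numpy as np

def los_matrix(traces, sigma):
    """traces: array of shape (n_neurons, T); returns the pairwise LoS matrix."""
    V = np.asarray(traces, dtype=float)
    diff = V[:, None, :] - V[None, :, :]          # instantaneous differences V_i(t) - V_j(t)
    return np.exp(-diff**2 / (2.0 * sigma**2)).mean(axis=-1)

# example sweep over the sigma range quoted above (0.01 ... 0.20)
# sigmas = np.round(np.arange(0.01, 0.205, 0.005), 3)
# los_per_sigma = {s: los_matrix(calcium_traces, s) for s in sigmas}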
Matrices of synchronization and correlation are computed for each of
the N=8 worms under study, capturing the synchronous activities
observed within each individual as depicted in
Fig. <ref>C. To obtain a representative overview of
synchronicity across the entire set of worms, these individual
matrices are averaged per type, i.e. for each value of σ
(Fig. <ref>D). However, compiling these averaged matrices
requires care, particularly in instances where the activity of individual neurons is missing for specific worms. Therefore, for a particular metric type, we sum the matrices over worms and divide each element (neuron-pair functional measurement) of the summed matrix by the number of times the corresponding neuron pair appears together across the whole cohort of worms.
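As an illustration of this bookkeeping, per-worm matrices can carry NaN entries for neuron pairs that were not recorded in a given worm, so that the division is by each pair's co-occurrence count; the sketch below is a minimal implementation of that idea with illustrative names.

import numpy as np

def average_across_worms(per_worm_matrices):
    """per_worm_matrices: list of (N, N) arrays with NaN where a pair was not recorded."""
    stack = np.stack(per_worm_matrices)              # (n_worms, N, N)
    counts = (~np.isnan(stack)).sum(axis=0)          # co-occurrence count per neuron pair
    with np.errstate(invalid="ignore", divide="ignore"):
        avg = np.nansum(stack, axis=0) / counts      # divide by times the pair appears together
    return avg, counts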
§.§ Identifying cluster synchronization from the functional network
Each element in the averaged matrix quantifies the synchronous relationship of a neuron pair and is used to construct functional networks composed of all the neurons of the backward locomotion gait.
We follow standard thresholding procedures
<cit.> to build
the functional network. Through this method, we purge the functional
data to only contain the strongest links leading to a functional
network, as seen in Fig. <ref>E. Using this network, we
identify neuron cliques that are synchronized via community and
cluster detection algorithms.
A group of neurons that are more synchronous with each other than with neurons belonging to different groups is considered
equivalent to a CS of neurons. We apply cluster and community detection algorithms to extract these functionally synchronous clusters of neurons from the functional network using two methods:
* Clique synchronization <cit.>: A node is assigned to a synchronous clique if the average weight of its functional-network edges inside the clique is larger than any of its edges outside the clique (see Methods section <ref>).
* Louvain community detection <cit.>: It determines the number of clusters and the number of nodes in each cluster by optimizing the modularity function given by Eq. (<ref>) in the Methods section <ref>, which depends only on the clustering (see the sketch after this list).
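As a sketch of how the second method could be applied, the snippet below builds a thresholded functional network with networkx and runs its Louvain implementation; the fixed-threshold construction shown here is a simplification of the procedure described above (which adds links in decreasing order of weight), and all names are illustrative.

import networkx as nx

def louvain_clusters(functional_matrix, neuron_names, threshold):
    """Build a thresholded functional network and detect communities with Louvain."""
    G = nx.Graph()
    G.add_nodes_from(neuron_names)
    n = len(neuron_names)
    for i in range(n):
        for j in range(i + 1, n):
            w = functional_matrix[i, j]
            if w >= threshold:                       # keep only the strongest links
                G.add_edge(neuron_names[i], neuron_names[j], weight=float(w))
    return nx.community.louvain_communities(G, weight="weight", seed=0)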
The result of applying these methods to a LoS correlation matrix
is seen in Figs. <ref>A-B, which results in a functional network with the node partition as seen in Fig. <ref>C.
The nodes with the same colors belong to the same cluster synchronization, as can be seen since they share thicker edges than
edges between nodes of different colors.
The clusters obtained using the LoS measure at
different parametric values produce various motor-neuron partitionings. We only keep those that are unique among
the 39 results for the range of σ between 0.01 and 0.20. This
is done for each of the two cluster/community detection methods. The
number of unique partitionings for each method can be seen in the table
within Fig. <ref>E.
§.§ Consensus synchronous clusters
We construct 44 metrics to obtain synchronization information: 39
LoS matrices and five types of correlation measures (Pearson,
Spearman, and Kendall coefficient, distance correlation and
covariance). We combine these metrics with the two functional
clustering methods mentioned in the previous section
<ref> (clique synchronization and Louvain) to
create 88 possible partitionings. Then, a consensus is created
following the methods developed in <cit.> to leverage
the information obtained from all the partitionings. To achieve this,
each partitioning, which consists of n clusters of synchronous
neurons, is used to create a co-occurrence matrix, which is simply a
matrix with n diagonal blocks, one for each synchronous group, with
value 1, and with all other values outside these blocks set to zero
(see Fig. <ref>D). These matrices are summed up and
normalized. The resulting consensus matrix is depicted in
Fig. <ref>G. Through this, the sets of neurons
that appear in the same synchronous group more frequently than with
other neurons appear with higher values. This consensus avoids putting
each partitioning in competition with the other and instead compiles
them into a globally agreed-upon result <cit.>.
The consensus matrix, called X (Fig. <ref>G),
has optimal leaf ordering of its rows and columns. This ordering is produced by
the hierarchical clustering Ward metric (also known as Ward's
minimum variance method), applied to the dissimilarity matrix 1-X
(as seen in Fig. <ref>F). This metric aims to
minimize the total within-cluster variance. The goal is to choose the
successive clustering steps to minimize the increase in the total
within-cluster variance. This metric is particularly effective for
creating clusters that are compact and have a roughly similar number
of elements <cit.>, which was our main reason for
selecting it.
From this consensus, we can obtain various partitionings depending on
where the dendrogram is sliced. When the dendrogram is sliced at a
value of 1.00, we obtain three major groups, as observed in the thick
block structure in Fig. <ref>G. Each block can
be further divided into smaller diagonal blocks if we reduce the
threshold. For instance, 7 clusters are observed for a cutoff at 0.55
represented by the smaller blocks in Fig. <ref>G.
In the next sections, we will see that this partitioning is the optimal
solution obtained by the reconstruction algorithm, requiring the fewest
modifications to convert the baseline Varshney connectome into a
fibration symmetric solution (7 clusters for a cutoff at 0.55 in Table
<ref>).
All these partitions are the input to the mixed integer linear
programming algorithm that we designed to reconstruct the connectome
with a minimal number of modifications, as explained next.
§.§.§ Balanced coloring partitions, fibrations and
cluster synchronization
To develop the optimization algorithm to reconstruct the
connectome to satisfy the synchronization found in the previous
section, we need to introduce the concept of 'balanced coloring' and
the fibration of the graph. Figure <ref>A shows an
example of a graph with a balanced coloring.
Let 𝒞 = C_1, ··· ,C_K be a partition of
the nodes of a network G=(V,E) with V vertices and E edges. We
identify each cluster C_k with a different color and K is the
total number of colors. A 'balanced coloring' is a coloring of the
graph such that each node with color k in cluster C_k is connected
(by edges of the same type) to the same number of nodes with color j
in cluster C_j, for 1≤ k, j ≤ K. That is, nodes of the same
color receive the same colors from their neighbors
(Fig. <ref>A, center).
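For concreteness, whether a given coloring is balanced can be checked directly from the in-adjacency matrix by comparing, within each color class, the vector that counts how many inputs each node receives from every color; the sketch below assumes a single edge type and uses illustrative names.

import numpy as np

def is_balanced(adj, colors):
    """adj[i, j] = number of edges from node j to node i; colors: per-node color labels."""
    colors = np.asarray(colors)
    labels = np.unique(colors)
    # input profile: for every node, how many inputs it receives from each color class
    profile = np.stack([adj[:, colors == c].sum(axis=1) for c in labels], axis=1)
    # balanced iff all nodes of a given color share the same input profile
    return all(len(np.unique(profile[colors == c], axis=0)) == 1 for c in labels)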
Translated into the terminology of dynamical systems, a system of
differential equations for the state variables of each node V_i(t)
can be interpreted as a 'message passing' process of passing colored
messages through the edges of the graph. Since two nodes i and j
with the same color received the same colors (messages) from their
neighbors, we can think, intuitively, that they will synchronize their
activity V_i(t) = V_i(t), forming a synchronous cluster. This
intuition is made mathematically rigorous <cit.>
through the theory of fibrations
<cit.>.
Traditionally, a way to formalize the balanced coloring partitions is
through the automorphisms of the graph forming its symmetry group
<cit.>. In this case, the
orbits of the automorphisms of the graph are the balanced
colors. However, not all balanced colorings are orbits. Many
biological networks contain no automorphisms, yet, they display a
non-trivial balanced coloring partition with many colors
<cit.>. This is exemplified in
Fig. <ref>A. The graph has no automorphisms (except for
the trivial identity), but has a balanced coloring that reduces the
graph to a base with just two nodes. This balanced coloring is only
captured by the fibration.
The graph fibration formalism introduced in <cit.>
is a more general formalism that captures all balanced coloring
partitions of the graph. Graph fibrations are a particular case of
fibrations between categories introduced previously by Grothendieck
<cit.> and are formal generalizations of
graph automorphisms.
A graph fibration (shown on the right of Fig. <ref>A) is
a morphism of the graph that collapses every cluster of synchronized
balanced colors (called 'fibers') into a single representative node in
the base graph B while conserving the 'lifting property' defined in
the Methods section <ref>. This transformation leaves
invariant the dynamics in the graph and captures the maximal
symmetries of the network. It is then called a symmetry fibration
<cit.>. Here, we propose that fibers represent potential functional modules or building blocks in a network of neurons.
§.§ Fibration symmetry driven repair algorithm
The clusters of synchronous neurons found in section
<ref> are used to repair the raw chemical
synapses connectome of the backward locomotion gait of
C.elegans provided by Varshney et al.
<cit.> (plotted in
Fig. <ref>B). We develop a mixed integer linear program to modify this network by adding or removing the fewest possible edges under an objective function
<cit.> to match the synchronization obtained
experimentally. We call this algorithm the Symmetry-Driven Repair
Algorithm or SymRep for short.
The goal is to obtain a network with balanced coloring partitioning as
provided by one of the synchronization clusters from
Fig. <ref> (details appear in the Methods section
<ref>). The most important feature of this algorithm in the
context of the present paper is the objective function used to
determine optimal solutions. Many combinations of adding and/or
removing edges can satisfy turning the connectome into a network with
color partitionings as provided by a functional clustering while
respecting the constraints provided by Eq. (<ref>) and
Eq. (<ref>). Of these many solutions, the one that
minimizes the objective function below provides an optimal solution
(Methods section <ref>):
f_α,β(r, a) = α ∑_(i,j)∈E r_ij + β ∑_(i,j)∈E^C a_ij
where E is the set of edges present in the original connectome and
E^C is the set of non-existing edges in the original connectome that
are permissible to be added. The terms r_ij and a_ij are
binary variables, where the first term, associated with E, takes on
a value of 1 if an edge has been removed. The second term, associated
with E^C, takes on a value of 1 if an edge has been added. If
necessary, one can constrain the SymRep algorithm to prohibit removing (adding) connections between neuron pairs that studies show are always connected (respectively, never connected). This can be done by
simply not including them in E or E^C. The parameters α and
β are penalty weight constants for each of these variables,
respectively. The relative weight between these parameters determines
if the SymRep prefers to repair a network through the addition of
edges rather than removal. In our study, we explore the solutions
found by varying the relative differences between α and
β. We keep the value of α at one and increment the value
of β in steps of one, starting from one and culminating at 10.
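To make the objective concrete, the sketch below declares the binary removal/addition variables and the weighted objective with the PuLP modeling library; the balanced-coloring constraints referenced above depend on the prescribed coloring and are only indicated by a comment, so this is a scaffold rather than the full SymRep formulation.

import pulp

def symrep_skeleton(E, E_C, alpha=1.0, beta=1.0):
    """E: existing edges that may be removed; E_C: absent edges that may be added."""
    prob = pulp.LpProblem("SymRep", pulp.LpMinimize)
    r = {e: pulp.LpVariable(f"r_{e[0]}_{e[1]}", cat="Binary") for e in E}
    a = {e: pulp.LpVariable(f"a_{e[0]}_{e[1]}", cat="Binary") for e in E_C}
    # objective: weighted count of removed plus added edges
    prob += alpha * pulp.lpSum(r.values()) + beta * pulp.lpSum(a.values())
    # balanced-coloring constraints tying r and a to the prescribed clusters go here (omitted)
    return prob, r, a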
We work with the raw chemical synapses connectome of
C.elegans for the backward locomotion gait provided in
<cit.>. This is composed of the connectivity
between motorneurons (VAs and DAs) and 3 main interneuron pairs
(AVAL/R, AVEL/R, AVDL/R). We specifically use two versions of this
connectome. One is that reported by Varshney et al.
<cit.> shown in
Fig. <ref>B.
We call it the Uncollapsed Varshney connectome. The second, shown in
Fig. <ref>C, is a collapsed version of this
network in which we assume that motorneurons can not distinguish
between signals received from the left or right version of an
interneuron pair (i.e., AVAL or AVAR). This is done by substituting an
interneuron pair for one node (i.e., AVAL and AVAR are collapsed into
AVA) and substituting the two edges a neuron may receive from an
interneuron pair for one edge with a weight equal to 1 if both edges
are present, one if only one edge is present, and 0 if not
present. This version is called the Collapsed Varshney connectome. As
a final note, we assume each interneuron pair belongs to its unique
synchronization cluster.
§.§ Synchronization-driven reconstruction of the connectome
With the 39 unique clusterings found in the previous section, the 19
different combinations of weight penalties and the two different raw
connectomes (original and collapsed versions) used, we run SymRep to
produce 1,482 solutions. All of these solutions satisfy the condition
of balanced coloring. Of the 741 solutions for the collapsed Varshney
connectome, 730 (98.52%) satisfy the stricter condition of minimal
balanced coloring, and 737 (99.46%) fulfill this condition for the
not collapsed version. Of these, we present the most optimal solution
for each type of cluster or community detection algorithm in
combination with the functional measurement group (LoS and
correlations).
We decide on an optimal solution based on the number of modifications
it enacts on the raw connectome, with the conditions that this number is the lowest and that, for a solution to be accepted, it lies below the 50% natural animal-to-animal variability <cit.>. We pick those that modify the connectome by the least amount
of addition and removal of edges. We consider two cases to measure the
number of modifications. The first one is where adding or removing has
a cost of 1, where the final sum of modifications is divided by the
total number of edges in the raw connectome. The second case is in
which the severity of
removing an edge is tied to the weight of the
edge in the collapsed connectome before binarization while adding an edge only has
a cost of one. In the latter, the total sum of modifications
(additions + removals) is divided by the total number of edges in the
raw connectome. Metadata about the solutions can be observed in Table
<ref> with significant p-values for each method.
§.§ Statistical significance
To test the significance of these functional partitionings and
eventually choose the most significant partition, we proceed to use a
permutation test <cit.>. Specifically, for each of
the partitionings considered in Figure
<ref> we randomly permute the labels
(names) of the neurons, keeping the number of clusters and number of
nodes distributed among these the same. This leads to a shuffling of
synchronous groups keeping their size (amount of neurons in each) the
same. We produce slightly over 1,000 permuted versions of each of
these partitionings and inspect how many lead to networks with these
partitionings as fibers with equal or less number of edge
modifications. This quantity is then divided by 1,000 or by the number
of networks that lead to a minimally balanced coloring solution with
an equal number of fibers as in the partitionings (∼1,000).
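The permutation test can be summarized by the following sketch, where repair_cost is a placeholder for running SymRep on a given coloring and returning the number of edge modifications (the function name and interface are hypothetical).

import numpy as np

def permutation_p_value(colors, observed_cost, repair_cost, n_perm=1000, seed=0):
    """Shuffle node labels among clusters and count permutations that repair as cheaply."""
    rng = np.random.default_rng(seed)
    colors = np.asarray(colors)
    hits = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(colors)        # preserves cluster sizes, permutes membership
        if repair_cost(shuffled) <= observed_cost:
            hits += 1
    return hits / n_perm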
§.§ The optimal solution
The consensus matrix allows the detection of fibers at different
scales (number of clusters) through the information contained in its
hierarchical clustering, as seen in
Fig. <ref>F. Repairing the Varshney chemical synaptic backward-gait connectome with these fibers produces some outstanding results for both the collapsed and non-collapsed versions. We highlight
the results for the collapsed connectome at the bottom portion of
Figure <ref>.
When it comes to the statistical significance of clusters obtained
through a system of ever-increasing node partitionings to repair a
network, as in our case, there is some expected behavior. For
instance, the trivial partitioning of N neurons into N
non-overlapping clusters will give a p-value of 1 as all permutations
of a trivial partitioning will produce the same partitioning,
resulting in all of them modifying the network by the same amount. The
same can be said for a trivial partitioning with 1 cluster as all
permutations result in the same partitioning. Therefore, as the number
of clusters changes between 1 and N, the p-value of statistical significance is guaranteed to attain a (global) minimum between
1 and N clusters <cit.>. For a low-noise system
in which node dynamics are faithful to the network structure with
1≤ M≤ N fiber partitionings, it is suggested that the p-value improves (drops below 1) as the number of non-overlapping clusters is reduced from N, and likewise as it is increased above one, with the minimum p-value located at M clusters. Deviations from this behavior depend on how far removed the network under repair is from the original fiber-symmetric version that produced the node dynamics mentioned above. These predictions for the behavior of the p-value are observed in
the repairing of the backward locomotion network in
Fig. <ref>. As the number of clusters
approaches the trivial N-partitioning, the number of repairs needed
decreases because the network we are trying to repair is indeed
originally trivially colored.
Applying all these considerations, we find that the optimal partition and reconstructed connectome are obtained for 7 clusters at a consensus clustering cutoff of 0.55, repaired from the collapsed
Varshney connectome. As shown in
Fig. <ref> and Table
<ref> this solution has the lowest
p-value=0.033, being the only one of the consensus solutions below the
standard 0.05 p-value cutoff for statistical acceptance. Furthermore,
the repair percentage is 21.88%, below the acceptance limit of 25%
variability from animal to animal. Therefore, we choose this
connectome, shown in Fig. <ref>A, as the one
representing the data in the closest possible way.
§ DISCUSSION
Understanding how the structure of the connectome influences the
function of the network is a long-standing problem in system
neuroscience <cit.>. However, addressing this
question is challenging due to the low number of samples in available connectome datasets and the potential
variability across individuals. We have addressed the problem of incomplete/missing data in
biological networks by first developing a consensus method to obtain
average clusters of neuron synchrony across worms from whole-body
calcium recording of neural activity. This information drives a
reconstruction optimization algorithm of the connectome consistent with the observed synchronization. When the modifications to
the connectome are below the experimentally found variation from
animal to animal and offer a
minimal p-value significance, the obtained connectome models can be thought
of as idealized networks consistent with the experimentally measured dynamics of the neuronal circuits. Such a network bridges the gap between structure
and synchronization and can be used to further assess the
functionality of the connectome via perturbations to its structure.
Our research has substantiated the pivotal role of fibration
symmetries in the connectome structure in orchestrating neuronal
synchronization in C. elegans. The refined understanding that
structural connectomics underpins significant aspects of neural
functionality furthers our grasp on the physical bases of neural
synchronization. This synchronization is crucial, as it forms the
foundation for coordinated motor outputs and behavioral responses in
organisms. The use of advanced calcium imaging and graph theory to
correlate these structural motifs with dynamic patterns of neural
activity offers a compelling model for predicting neuronal behavior
based on underlying anatomical data.
The methodological innovations in our study, particularly the use of
integer linear programming (SymRep) to adjust connectomics data based
on functional imaging, demonstrate a novel computational approach to
neuroscientific research. This technique refines existing neural
network models and provides a quantifiable method for aligning
theoretical predictions with empirical observations.
Previous research in C. elegans has suggested local oscillators at the motor
neuron level for rhythm generation in the VNC for reverse
locomotion <cit.>; these patterns have been observed when motor neurons were experimentally deprived of their descending interneuron inputs. Our experimental conditions, Ca++ imaging in immobilized worms with interneurons AVA and AVE generating descending motor commands, did not reveal any oscillations or activation patterns consistent with a rhythmic backward gait pattern i.e., alternations between VA- and DA-motorneurons, or a traveling wave from the posterior to anterior position. Instead, we observed that A-motor neurons faithfully mirror the descending motor command activity in sync. It was shown that C. elegans can adapt their gait flexibly to the physical properties of the environment via proprioception, as shown for forward locomotion <cit.>, and that A-motorneurons have indeed proprioceptive properties themselves <cit.>. We suggest that proprioception might be an essential driver to unleash the oscillatory or rhythmic properties observed in ref. <cit.>. In our study, the lack of proprioceptive inputs and absence of complex internal motorneuron dynamics was an advantage, since it enabled us to study the activity of all A-motorneurons solely in response to their synaptic partners, providing a filter on activity that should be mostly explainable by the connectome architecture alone.
Connectomes between adult worms are remarkably variable, in particular for connections between neurons with few synaptic contacts <cit.> as well as in the posterior ventral cord (VNC) <cit.>. Extrapolating from these data, connections between descending interneurons and motor neurons in the entire VNC might vary by up to 50%, i.e., about half of the connections found might not be reproducible across individuals. Our connectome repair algorithm provides a family of statistically significant solutions that require connection additions/removals well below this range (Table 1, Fig. 6). All of these solutions provide circuit models with fiber-symmetric properties consistent with the synchronization patterns found in A-motorneurons. It is unlikely that individual animals perfectly match these idealized symmetric solutions; thus, neuronal networks must be endowed with additional properties that ensure stable synchronous dynamics. However, here we propose that fibration symmetries provide constraints from which individuals should not deviate too much, in order to ensure the robustness of synchronous dynamics. Here we show that symmetrization procedures can, however, be useful for identifying potentially functional circuit modules in the resulting base graph.
Our analyses led to a base graph of the backward locomotion circuit that suggests functional modules characterized by differential innervation by the descending interneurons AVA, AVE, and AVD (Fig. <ref>E), which intriguingly appear as a crude topographic map (Fig. <ref>A). Our analyses thereby suggest that these modules could operate as functional units in differentially transmitting the motor commands to different body parts, perhaps in a manner supporting effective backward locomotion. AVA, AVE, and AVD differ to some degree in their connectivity to other sensory circuits <cit.>; we thus speculate that the modules might enable differential control over how to execute the reversal motor command. Note that AVD neurons were not active in our recordings. Future studies that also activate AVD, and that selectively inhibit or interfere with AVA, AVE, and AVD, will test the relevance of our findings for the control of A-motorneuron activity and backward crawling behavior.
The present study focused on a smaller neuronal circuit where some of the assumptions that our theory includes might be reasonable simplifications. Anatomically and molecularly, left and right members of interneuron classes AVA, AVE and AVD are nearly indistinguishable <cit.>, justifying bilaterally collapsing the connectome (Fig. <ref> and Table
<ref>). Moreover, most chemical synapses in the circuit are excitatory and cholinergic <cit.>, and VA and DA motorneurons have similar molecular footprints <cit.>, justifying the simplification of node and link equivalency. Future work on larger circuits in the worm and other model organisms should develop our theoretical approach further to realistically allow for more diversity in neuronal and synaptic properties.
Indeed, the implications of our findings are not restricted to
C.elegans. The principles of neuronal synchronization,
facilitated by connectomic structures observed in this study, likely
have analogs across bilaterian species, including humans
<cit.>. Investigating these principles in more
complex nervous systems could reveal new insights into how brain-wide
synchronization patterns contribute to complex behaviors and cognitive
functions. Furthermore, exploring the impact of structural variations
on the synchronization in disease models could open new therapeutic
avenues, particularly for neurological disorders where dysregulation
of synchronized activity is evident.
§ METHODS
In this section, we first explain the experimental setup and the
methods to extract the functional synchronization clusters of neurons
obtained using various combinations of functional measurements and
different methods of extracting clusters and communities. Following
this, we introduce the graph fibration formalism to describe balanced
coloring partitions of graphs and cluster synchronization. We conclude
with the repair algorithm for optimal fibration symmetric network
solutions (SymRep) applied to the synchronization partitionings.
§.§ Neuronal calcium imaging
Whole-nervous system Ca2+ imaging experiments were performed on
transgenic young adult C.elegans hermaphrodites (age was
determined by the number of a maximum of 5 eggs) expressing genetically-encoded
calcium indicator NLS-GCaMP6f in a pan-neuronal fashion and localized
to the cell nuclei <cit.>
together with NeuroPal cell class identification labels
<cit.> (ZIM2001: otIs669; MzmIs52 ; lite-1 (ce314))
Mounting and microfluidic setup: Animals were imaged in
two-layer PDMS microfluidic devices to control the oxygen environment
and with curved channels <cit.> to immobilize and
laterally align animals, enabling reliable positioning of worms across
recordings and fitting them into the field of view of the imaging
system. Such a layout allowed us to cover the head ganglia, ventral
cord, and tail ganglia of the animals. The worm channel of the
microfluidic device was connected to a syringe that contains NGM
buffer with 1 mM tetramisole to paralyze worms. All components were
connected using Tygon tubing (0.02 in ID, 0.06 in OD; Norton) using
23G Luer-stub adapters (Intramedic). Constant gas flow of 21% O2 and
79% N2 (50ml/min) was delivered using a gas mixer connected to mass
flow controllers (Vögtling Instruments), controlled by custom lab scripts in Micromanager. Adult worms were picked on food-free NGM agar plates in a
drop of NGM with 1mM tetramisole and aspirated into the worm channel.
Animals were first habituated for 10min and afterward imaged at 21%
O2 for 10 min.
Microscope setup: High-resolution data of neuronal activity in
the head and tail ganglia were acquired with an inverted spinning disk
confocal microscope (Zeiss Axio Observer.Z1 with attached Yokogawa
CSU-X1) using an sCMOS camera (pco.edge 4.2 with Camera Link HS connection) and a 40x 1.2 LD LCI Plan-Apochromat
water-immersion objective (Zeiss). Moreover, we used 0.5x demagnification relay optics at the port of the spinning disk unit to increase intensity/pixel and the total area that can be imaged on the camera sensor both by the factor of 2, allowing imaging of a larger field of view encompassing the full worm nervous system. We use a custom-made GUI in
Micromanager to control the different elements of the
microscope. Exposure time was 20 ms, with 2μm steps between Z-planes
operated by a Piezo stage (P-736 PInano, Physik Instrumente GmbH); the
total plane number of z-planes varied between 16-20, leading to a
volume acquisition rate of up to 3 Hz.
Neural trace extraction: As described in detail in our previous
work in Kato et al. <cit.>, neuronal activity
traces were obtained by tracking the intensity maxima in each volume
over time and calculating the single-cell fluorescence intensities. F0
was calculated for every neuron as the mean fluorescence intensity
across each trial. After background subtraction, DF/ F0 was calculated
for each neuron, following bleach correction by linear detrending and
exponential fitting. See reference <cit.> for details.
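As a schematic illustration of this trace-processing step, the sketch below background-subtracts, bleach-corrects, and normalizes a neurons x time fluorescence array; the background value, the exponential model, and the use of SciPy are illustrative assumptions rather than the exact pipeline of Kato et al.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import detrend

def df_over_f0(F, background=0.0):
    """Background-subtract, bleach-correct, and normalize fluorescence traces.

    F : (n_neurons, n_timepoints) array of raw single-cell fluorescence.
    Returns DF/F0 with F0 the per-neuron mean over the trial.
    """
    t = np.arange(F.shape[1], dtype=float)
    corrected = np.empty_like(F, dtype=float)
    for i, trace in enumerate(F - background):
        trace = detrend(trace)                      # linear detrending
        trace = trace - trace.min() + 1.0           # keep values positive for the fit
        expo = lambda t, a, b, c: a * np.exp(-b * t) + c
        try:                                        # exponential bleach correction
            p, _ = curve_fit(expo, t, trace, p0=(trace[0], 1e-3, trace.mean()), maxfev=10000)
            trace = trace - expo(t, *p) + p[2]
        except RuntimeError:
            pass                                    # keep the detrended trace if the fit fails
        corrected[i] = trace
    F0 = corrected.mean(axis=1, keepdims=True)
    return (corrected - F0) / F0

# Toy usage: two noisy traces with a slow bleaching decay.
rng = np.random.default_rng(0)
t = np.arange(1000)
raw = 100 * np.exp(-t / 2000.0) + rng.normal(0, 1, (2, t.size)) + 10
print(df_over_f0(raw, background=10.0).shape)
```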
Neuronal identification: Identification of each neuron was done
based on a neuron dictionary termed NeuroPAL <cit.>
(Fig. <ref>A). In each recording, we aimed to detect at least 25 neurons crucial for our analysis: the two reversal interneurons AVA and AVE, and all the reversal motor neurons DA01-DA09 and VA01-VA12. Some neurons in the head and tail
tip were lost in individual recordings when the animal, depending on its size, exceeded the imaging
region. Other reasons for missing neurons might be a failure in
segmentation and tracking due to low signal/noise ratios based on low
expression levels and/or low calcium levels. Neuronal traces were
curated after each recording, and obviously erroneous traces were
removed.
Average correlation matrices were generated by calculating the mean pairwise correlations between the activity time series of identified active neurons (up to n=8, depending on the number of pairwise observations, but at least n=3). The identified active neuron numbers are as follows: 72 for recording 1, 76 for recording 2, 98 for recording 3, 79 for recordings 4 and 5, 69 for recording 6, 63 for recording 7, and 62 for recording 8.
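A minimal sketch of this averaging step is given below, assuming each recording is supplied as a dictionary mapping neuron names to activity traces; the neuron names, data layout, and use of NumPy are illustrative assumptions, and only pairs observed in at least min_obs recordings are retained, mirroring the n >= 3 requirement.

```python
import numpy as np

def average_correlation_matrix(recordings, min_obs=3):
    """Mean pairwise correlation across recordings.

    recordings : list of dicts mapping neuron name -> 1D activity trace.
    Returns (names, mean_corr) where entries with fewer than min_obs
    pairwise observations are set to NaN.
    """
    names = sorted(set().union(*[set(r) for r in recordings]))
    index = {name: i for i, name in enumerate(names)}
    n = len(names)
    corr_sum = np.zeros((n, n))
    counts = np.zeros((n, n))

    for rec in recordings:
        present = [name for name in names if name in rec]
        data = np.vstack([rec[name] for name in present])
        C = np.corrcoef(data)                      # pairwise correlations in this recording
        idx = np.array([index[name] for name in present])
        corr_sum[np.ix_(idx, idx)] += C
        counts[np.ix_(idx, idx)] += 1

    mean_corr = np.where(counts >= min_obs, corr_sum / np.maximum(counts, 1), np.nan)
    return names, mean_corr

# Toy usage: three recordings sharing three identified neurons (names are illustrative).
rng = np.random.default_rng(2)
recs = [{"AVAL": rng.normal(size=200), "AVAR": rng.normal(size=200), "RIML": rng.normal(size=200)}
        for _ in range(3)]
names, M = average_correlation_matrix(recs, min_obs=3)
print(names, M.shape)
```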
NeuroPAL labeling: The identities of neurons were determined via
NeuroPAL using the following procedure. We obtained image stacks from
each recorded animal after GCaMP6f imaging. For each plane, we
acquired spectrally isolated images sequentially of CyOFP1,
mNeptune2.5, and mTagBFP2. We excited CyOFP1 using the 488nm laser at
5% intensity with a 585/40 bandpass emission filter. Afterward,
mNeptune2.5 and TagRFP-T were recorded using a 561nm laser at 20% intensity (percentage of max) with a 655LP filter and a 570LP filter, respectively. mTagBFP2 was isolated using a 405nm laser at 30% intensity with a 447/60 bandpass filter. These imaging
conditions allow for the color-specific labeling of
C. elegans neurons. Neuronal identities were manually annotated
according to the NeuroPAL guidelines <cit.>.
§.§ The zoo of synchronization measures
Determining the synchronization between two or more simultaneously
recorded signals is a task with multiple approaches, depending on which
characteristics of the signals one is interested in measuring and
comparing. Naturally, this leads to a zoo of methods that try to
capture the synchronization between signals through different
features <cit.>. These methods tend to fall into four main camps of
measurement, defined by the intersection of the time domain versus the
frequency domain and of methods that consider directional dependencies
versus those that do not; some of these examples can be observed in
Table <ref>.
Depending on the data and the aspects of the signals one is interested
in, one may use a few of the examples in Table <ref>, or even other
methods not mentioned in the table. Selecting the appropriate metrics
can sometimes be a difficult problem, and one must compare them
against some reference, producing competition between the selected
metrics. Below, we indicate which metrics we have decided to use.
Further down in the paper, it is shown that it is best not to put
these metrics in competition against one another to find the single
best metric that captures synchronization. Instead, a better approach
is to use the information they provide in a consensus manner to
decipher the synchronization in our system <cit.>.
§.§ Clique synchronization and Louvain method
Following section <ref>, the Clique Synchronization
method developed in <cit.> accepts a clique if all of its
nodes satisfy the conditions outlined in section
<ref>. We define a cluster of N neurons to
be synchronous if it forms a fully connected clique consisting of N nodes
that meets the condition:
∑_i<j^1,Nσ(x_i (t),x_j(t)) ≥N(N-1)/2σ(x_k (t),x_k'(t)) ∀ k= 1, …, N and k' ∈ℳ_k,
where ℳ_k denotes the set of nearest neighbors of node k
(for k=1, …, N) that are not part of the clique in
question. Here, σ(x_i (t),x_j(t)) represents the value for
synchronization, measured by the LoS metric or any correlation
metric used in this paper.
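A minimal sketch of how this clique criterion can be checked numerically is given below, assuming a precomputed pairwise synchronization matrix (e.g., from the LoS metric or a correlation measure) and an adjacency matrix defining nearest neighbours; the variable names and the use of NumPy are illustrative assumptions rather than the implementation of the original method.

```python
import numpy as np

def is_synchronous_clique(sigma, adjacency, clique):
    """Check the clique-synchronization condition for a candidate clique.

    sigma     : (N, N) symmetric matrix of pairwise synchronization values.
    adjacency : (N, N) binary matrix defining nearest neighbours.
    clique    : list of node indices forming a fully connected clique.
    """
    clique = list(clique)
    n = len(clique)
    # Total pairwise synchronization inside the clique (sum over i < j).
    inside = sum(sigma[i, j] for a, i in enumerate(clique) for j in clique[a + 1:])
    # The condition must hold against every external nearest neighbour k' of
    # every clique member k: inside >= N(N-1)/2 * sigma(k, k').
    for k in clique:
        neighbours = np.flatnonzero(adjacency[k])
        external = [kp for kp in neighbours if kp not in clique]
        for kp in external:
            if inside < 0.5 * n * (n - 1) * sigma[k, kp]:
                return False
    return True

# Toy usage: 5 nodes, with a strongly synchronized triangle {0, 1, 2}.
rng = np.random.default_rng(0)
sigma = rng.uniform(0.0, 0.2, size=(5, 5))
sigma[np.ix_([0, 1, 2], [0, 1, 2])] = 0.9
sigma = (sigma + sigma.T) / 2
adjacency = np.ones((5, 5), dtype=int) - np.eye(5, dtype=int)
print(is_synchronous_clique(sigma, adjacency, [0, 1, 2]))
```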
The Louvain method <cit.> is a popular algorithm for
module and community detection. At every execution it first assigns every
neuron to its own community and then, as per the method used here, proceeds to
randomly group neurons into bigger clusters, only accepting mergers if
the modularity value given by Eq. (<ref>) increases, and
halting once this value can no longer increase
<cit.>. Because of the stochastic nature of this process,
each execution may lead to a slightly different result; the Louvain
method is therefore executed 1,000 times for a given network,
retaining the partitioning with the highest measured modularity. The
modularity measure is defined as:
Q=1/2m∑_i,j[A_ij-k_ik_j/2m]δ(c_i,c_j),
where A_ij is the adjacency matrix of the network, m is the
number of edges in the network, k_i is the number of in-degree
edges attached to node i, c_i is the numeric label given to the
synchronization cluster of node i, and δ is the Kronecker delta <cit.>.
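As an illustration of this procedure, the sketch below uses the NetworkX implementation of the Louvain algorithm, repeating it over many seeds and keeping the partition with the highest modularity; the correlation-thresholding step used to build the graph is a placeholder assumption rather than the exact construction used in this work.

```python
import numpy as np
import networkx as nx

def best_louvain_partition(corr, threshold=0.5, n_runs=1000):
    """Build a graph from a correlation matrix and run Louvain n_runs times,
    returning the partition with the highest modularity."""
    n = corr.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] >= threshold:
                G.add_edge(i, j, weight=corr[i, j])

    best_Q, best_partition = -np.inf, None
    for seed in range(n_runs):
        communities = nx.community.louvain_communities(G, weight="weight", seed=seed)
        Q = nx.community.modularity(G, communities, weight="weight")
        if Q > best_Q:
            best_Q, best_partition = Q, communities
    return best_Q, best_partition

# Toy usage with a block-structured correlation matrix (two planted clusters).
rng = np.random.default_rng(1)
corr = rng.uniform(0.0, 0.3, size=(12, 12))
corr[:6, :6] = corr[6:, 6:] = 0.8
np.fill_diagonal(corr, 1.0)
Q, partition = best_louvain_partition(corr, threshold=0.5, n_runs=50)
print(f"best modularity: {Q:.3f}, clusters: {partition}")
```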
§.§ Fibration symmetry of graphs and cluster synchronization
In the study of complex networks, understanding the interplay between
structure and dynamics plays a crucial role. Specifically, the concept
of synchronization, where nodes in a network adjust their behavior in
accordance with each other, is fundamental in various disciplines,
including physics, biology, and engineering. This section delves into
the intricate relationship between graph topology, characterized by
fibration symmetry, and the emergence of synchronization through the
lens of admissible ordinary differential equations (ODEs)
<cit.>.
A graph G=(V_G,E_G) consists of a set of vertices V_G and a set of
edges E_G, where each edge connecting a node u to a node v is
represented as (u,v) ≡ e^u → v_G. The set of edges
from node u to node v is written as E_G(u,v), that is, the set
of edges e ∈ E_G that have u as source node, given by the source
function s(e)=u, and v as target node, given by the target
function t(e)=v. The set of edges in graph G that have node v as
their target is denoted by E_G(-,v) <cit.>.
A graph G can be mapped into another graph W through the structure
preserving process of a morphism Ψ:G ⟶ W using a
pair of functions Ψ_V:V_G⟶ V_W and
Ψ_E:E_G⟶ E_W. These functions map vertices to
vertices and edges to edges, respectively, from graph G to W,
while being commutative with the source and target mapping
functions. In mathematical terms, these functions must obey
<cit.>:
s_W∘Ψ_E =
Ψ_V∘ s_G ,
and
t_W∘Ψ_E = Ψ_V∘
t_G .
If the mapping functions Ψ_V and Ψ_E are surjective, that is
to say, they map multiple elements (vertices or edges) in their domain
(graph G) to one representative element in their range (graph W),
then morphism Ψ is called an epimorphism.
A (surjective) graph fibration is a particular type of epimorphic
morphism that maps elements belonging to the same category into one
representative element while preserving the lifting property, which
guarantees dynamical invariance. Fibrations were originally defined
between categories by Grothendieck and others
<cit.>. Boldi and Vigna
<cit.> worked out the definition of fibrations
between graphs, which is the main theoretical framework of the present
work. Morone et al. <cit.> then showed the
application of fibration to biological networks to understand their
building blocks <cit.>, and synchronization
<cit.>.
A graph fibration is a morphism from
graph G to the base B:
φ: G ⟶ B
that satisfies the lifting property <cit.>. This
means that for every edge e = (u,v) ∈ E_G there is an edge e' =
(φ(u), φ(v)) ∈ E_B, and, conversely, for every vertex v ∈ V_G
and every edge (w, φ(v)) ∈ E_B there exists a unique edge
e^v ∈ E_G(-,v) that φ maps onto it.
The importance of the lifting property is that the dynamics between
the graphs G and B is preserved. This means that if we add a set
of admissible ODEs to the graph with dynamical variables x_i(t),
then the dynamical evolution of the system in graph G is the same as
the dynamical evolution of the graph B.
Dyn(G) = Dyn(B) .
This property defines the clusters of synchrony as follows.
When a function φ: G → B acts as a fibration, we
refer to G as the total graph and B as the base graph of this
mapping function. In this context, G is said to be fibred over
B. The set of vertices in G that φ sends to a specific
vertex x in B is called the fiber over x, denoted by
φ^-1(x). Fibers are also balanced color clusters.
Vertices belonging to the same fiber have isomorphic input trees as
defined in <cit.>. The input tree for a node v,
denoted T(v), is a rooted tree centered at node v that captures
all the paths in the graph leading to v. The first layer of the
tree is the node's in-neighborhood, called its input set. Each
subsequent layer is then iteratively defined as the input set's input
set. A visual example of an input tree can be seen in
Fig. <ref>I, where all blue nodes in the total space
graph have the same input tree structure.
In terms of dynamics, we can think of a message passing process where
the information travels through the input tree and arrives at the
rooted node v. If another node u has an isomorphic input tree with
v, meaning that T(v) ∼ T(u), then these two nodes receive the
same messages through the network, although from different
pathways. Therefore u and v synchronize their dynamics x_v(t) =
x_u(t). This statement has been put in rigorous mathematical terms by
DeVille and Lerman <cit.>.
A fiber is called trivial if it contains exactly one vertex, that is
if |φ^-1(x)| = 1 and called nontrivial if
|φ^-1(x)|>1. Throughout this paper, the function |·|
denotes the number of elements in the mathematical structure it
encloses unless specified otherwise.
Vertices belonging to the same fiber, say u and v, have an
equivalence relation ≃ called an in-isomorphism, which can be
seen as a particular case of a discrete homeomorphism
<cit.>, where a discrete bijective function
(one-to-one association of elements in two domains) ψ: G(-,u)
→ G(-,v) is applied to the discrete topological space of
graphs.
Additionally, the prefix in- in in-isomorphism denotes the added
condition that the bijective function obeys s(e) ≃ s(ψ(e))
for all e ∈ G(-,u). In short, an in-isomorphism captures the notion that
the vertices in a graph G can be converted into one another while
conserving the adjacency connectivity from the same (or equivalent)
source vertices.
A fiber in a graph G is associated with the cluster containing the
vertices of the fiber, u, v, w, ... ∈ C_i, where n_i represents the
number of vertices in cluster C_i, with its vertices denoted as
∪_l=1^n_i v^i_l = C_i. The union of these clusters
contains the entirety of the vertices, C_1 ∪ C_2 ∪ ⋯ ∪ C_K = V_G,
where K is the number of fibers in graph G. The overlap of
different clusters is always empty, C_i ∩ C_j = ∅ for i ≠ j,
∀ i,j ∈ K_G, where K_G = {x ∈ ℤ | 1 ≤ x ≤ K},
such that ∑_c=1^K n_c = |V_G|.
These fibers form the equitable partition, or color-balanced
partitioning, of the graph, as the number of in-edges that the
in-isomorphic vertices in a cluster C_i receive from another
cluster C_j depends only on the choice of the clusters. Thus, both
fibers and balanced colorings describe, in two different ways, the
same synchronous dynamics of clusters of nodes.
We can represent this as:
C_i ↢ C_j ≡ {|∪_l=1^n_j E_G(v_l^j, v^i)| = |∪_l=1^n_j E_G(v_l^j, v^k)|} ∀ v^i, v^k ∈ C_i
where the number of edges |·| that every vertex in C_i receives
from a cluster C_j must be the same <cit.>.
The particular type of fibration we work with in this paper is the
minimal fibration, which leads to a minimal equitable
(color-balanced) graph partitioning. The word minimal enforces that
K, the number of color-balanced classes of the graph, is the lowest
it can be. This leads to the situation in which no two clusters
receive the same in-degree edges relative to any third cluster,
expressed by the constraint
∑_k=1^K |(C_i ↢ C_k) - (C_j ↢ C_k)| > 0 ∀ i,j ∈ K_G, i ≠ j.
The surjective minimal graph fibration is called a symmetry fibration
<cit.> since it collapses the graph into its
minimal base capturing the minimal number of fibers (or balanced
colors) of the graph, and thus collecting the maximal symmetries of
the graph.
An implementation of the algorithm to find minimal balanced colorings
in a graph in the form of an R package is available at
<https://github.com/makselab/fibrationSymmetries> and
<https://osf.io/z793h/>.
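The repository above contains the full R implementation; as a compact illustration of the underlying idea, the sketch below computes a minimal balanced coloring of a small directed graph by iteratively refining node colors according to the multiset of colors in each node's input set (a standard color-refinement scheme). It is a didactic stand-in under those assumptions, not the package's algorithm.

```python
from collections import Counter

def minimal_balanced_coloring(nodes, edges):
    """Iteratively refine colors by the multiset of in-neighbour colors.

    nodes : iterable of hashable node labels.
    edges : iterable of (source, target) directed edges.
    Returns a dict node -> color index (the balanced-coloring classes).
    """
    in_neighbours = {v: [] for v in nodes}
    for u, v in edges:
        in_neighbours[v].append(u)

    color = {v: 0 for v in nodes}            # start with a single color class
    while True:
        # Signature of a node: its current color plus the multiset of
        # colors it receives from its in-neighbours.
        signatures = {
            v: (color[v], tuple(sorted(Counter(color[u] for u in in_neighbours[v]).items())))
            for v in nodes
        }
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_color = {v: relabel[signatures[v]] for v in nodes}
        # Each step only refines the partition, so an unchanged class count means convergence.
        if len(set(new_color.values())) == len(set(color.values())):
            return new_color
        color = new_color

# Toy usage: nodes 2 and 3 each receive one edge from node 1 and end up in the same fiber.
nodes = [1, 2, 3, 4]
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
print(minimal_balanced_coloring(nodes, edges))
```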
§.§ Fiber symmetries constrain admissible ODEs into synchronization
The fibration symmetry of a graph imposes a structural constraint that
can facilitate synchronization. Specifically, if the graph has a
fibration symmetry, the nodes in each fiber can be expected to
synchronize with each other <cit.> due to the
consistent interaction patterns enforced by the symmetry. This can be
understood through the synchronization of a system of ordinary
differential equations (ODEs) 'admissible' to a graph
<cit.>.
Admissible means that the structure of a graph G imposes the coupling
terms between the |V_G| ODEs, where each equation
characterizes the state of a vertex in the graph. Consider a dynamical
system on a graph where each vertex v_i has a state x_i that
evolves according to an ODE:
dx_i/dt = F(x_i,t) + ∑_j ∈∂_i A_ijH(x_i,x_j,t) .
Here, F represents the intrinsic dynamics of each vertex, H
embodies the interaction between vertices, and A_ij are the
elements of the adjacency matrix A of the graph, where A_ij=1
indicates the presence of an edge and A_ij=0 indicates the absence
of an edge from vertex j to i. ∂_i denotes the set of neighbors
of vertex i. As an example, the Kuramoto model, taken together with its
master stability function, can be used to study synchronization in a
network
<cit.>.
Cluster synchronization as defined by Pecora et al.
<cit.> in this context refers to the situation in
which the dynamical states of a group of vertices for a given fiber
converge to a common trajectory, i.e., x_i(t) →
c_ℓ(t) ∀ v^ℓ_i∈ C_ℓ, given the
appropriate initial conditions and after any transients have died out.
The symmetries in the graph fibration allow the system of ODEs
admissible to the graph G to be reduced from |V_G| equations to
K equations, one for each fiber
cluster. Equation (<ref>) can be reduced to a system of equations
for each of the fibers:
dc_i/dt = F(c_i,t) + ∑_j ∈ V(c_i) Q_ijH(c_i,c_j,t).
Here, Q is the adjacency matrix of the base of the graph obtained by
the fibration.
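To make the link between fibers and cluster synchronization concrete, the sketch below integrates a Kuramoto-type system admissible to the small directed graph used in the coloring sketch above (identical intrinsic dynamics on every node); the graph, coupling strength, and use of SciPy are illustrative assumptions. Within a fiber the trajectories converge to a common solution, as expected from the fibration symmetry; equality across different fibers is not guaranteed in general, although it can still occur in simple examples such as this one.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Directed graph from the coloring sketch above: 1 -> {2, 3} -> 4.
# Adjacency A[i, j] = 1 if node i receives an edge from node j.
A = np.zeros((4, 4))
A[1, 0] = A[2, 0] = 1.0   # nodes 2 and 3 (indices 1, 2) receive from node 1
A[3, 1] = A[3, 2] = 1.0   # node 4 (index 3) receives from nodes 2 and 3

omega = 1.0   # identical intrinsic frequency, i.e. the same F for every node
K = 2.0       # coupling strength

def kuramoto(t, theta):
    # dtheta_i/dt = omega + K * sum_j A_ij sin(theta_j - theta_i)
    return omega + K * np.sum(A * np.sin(theta[None, :] - theta[:, None]), axis=1)

theta0 = np.array([0.0, 1.5, -2.0, 0.7])   # arbitrary, distinct initial phases
sol = solve_ivp(kuramoto, (0.0, 50.0), theta0, rtol=1e-9, atol=1e-9)

theta_end = sol.y[:, -1]
# Nodes 2 and 3 share a fiber: their phase difference decays to zero.
print("phase difference within the fiber {2,3}:", theta_end[1] - theta_end[2])
print("phase difference between fibers (2 vs 4):", theta_end[1] - theta_end[3])
```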
The fibration formalism extends the automorphism symmetry groups of
the graph <cit.>. An automorphism is a permutation
symmetry of the nodes of the graph that leaves invariant the adjacency
of nodes. That is, the nodes permuted by the automorphism have the
same in- and out-neighbors before and after the application of the
automorphism. The analog of fibers in fibrations are the orbits of
the automorphisms. The orbit of a node is the set of nodes obtained by the
application of all the automorphisms of the graph.
Both orbits and fibers are balanced colorings of the graph. All orbits
are fibers, but not all fibers are orbits. Thus, the fibers of the
fibration capture more balanced colorings than the orbits of
automorphisms. In short, fibration symmetries are rigorous extensions
of automorphisms, and all automorphisms are fibration symmetries but
the opposite is not always true.
This situation is exemplified in Fig. <ref>A. This graph
has no automorphism. That is, there is no permutation of any nodes
that leaves invariant the adjacency matrix. Therefore it has a trivial
orbital (and balanced coloring) partition where each node is its own
orbit or its own color. However, there is another minimal balanced
coloring partition that is shown in the two colors in the center of
Fig. <ref>A. This is the balanced coloring partition
captured by the fibration, which is then applied to reduce the
network to the base on the left of Fig. <ref>A.
A final fundamental difference between automorphism and fibration
symmetry is that the former is a global symmetry of the network. That
is, it is a permutation that constrains the global adjacency of nodes
to be the same. However, the fibration is a local symmetry leaving
invariant the input trees of the nodes, which are local views of the
network of the nodes collapsed by the fibration. This fundamental
property makes the fibration applicable to a larger set of complex
networks than the more restricted automorphism.
The interplay between fibration symmetry and synchronization in
networks via admissible ODEs provides profound insights into the
dynamics of complex systems. The structural constraints imposed by
fibration symmetry can dictate the synchronization patterns, where the
quotient, or base, graph provides a simplified version of the larger
network, leading to predictable and potentially controllable behavior.
This understanding has significant implications for the
design and analysis of complex systems in various domains, ranging
from biological networks (as in this paper) to engineered distributed
systems
<cit.>.
§.§ Symmetry Driven Repair Algorithm
In this section, we present a set of equations and inequalities based
on the concepts discussed previously to construct our integer linear
programming model for our problem, considering a specific graph, node
clustering, parameters, and constraints
<cit.>. We name this as the
Symmetry-Driven Repair Algorithm or SymRep for short.
For this model, we consider a directed graph G=(V,E), where we denote n=|V| and m=|E| as the number of nodes and directed edges, respectively. We also define
E^C = { (i,j) : i,j ∈ V, ij ∉ E}
as the set of node pairs between which no directed edge exists in G: these pairs indicate potential edges that could be added to the graph G. We define 𝒞 as a coloring of G, representing the sets dividing V, i.e., the various clusters of nodes.
The three types of decision variables in the model are:
r_ij for (i,j) ∈ E, a binary indicator for when edge (i,j) is removed;
a_ij for (i,j) ∈ E^C, a binary indicator for when a new edge (i,j) is added; and
s_ijR for i ∈ P, j ∈ Q, with R, P, Q ∈ 𝒞, a binary indicator that i and j have an imbalance with respect to the color set R.
The objective function aims to minimize the weighted sum of edges
removed and added, defined as
f_α,β(r,a) = α∑_ij ∈ E r_ij + β∑_ij ∈ E^C a_ij.
Constants α, β are parameters that adjust the importance
between edge removal and addition in the objective. The main
constraint ensures that 𝒞 represents a balanced coloring of the
graph G, as defined by
∑_ip ∈ E: i ∈ S (1 - r_ip) + ∑_ip ∈ E^C:i ∈ S a_ip =
∑_iq ∈ E: i ∈ S (1 - r_iq) + ∑_iq ∈ E^C: i ∈ S a_iq;
p,q ∈ T; S,T ∈ 𝒞.
This aims to ensure the nodes in the same division receive equal
influence from all other divisions; this is the SymRep equivalent of
Eq. (<ref>). An optional constraint ensures that the
in-degree of all nodes is at least one.
∑_ip ∈ E (1 - r_ip) + ∑_ip ∈ E^C a_ip≥ 1, p ∈ V.
This helps avoid scenarios where nodes in a fiber are disconnected
from the rest of the network. The following constraints are necessary
but not sufficient for minimally balanced colorings
Eq. (<ref>).
∑_ip ∈ E: i ∈ R (1 - r_ip) + ∑_ip ∈ E^C: i ∈ R a_ip - (∑_iq ∈ E: i ∈ R (1 - r_iq) + ∑_iq ∈ E^C: i ∈ R a_iq) ≥ s_pqR + ns_qpR; p ∈ S; q ∈ T; R,S,T ∈ 𝒞,

∑_iq ∈ E: i ∈ R (1 - r_iq) + ∑_iq ∈ E^C: i ∈ R a_iq - (∑_ip ∈ E: i ∈ R (1 - r_ip) + ∑_ip ∈ E^C: i ∈ R a_ip) ≥ s_qpR + ns_pqR; p ∈ S; q ∈ T; R,S,T ∈ 𝒞,

s_pqR + s_qpR ≤ 1; p ∈ S; q ∈ T; R,S,T ∈ 𝒞,

∑_R ∈ 𝒞 (s_pqR + s_qpR) ≥ 1; p ∈ S; q ∈ T; S,T ∈ 𝒞,

∑_O ∈ 𝒞-(S∪T) (s_pqO + s_qpO) + ∑_I ∈ (S∪T) |s_pqI - s_qpI| ≥ 1; p ∈ S; q ∈ T; S,T ∈ 𝒞.
These inequalities define the limits to ensure that two different
divisions do not have the same influence, termed the unbalancing
constraint. The entire model is defined as follows.
min{ f_α,β : (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), r_ij, a_kℓ, s_pqR ∈ {0,1}, ij ∈ E, kℓ ∈ E^C, p ∈ P, q ∈ Q, P ≠ Q, R ∈ 𝒞 }.
where the reference to Eq. (<ref>) within the model above means
that only one of its sub-equations is selected.
Eq. (<ref>) restricts feasible solutions, and potentially
better minimally balanced solutions, relative to
Eq. (<ref>), but is stronger with respect to the
minimal balanced coloring property. In our implementation, we always
first try to find a solution to our networks using
Eq. (<ref>); if this fails to produce a
minimally colored solution, then Eq. (<ref>)
is used. We solve the integer linear programs with the solver
Gurobi. The implementation is available at
<https://github.com/makselab/PseudoBalancedColoring> and
<https://osf.io/26u3g>.
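As a highly simplified sketch of the edge-repair idea, the snippet below encodes only the objective and the balanced-coloring constraint for a fixed coloring of a toy graph, using the open-source PuLP interface (with the bundled CBC solver) in place of Gurobi; the s_pqR variables and the unbalancing constraints of the full model are omitted, so this is an illustration of the balance-repair step under those assumptions rather than the SymRep implementation.

```python
from itertools import product
import pulp

# Toy directed graph and a fixed target coloring with two classes.
nodes = [0, 1, 2, 3]
edges = [(0, 2), (0, 3), (1, 2)]                 # existing directed edges (u, v)
coloring = {0: "A", 1: "A", 2: "B", 3: "B"}      # classes of the partition
classes = {c: [v for v in nodes if coloring[v] == c] for c in set(coloring.values())}
non_edges = [(u, v) for u, v in product(nodes, nodes) if u != v and (u, v) not in edges]

alpha, beta = 1.0, 1.0
prob = pulp.LpProblem("symmetry_driven_repair", pulp.LpMinimize)
r = pulp.LpVariable.dicts("remove", edges, cat="Binary")
a = pulp.LpVariable.dicts("add", non_edges, cat="Binary")

# Objective: weighted number of removed plus added edges.
prob += alpha * pulp.lpSum(r[e] for e in edges) + beta * pulp.lpSum(a[e] for e in non_edges)

def in_edges_from(S, p):
    """Expression counting surviving plus added edges that node p receives from class S."""
    kept = pulp.lpSum(1 - r[(i, p)] for i in S if (i, p) in r)
    added = pulp.lpSum(a[(i, p)] for i in S if (i, p) in a)
    return kept + added

# Balanced-coloring constraint: every pair p, q in the same class T must
# receive the same number of edges from every class S.
for S in classes.values():
    for T in classes.values():
        for p, q in product(T, T):
            if p < q:
                prob += in_edges_from(S, p) == in_edges_from(S, q)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
removed = [e for e in edges if pulp.value(r[e]) > 0.5]
added = [e for e in non_edges if pulp.value(a[e]) > 0.5]
print("removed:", removed, "added:", added)
```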
§ ACKNOWLEDGMENTS
We acknowledge discussions with Ian Leifer and Luis Alvarez. We thank Ev Yemini for the NeuroPAL animals and advice.
§ FUNDING
Funding was provided by NIBIB and NIMH through the NIH BRAIN Initiative Grant R01 EB028157 and ONR Grant N00014-22-1-2835. The work in this manuscript was partially supported by the Simons Foundation (#543069) (M.Z.) and the International Research Scholar Program by the Wellcome Trust and Howard Hughes Medical Institute (#208565/A/17/Z) (M.Z.).
§ AUTHORS CONTRIBUTIONS
B.A. and T.G. performed the analysis of the data. P.A. performed Ca++ imaging experiments, neuron identifications, and the analyses shown in Fig. 2. D.P. performed the algorithmic developments. M.Z. led the experimental work and provided advice for the theory part. H.A.M. led the modeling and theoretical work. B.A., P.A., D.P., T.G., M.Z., and H.A.M. wrote the manuscript. M.Z. and H.A.M. led the study.
§ DATA AND CODE AVAILABILITY
All data and code are available at <https://github.com/MakseLab>,
<https://osf.io> and <http://kcorebrain.com>. In particular,
an implementation of the algorithm to find minimal balanced colorings
in a graph in the form of an R package is available at
<https://github.com/makselab/fibrationSymmetries> and
<https://osf.io/z793h/>. An implementation of the MILP solver is
available at <https://github.com/makselab/PseudoBalancedColoring>
and <https://osf.io/26u3g>.
§ SUPPORTING INFORMATION
|
http://arxiv.org/abs/2409.03184v1 | 20240905022054 | Regimes of Steady-State Turbulence in a Quantum Fluid | [ "Tommy Z. Fischer", "Ashton S. Bradley" ] | cond-mat.quant-gas | [ "cond-mat.quant-gas", "physics.atom-ph", "quant-ph" ] |
Department of Physics, University of Otago, Dunedin, New Zealand
Dodd-Walls Centre for Photonic and Quantum Technologies
Department of Physics, University of Otago, Dunedin, New Zealand
Dodd-Walls Centre for Photonic and Quantum Technologies
§ ABSTRACT
We simulate the Gross-Pitaevskii equation to model the development of turbulence in a quantum fluid confined by a cuboid box potential, and forced by shaking along one axis. We observe the development of isotropic turbulence from anisotropic forcing for a broad range of forcing amplitudes, and characterise the states through their Fourier spectra, vortex distributions, and spatial correlations. For weak forcing the steady-state wave-action spectrum exhibits a k^-3.5 scaling over wavenumber k; further decomposition uncovers the same power law in both compressible kinetic energy and quantum pressure, while the bulk superfluid remains phase coherent and free from extended vortices. As the forcing energy exceeds the chemical potential, extended vortices develop in the bulk, disrupting the k^-3.5 scaling. The spectrum then transitions to a k^-7/3 regime for compressible kinetic energy only, associated with dense vortex turbulence, and phase coherence limited to the healing length. The strong forcing regime is consistent with an inverse cascade of compressible energy driven by small-scale vortex annihilation.
Regimes of Steady-State Turbulence in a Quantum Fluid
Ashton S. Bradley
September 9, 2024
=====================================================
§ INTRODUCTION
Turbulence is a ubiquitous phenomenon in nature, observed in a wide range of systems from the atmosphere to the oceans <cit.>, and in the dynamics of quantum fluids <cit.>.
Yet the turbulent flows understood in classical fluids <cit.> assume a different character in the quantum realm. The pursuit of understanding quantum turbulence in highly controllable Bose-Einstein condensates (BEC) has shed light on the unique quantum properties of superfluids <cit.>, and uncovered universal aspects of fluid dynamics <cit.>. Quantum fluids exhibit superfluidity and support quantum vortices <cit.>, and compressible Bogoliubov phonons <cit.>, while constituent atoms can also possess internal spin degrees of freedom <cit.>, or interact via long-range dipolar potentials <cit.>. These additional degrees of freedom offer more channels for energy transport, and can cause dramatic departure from classical behavior <cit.>.
A cascade of energy between scales can emerge when energy is injected into a fluid at a particular length scale and dissipated at another scale. This results in the formation of ever smaller structures down to the dissipation scale (direct cascade) <cit.>, or energy accumulating into coherent structures at the system scale (inverse cascade) <cit.>. If the interaction is local in k-space, power laws can appear in the energy distribution associated with the energy cascade, providing clear signatures of particular dynamics, which may otherwise prove opaque. The most well-known example of emergent power-law behavior in fluid turbulence is the Kolmogorov-Obhukov -5/3 law <cit.>. This states that at high Reynolds number in classical incompressible fluids, the distribution of kinetic energy in Fourier-space is given by
E(k) = C ε^2/3 k^-5/3,
where C is a universal dimensionless constant and ε is the rate of energy flux per unit mass. Recent experiments <cit.> and numerical investigations <cit.> of turbulent BECs have reported power-laws related to predictions from weak-wave turbulence (WWT) theory <cit.>, observed in the wave-action spectrum accessible in experiments.
Wave-turbulence involves nonlinearly interacting waves, rather than vortices or eddies in typical hydrodynamic turbulence. In particular, ideal WWT theory predicts a direct energy cascade with wave-action spectrum power-law n(k) ∝ k^-3 <cit.>. Interestingly, most of the publications report a slightly steeper scaling law than k^-3 <cit.>, conjectured to be due to the residual role of vortices (which are neglected in WWT theory), nonperturbative nonlinear interactions between waves, or quantum pressure <cit.>. Recently it has been argued that the direct energy-cascade is marginally non-local <cit.> and that a logarithmic correction can account for the steeper power-law observed for weak forcing <cit.>. However, that analysis was applied to a four-wave kinetic equation, relevant for Bose gas dynamics when there is no condensate. The need for a deeper understanding of turbulent regimes in a quantum fluid motivates further analysis of the highly condensed regime <cit.>.
In this work, we use the Gross-Pitaevskii Equation (GPE) to simulate the development of steady turbulence in a cuboid box-trap. Energy is injected via an oscillating potential that shakes the box along one axis, and damped by selective removal of high energy atoms <cit.>. By finely varying the forcing energy from weak to strong relative to the superfluid chemical potential we identify regimes of turbulence by their power laws, vortex distributions, and two point correlations. Motivated by a recent reformulation of spectral analysis for quantum fluids <cit.>, we apply those methods to the turbulent states. All quantum-phase information missing from semi-classical velocity power spectra is retained, allowing highly resolved decomposition of wave-action spectra.
For weak forcing, we observe k^-3.5 scaling as reported in experiments and other numerical studies. Further decomposition shows that the compressible and quantum pressure components are in lock step throughout the power-law range of k, consistent with a Bogoliubov-phonon direct cascade. For strong forcing, the system contains many vortices, and approaches a k^-7/3 power law for the compressible kinetic energy only, consistent with the WWT prediction of inverse particle cascade scaling in the non-condensed regime <cit.>. The phase coherence length of the GPE wave function drops to the healing length in this regime associated with dense vortex tangles.
This paper is organized as follows: In secII, we present our theoretical model of forced and damped Gross-Pitaevskii turbulence. secIII details the identification of two distinct turbulent regimes through analysis of density fluctuations, vortex distributions, and the components of kinetic energy. In secIV, we apply a reformulated spectral analysis to these regimes. Finally, in secV we discuss the implications of our findings and present our conclusions.
§ THEORETICAL MODEL
Far below the critical temperature for condensation, T ≪ T_c, Bose-Einstein condensed dilute gases are well described by the Gross-Pitaevskii Equation <cit.>. Generalizations of the GPE to finite temperature have also been developed via several different approaches, including exact phase space methods <cit.>, number-conserving Bogoliubov methods <cit.>, the Zaremba-Nikuni-Griffin (ZNG) formulation of two-fluid theory <cit.>, and the truncated-Wigner classical field method <cit.>; see <cit.> for reviews of generalized GPE approaches to the dynamics of the dilute Bose gas.
Under conditions of weak forcing, the GPE provides a quantitative treatment of the system, directly comparable with experiments. Stronger forcing involves large energy injection and requires careful consideration. Due to the numerical scale of the system and the challenges of running large-scale simulations for very long physical times (of order 10s), in this work we take the approach of constructing a minimal GPE model, similar to other recent numerical treatments <cit.>. Our primary differences are using higher working precision which allows consideration of stronger forcing, and larger spatial sizes in units of the healing length. Ideally, we aim to observe a power-law over a decade of wavenumbers in order to rule out the possibility of falsely attributing power-law behavior to a crossover <cit.>.
The GPE is
iħ∂ψ (r,t)/∂ t = [ -ħ^2/2m∇^2 + V(r,t) + g|ψ(r,t)|^2 ] ψ(r,t),
where g=4πħ^2 a/m, ħ is the reduced Planck constant, a is the S-wave scattering length, and m is the atomic mass. For a homogeneous ground state with chemical potential μ, we define the healing length ξ via
μ = g n_0 = ħ^2/(mξ^2).
The natural units of time and length are τ=ħ/μ and ξ=ħ/√(mμ), respectively. To make our results more concrete, we report them using parameters for ^87Rb: m=1.44× 10^-25 kg and a=5.8× 10^-9 m.
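For orientation, the following sketch evaluates these natural units for the quoted ^87Rb parameters and a chemical potential μ = k_B × 1 nK; it is a unit-conversion check only, with standard values assumed for ħ and k_B.

```python
import numpy as np

hbar = 1.054571817e-34      # J s
kB = 1.380649e-23           # J / K
m = 1.44e-25                # kg, 87Rb mass as quoted in the text
a = 5.8e-9                  # m, s-wave scattering length as quoted

mu = kB * 1e-9              # chemical potential, k_B x 1 nK
g = 4 * np.pi * hbar**2 * a / m

xi = hbar / np.sqrt(m * mu)          # healing length
tau = hbar / mu                      # natural time unit
n0 = mu / g                          # homogeneous density satisfying g n0 = mu

print(f"healing length xi = {xi * 1e6:.2f} micrometres")
print(f"time unit tau     = {tau * 1e3:.2f} ms")
print(f"density n0        = {n0:.2e} m^-3")
```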
In Eq. GPEdim, the potential V(r,t), consists of a hard-walled cuboid potential, as well as a time-dependent forcing potential:
V(r,t) = V_static(r) + V_F(r,t)
The walls of the static trapping potential are slightly smoothed rather than being piecewise defined, giving both a better description of experimental reality, and improved numerical stability due to eliminating Gibbs phenomena <cit.>. The cuboid box trap has a large flat region with dimensions (L_x,L_y,L_z)=(40,30,20)ξ. The trap walls are modeled by shifting a steep transition potential
V_wall(x) =
V_b sin(π x/2ℓ_b)^24 for |x|≤ℓ_b/2
V_bH(x) otherwise
to coordinates x-L_x, etc., where H(x) is the Heaviside step. Over a region of width ℓ_b=6ξ centered on x=3ξ the potential V_wall(x) ramps from zero up to V_b = 30μ, providing sufficiently steep walls for box-confined dynamics while avoiding overly sharp changes in the potential; the shape of the potential causes the flat density to drop away steeply at x=0, so that after the shift, the homogeneous region is constrained to ± L_x. Numerically finding the ground state ψ_0(𝐫) of V_static(𝐫) using imaginary time evolution for chemical potential μ = k_B× 1nK results in a total atom number N ≃ 7.2 × 10^5. The loss of initial anisotropy of the ground state under time evolution provides a useful indicator of the development of steady isotropic turbulence.
Generation of steady-state turbulence requires both energy injection and dissipation. Energy injection is provided by an oscillating linear potential applied along the z-direction in the form
V_F(r,t) = U_Fz/L_zsin(ω_Ft),
where ω_F is the driving frequency. U_F is the driving energy scale, here characterising the extra energy per particle introduced at the edge of the box. To explore different turbulent regimes, we use a fixed driving frequency of ω_F = 4 Hz and vary the forcing amplitude over the interval 0.1≤ U_F/μ≤ 5. The lower limit is a very weak perturbation, while the upper limit represents strong forcing.
Dissipation occurs in the form of high-energy atoms escaping the trapping potential <cit.>. We model this numerically by introducing a dissipative `potential' outside of the box trap. The total static complex potential is
V_static(r) ≡ V_box(r) - iV_diss(r)
where
V_diss(r) =
2.5μ if |u| > 0.5L_u + 5ξ, u ∈{x,y,z}
0 otherwise.
The momentum required for a particle to escape is a function of the height of the trap walls <cit.>, and for our trapping potential loss occurs at dimensionless dissipation wavenumber k_Dξ = √(2V_b/μ) = √(60)≃ 7.74 [This choice also gives sufficient margin between the dissipation k-value and the Nyquist k-values for a grid of 256^3 points (Appendix <ref>). High-energy particles are thus dissipated well before their spatial grid representation becomes inaccurate. ]. This energy and spatially selective loss is the only dissipation used in our simulations, and provides the energy sink required for a steady state to form under conditions of steady forcing.
To create turbulence in the forced and damped GPE, we apply V_F to the ground state ψ_0 and evolve until t=t_f≡ 10 seconds. This time interval ensures a steady state is reached for all U_F values. We use a grid of 256^3 points and time-evolve using a pseudo-spectral Runge-Kutta method, referred to as the RK4IP <cit.>. A fixed time step Δ t=10^-3s is chosen to ensure numerical convergence in all simulations. The code was written in Julia and optimized to achieve a minimal memory footprint and run on GPU hardware using the CUDA.jl library <cit.>. We achieved a runtime of 38 minutes per second of dynamics on an NVIDIA A100 GPU using Float64 representation. For details of the algorithm and error analysis see Appendix <ref>.
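The production code is the Julia RK4IP solver described above. As a self-contained illustration of forced and damped GPE evolution, the sketch below advances the dimensionless GPE (ħ = m = 1, lengths in ξ, energies in μ) on a small two-dimensional grid with a simpler second-order split-step Fourier scheme, a shaken linear potential, and an imaginary absorbing potential outside the box; the grid size, box geometry, forcing parameters, and the scheme itself are illustrative simplifications, not the parameters of the reported simulations.

```python
import numpy as np

# Dimensionless units: hbar = m = 1, lengths in xi, energies in mu.
N = 128
L = 40.0
x = (np.arange(N) - N // 2) * (L / N)
X, Z = np.meshgrid(x, x, indexing="ij")
kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KZ = np.meshgrid(kx, kx, indexing="ij")
K2 = KX**2 + KZ**2

Lbox = 15.0                                                        # half-width of a square box
V_box = 30.0 * (np.maximum(np.abs(X), np.abs(Z)) > Lbox)           # hard walls
V_diss = 2.5 * (np.maximum(np.abs(X), np.abs(Z)) > Lbox + 5.0)     # absorber outside the box

U_F, omega_F = 1.0, 0.1        # forcing amplitude and frequency (illustrative values)
def potential(t):
    V_force = U_F * (Z / (2 * Lbox)) * np.sin(omega_F * t)         # shaking along z
    return V_box + V_force - 1j * V_diss                           # imaginary part removes atoms

# Crude box ground-state guess with unit bulk density.
psi = (np.maximum(np.abs(X), np.abs(Z)) < Lbox).astype(complex)

dt = 1e-2
kinetic_half = np.exp(-1j * 0.5 * K2 * dt / 2)    # exp(-i k^2/2 * dt/2)
for step in range(2000):
    t = step * dt
    psi = np.fft.ifft2(kinetic_half * np.fft.fft2(psi))            # half kinetic step
    psi *= np.exp(-1j * dt * (potential(t) + np.abs(psi) ** 2))    # potential + nonlinearity
    psi = np.fft.ifft2(kinetic_half * np.fft.fft2(psi))            # half kinetic step

print("remaining norm:", np.sum(np.abs(psi) ** 2) * (L / N) ** 2)
```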
§ DEVELOPMENT OF TURBULENCE
§.§ Vortices and density waves
To outline the general phenomenology, we first present the dynamics that occur at a forcing amplitude of U_F=μ. The initial ground state, shown in 3d_turb(a), responds to forcing by weakly oscillating between the sides of the trap, creating weak density waves, as seen in 3d_turb(b). After sufficient energy is injected, the superfluid develops many small vortex rings parallel to the x-y plane [3d_turb(c)]. After approximately two seconds, the system transitions to an isotropic density and vortex distribution involving many vortices and significant energy, density, and phase fluctuations [3d_turb(d)]. For this particular forcing, there are no extended vortex lines in the steady state, but many small vortex rings are present.
To study the response to anisotropic forcing along z, we consider condensate density and vortex distributions on the y=0 plane for a range of forcing amplitudes and times, shown in heatmaps. Vortices are detected by calculating the circulation of the quantum phase around each grid plaquette in the plane. We see that weak forcing excites small amplitude fluctuations, but contains insufficient energy to generate bulk vortices, as seen in heatmaps for U_F=0.2μ. As U_F increases, vortices appear at the edges of the condensate and eventually in the bulk as small rings. They do not survive as extended line vortices in the steady state until the forcing reaches U_F≈ 1.2μ. For strong forcing, e.g. heatmaps, U_F=3μ, the steady state involves a dense and isotropic vortex distribution. The transition from weak surface vortices to dense bulk vortices in the steady state is evident in the three-dimensional vortex distribution, heatmaps(b), but harder to identify due to the proliferation of surface vortices.
excite_energies(a) and (b) show the normalization and energy per particle of the system during excitation. We observe a brief delay between when forcing begins and when particles have sufficient energy to escape the trap, shown in excite_energies(a). Consequently, the energy per particle of the system rapidly increases [excite_energies(b)]. Once dissipation takes effect, the energy per particle approaches an approximate steady-state, oscillating about a mean value that increases with U_F; the system thus approaches an energetic steady state while continuing to lose particles.
§.§ Energy decomposition
To further investigate the role of vortices and compressible excitations, we perform a decomposition of the kinetic energy <cit.>. Using a Madelung-transform, we can represent the complex wave function (away from vortex cores) in terms of two real fields: the density, ρ = |ψ|^2, and the quantum phase, Θ.
ψ(r,t) = √(ρ(r,t))e^iΘ(r,t)
Using the velocity field v(r,t) = ħ/m ∇Θ (r,t), we construct the density-weighted velocity field [The ordinary velocity field 𝐯 diverges near the core of vortices, and so cannot be used for decomposition. By using 𝐮 we can apply the Helmholtz decomposition to a twice continuously differentiable vector field, as required by the decomposition theorem.], u = √(ρ)v, and perform a Helmholtz decomposition of the vector field into incompressible and compressible components
u = u^i + u^c,
where the incompressible component is divergence-free and associated with vortices, and the compressible component is curl-free, associated with density excitations:
∇·u^i = 0, ∇×u^c = 0.
We also define a quantum pressure vector field <cit.>
u^q = (ħ/m) ∇√(ρ).
The quantum pressure is not a physical velocity field, but instead arises from density gradients. The total kinetic energy can then be written as the sum of the three components
E_kin ≡∫ d^3𝐫ħ^2|∇ψ|^2/2m=E_kin^c+E_kin^i+E_kin^q,
where
E_kin^α =m/2∫ d^3𝐫|𝐮^α(𝐫)|^2, α∈{i,c,q}.
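A compact numerical sketch of this decomposition for a wave function sampled on a periodic grid is given below (dimensionless units ħ = m = 1, spectral derivatives); the regularisation of 1/√ρ near vortex cores, the treatment of the k = 0 mode, and the toy example are illustrative choices.

```python
import numpy as np

def kinetic_energy_components(psi, dx):
    """Return (E_i, E_c, E_q) for a wave function psi on a periodic 3D grid
    with spacing dx, in units with hbar = m = 1."""
    shape = psi.shape
    k = [2 * np.pi * np.fft.fftfreq(n, d=dx) for n in shape]
    KX, KY, KZ = np.meshgrid(*k, indexing="ij")
    K = np.stack([KX, KY, KZ])
    K2 = np.sum(K**2, axis=0)
    K2[0, 0, 0] = 1.0                          # avoid division by zero for the k = 0 mode

    def grad(f):
        fk = np.fft.fftn(f)
        return np.stack([np.fft.ifftn(1j * Ki * fk) for Ki in K])

    rho = np.abs(psi) ** 2
    sqrt_rho = np.sqrt(rho)
    # Density-weighted velocity u = sqrt(rho) v = Im(psi* grad psi) / sqrt(rho).
    u = np.imag(np.conj(psi) * grad(psi)) / np.maximum(sqrt_rho, 1e-12)
    u_q = np.real(grad(sqrt_rho))              # quantum-pressure field

    # Helmholtz decomposition in Fourier space: the compressible part is parallel to k.
    uk = np.stack([np.fft.fftn(ui) for ui in u])
    div_k = np.sum(K * uk, axis=0)
    uk_c = K * (div_k / K2)
    u_c = np.real(np.stack([np.fft.ifftn(ui) for ui in uk_c]))
    u_i = u - u_c

    dV = dx**3
    energy = lambda f: 0.5 * np.sum(f**2) * dV
    return energy(u_i), energy(u_c), energy(u_q)

# Toy usage: a uniform-density plane wave gives a constant velocity field, so under this
# convention all kinetic energy sits in the divergence-free component and E_c = E_q = 0.
n = 32; L = 16.0; dx = L / n
x = (np.arange(n) - n // 2) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
psi = np.exp(1j * (2 * np.pi / L) * X)
print(kinetic_energy_components(psi, dx))
```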
Figures <ref>(c) and (d) show E_kin^i and E_kin^c during the initial period of excitation. Both components oscillate in response to the forcing, acquiring a mean value that increases with U_F. The kinetic energy components of the steady-state display a nonlinear dependence on U_F, indicating different regimes of turbulence, shown in excite_energies(e). The relative importance of compressible and incompressible energy indicates a change from wave to vortex turbulence. Significant kinetic energy is stored in density gradients even in the absence of vortices, evident in the quantum pressure.
Figure <ref>(e) is consistent with the behavior shown in heatmaps: from U_F = 0.2μ, E^c_kin sharply increases as density fluctuations grow [excite_energies(e)]; for the same U_F values, E^i_kin increases gradually as vortices are present only at the condensate surface. Recent reports of wave-turbulence in BECs generally correspond to these forcing amplitudes (U_F < μ) <cit.>, which is consistent with the dominance of compressible kinetic energy in this region.
At U_F ≃1.2μ, a distinct change is evident in excite_energies(e). E^i_kin increases rapidly, associated with the formation of bulk vortices, and E^c_kin has a much weaker response to increased forcing. Finally, another change in response occurs for U_F > 2μ, where incompressible kinetic energy and quantum pressure have a weaker response to increased forcing. In heatmaps this corresponds to the steady state vortex distribution becoming homogeneous. There may be a limit to the number of vortices that can stably exist within a BEC of a certain size and density, bounding growth of incompressible kinetic energy with forcing.
This analysis suggests distinct turbulent regimes exist for different forcing strengths. In the following sections, we apply tools of spectral analysis to relate these regimes to previous observations and theoretical predictions, aiming to identify power law behavior associated with particular regimes in excite_energies(e).
§.§ Spectral Analysis
Our general aim is to identify power-law behavior in kinetic energy spectra, to gain insight into the underlying mechanism of transport and the state of the turbulent Gross-Pitaevskii field. In this section, we summarize relevant results of the spectral analysis formulation presented in Ref. <cit.>, that will allow such identification.
For any two complex vector fields, u and v, the spectral density of their inner product ⟨𝐮𝐯⟩(k), is defined as
⟨u | v⟩≡∫_0^∞ dk ⟨uv⟩(k).
It can be shown <cit.> that in three dimensions
⟨uv⟩(k) = k^2/2π^2∫ d^3r sin(k |𝐫|)/k|𝐫| C[u,v](r),
where C[u,v](r) is the spatial two-point correlation of u and v:
C[u,v](r) ≡∫ d^3 R ⟨u| R - r/2 ⟩⟨R + r/2|v⟩.
This reformulation of the spectral analysis problem offers several advantages. First, carrying out the angular integral in Fourier-space analytically, removes the need for numerical binning to create spectral functions of k=|𝐤|. Second, by decoupling the position and Fourier-space resolution, it allows arbitrary resolution spectra to be computed. Finally, a corresponding system-averaged two-point correlation can easily be extracted <cit.>:
G_u v(r) =1/4π∫ d^3 𝐫^'δ(r-|𝐫^'|) C[𝐮, 𝐯](𝐫^')
=∫_0^∞ dk sin(k r)/kr⟨𝐮𝐯⟩(k).
Depending on which fields u and v are inserted into spec_IP, a variety of spectral distributions can be generated for any given wave function. For example, choosing u = v = ψ(r), we obtain
N = ∫_0^∞ dk ⟨ψψ⟩(k),
where the momentum spectrum, describing the particle occupation of momentum-states is
⟨ψψ⟩(k) =k^2/2π^2∫ d^3𝐫sin(k |𝐫|)/krC[ψ,ψ](𝐫),
with corresponding two-point spatial correlator of the field
G_ψψ(r) =∫_0^∞ dk sin(k r)/kr⟨ψψ⟩(k).
Eqs. spec_dens and N_int are one-dimensional integrals in k-space. However, spectra in WWT literature <cit.>
are usually defined such that a three-dimensional integral returns the total inner product. This introduces a factor of 4π k^2 in the definition of the wave-action spectrum
n(k) ≡1/4π k^2⟨ψψ⟩ (k),
where
N = ∫ d^3𝐤 n(k) = ∫_0^∞ dk 4π k^2 n(k)
We emphasize that there is no assumption of isotropy in the definition. Instead, formally integrating over the Fourier-space solid angle generates the sinc kernel in psispec. Another example is found by instead using the gradient of the wave function, 𝐮=𝐯=∇ψ, giving the kinetic energy spectral-density (again introducing the spherical factor 4π k^2)
e_kin(k) ≡1/4π k^2⟨∇ψ∇ψ⟩(k),
related to the total kinetic energy by
E_kin = ∫ d^3k e_kin(k) = ∫_0^∞ dk 4π k^2 e_kin(k).
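The sketch below evaluates the angle-averaged spectral density directly from the definition above on a periodic grid: the two-point correlation is obtained with FFTs and the radial sinc kernel is then applied at arbitrary k values. Periodic boundary conditions and dimensionless units are assumed, and the implementation favours clarity over performance; it is not the authors' code.

```python
import numpy as np

def spectral_density(u, v, dx, k_values):
    """Angle-averaged spectral density <u v>(k) for scalar fields u, v on a
    periodic 3D grid with spacing dx, evaluated at the requested k values."""
    shape = u.shape
    dV = dx**3
    # Two-point correlation C(r) = sum_R u*(R) v(R + r) dV via the correlation theorem.
    C = np.fft.ifftn(np.conj(np.fft.fftn(u)) * np.fft.fftn(v)) * dV

    # Separation coordinates of the periodic grid, wrapped to [-L/2, L/2).
    r1d = [dx * ((np.arange(n) + n // 2) % n - n // 2) for n in shape]
    RX, RY, RZ = np.meshgrid(*r1d, indexing="ij")
    R = np.sqrt(RX**2 + RY**2 + RZ**2)

    spec = []
    for k in k_values:
        kernel = np.sinc(k * R / np.pi)        # sin(kR)/(kR), with sinc(0) = 1
        spec.append((k**2 / (2 * np.pi**2)) * np.real(np.sum(kernel * C)) * dV)
    return np.array(spec)

# Toy usage: the wave-action spectrum n(k) of a single plane wave peaks at its wavenumber.
n = 32; L = 16.0; dx = L / n
x = (np.arange(n) - n // 2) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
psi = np.exp(1j * (2 * np.pi * 4 / L) * X)
k_values = np.linspace(0.2, 5.0, 60)
nk = spectral_density(psi, psi, dx, k_values) / (4 * np.pi * k_values**2)
print(k_values[np.argmax(nk)])   # close to 2*pi*4/L, i.e. about 1.57
```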
We now address the decomposition problem. To set up a decomposition of the kinetic energy that includes all quantum phase information <cit.>, we define three fields:
(w^i,w^c, w^q) ≡ (iu^i, iu^c, u^q) e^iΘ.
These definitions furnish a linear decomposition of the gradient ∇ψ = (m / ħ)[ w^i + w^c + w^q ]. Using w^α as input fields to spec_dens, we obtain the respective components of the kinetic energy-density:
e_kin^α(k) ≡1/4π k^2⟨w^αw^α⟩(k), α∈i,c,q.
The kinetic energy densities calculated in e_kin_components retain all quantum phase information [The standard decomposition in quantum-turbulence studies <cit.> involves kinetic energy density calculated semi-classically as a velocity power spectrum via Plancherel's theorem (Appendix. <ref>), which discards quantum phase information.]. If instead we ignore the phase factor, as is usually done <cit.>, we would obtain the standard velocity power spectra [The usual definition does not factor out the spherical measure 4π k^2, here written explicitly for convenience in connecting with recent literature.].
ε_kin^α(k) ≡1/4π k^2⟨u^αu^α⟩(k) , α∈i,c,q.
An important feature of the velocity power spectra components is that while they integrate to give the respective component of kinetic energy, E^α_kin, they do not sum locally in k-space to give the power spectral density <cit.>
ε_kin(k) ≠ε^i_kin(k) + ε^c_kin(k) + ε^q_kin(k).
Conversely, the components of the energy density do add locally, albeit with additional cross-terms describing redistribution of energy between scales
e_kin(k) = e^i_kin(k) + e^c_kin(k) + e^q_kin(k)
+ e^ic_kin(k) + e^iq_kin(k) + e^cq_kin(k).
The cross-terms integrate to zero and thus do not contribute to the total kinetic energy, but are required to get the correct total energy at each k <cit.>. In our simulations e^α_kin(k) and the corresponding ε^α_kin(k) were found to differ significantly, as detailed in Appendix <ref>.
As an example to check our formulation, we plot the time evolution of n(k) in k3.5. As forcing is initiated, a peak in the spectrum is evident at the wavenumber k_L_z=2π/L_z, corresponding to the forcing lengthscale. This peak then migrates towards higher k-values, as expected for a direct cascade. Shortly after reaching the dissipation length scale, n(k) remains approximately constant in time apart from a change in amplitude due to particle loss. For this particular forcing, the power law form n(k)∼ k^-3.5 is evident in k3.5, consistent with previous works <cit.>.
§ STEADY STATE TURBULENCE
We now investigate the properties of steady-state turbulence. We use the spectral analysis developed in the previous section to further decompose the wave-action spectrum. As kinetic energy densities and number densities only differ by a factor of ħ^2k^2/2m, the spectral decomposition allterms can be used to define the component wave-action spectra
n^α(k) = 2m/ħ^2k^2 e^α_kin(k) = m/2πħ^2k^4⟨w^αw^α⟩(k),
where α∈{i,c,q}.
We can then use the previously established decomposition of energy-densities to identify power-law behavior in distinct parts of the wave-action spectrum, considering the compressible, incompressible, and quantum pressure components [We do not consider the cross terms further in this work as they are not very intuitive to interpret. See Ref. <cit.> for a simple example.].
We decompose the steady-state wave-action spectrum according to nk_i and present the densities n^α(k) in decomposed_nk. The decomposed spectra reveal further information beyond what is available from n(k). The presented steady-state spectra are averaged over one forcing period; however, time averaging only alters the result near the forcing lengthscale, a further indication that the steady state is reached.
We do not observe any obvious power-law behavior for U_F≪μ (not shown). Within a narrow region of forcing, 0.15μ≲ U_F ≲ 0.3μ, the compressible density approximates k^-3. However, the approximate scaling only extends over a factor of 2 in wave number, insufficient to identify power-law behavior <cit.>.
Starting at U_F=0.4μ, we observe a clear k^-3.5 scaling over nearly a decade of kξ, in both compressible and quantum pressure momentum distributions, shown in decomposed_nk(a). This power-law remains intact for a wide interval of forcing energies, 0.4μ≤ U_F≤ 1.2μ, and as incompressible energy remains small, the power-law is also evident in the total spectrum n(k). This is consistent with the observed vortex configurations [heatmaps(a)], where extended bulk vortices remain rare. In this regime it is notable that n^c(k)≃ n^q(k) in the power law region of wavenumbers. We return to this point in discussion.
For U_F ≳ 1.2μ, extended vortices develop in the bulk superfluid in the steady-state, and the system enters a transitional regime without power-law behavior in any component of kinetic energy [decomposed_nk(b)]. The total wave-action spectrum gradually steepens in the ultraviolet due to an increase in the incompressible energy-density, while the compressible and quantum pressure components uncouple, and the slope of n^c(k) decreases. The appearance of bulk vortices is associated with the breakdown of k^-3.5 scaling. Eventually, we observe a wide n^c(k) ∝ k^-7/3 power-law for strong forcing, U_F≳ 2μ, in agreement with the predicted inverse particle cascade for weak forcing in the absence of a condensate [decomposed_nk(c)] <cit.>.
To further understand the different regimes of steady turbulence, in decomposed_nk(d) we plot the ratio of incompressible to compressible kinetic energy, with the identified power-law regimes. There is a clear change in the energy dependence on U_F at the boundary of each power-law regime. Initially, the ratio is rapidly decreasing, due to compressible excitations paired with a lack of vortex nucleation. With additional energy injected, this is succeeded by a sharp increase in the kinetic energy ratio (as surface vortices begin to proliferate), and the appearance of a k^-3.5 power-law. The breakdown of k^-3.5 involves extended vortices entering the bulk in the steady-state, and a steep rise in incompressible energy with U_F. The system enters a regime of vortex turbulence, but shows no scaling behavior. Finally, for U_F≳ 2μ, a k^-7/3 scaling emerges where vortices are approximately homogeneous and dense, and the incompressible energy responds only weakly to increased forcing. In this regime E^i_kin/E^c_kin > 1, a regime we can regard as strong quantum vortex turbulence, in contrast to weak-wave turbulence where E^i_kin/E^c_kin≪ 1.
While Fourier-space reveals power-law behavior, we can also examine the corresponding correlation functions in position-space. When power-law behavior weakens, the appearance of characteristic length scales can be more clearly understood in position space. Fourier-transformation of the momentum distribution gives the system-averaged spatial two-point correlation of the Gross-Pitaevskii wave function, as discussed in speca. In two_point(a), we plot the system-averaged two-point correlation function g_1(r)≡ G_ψψ(r)/G_ψψ(0), calculated by transforming ⟨ψψ⟩(k)=4π k^2n(k) to position space via Gpp. The two-point correlator of the ground state (U_F=0) decays over a scale comparable to the shortest cuboid length L_z. As U_F increases, the length scale is reduced. Eventually, for strong forcing the correlations only persist over a scale of order ξ. We define the correlation length l_c as the distance where g_1(r) drops to 1/e. In two_point(b) we plot the correlation length against U_F, together with the power law regimes identified.
Initially there is a steep drop, followed by a slower decline during the k^-3.5 regime. As vortex lines develop in steady-state, we see a second steep decline in correlation length around U_F∼1.5μ. The dense vortex regime U_F≳2μ corresponds to l_c∼ξ, and the system loses all coherence beyond the vortex core scale.
§ DISCUSSION AND CONCLUSIONS
§.§ Discussion
We identify three distinct regimes of steady turbulence through a combination of power laws, vortex distributions, and two point correlations.
Weak wave turbulence.— For weaker forcing, 0.4μ≲ U_F≲ 1.2μ, there is a ∝ k^-3.5 power law spectrum for compressible kinetic energy, quantum pressure, and the total spectrum, and a distinct absence of extended vortex lines in the bulk superfluid. In this weak-wave turbulence regime E^i_kin/E^c_kin≪ 1. Even when small vortex rings appear in the steady state as U_F approaches μ, the k^-3.5 scaling is robust and spans a decade of k-space. The forcing amplitude reported in Ref. <cit.>, U_F = 0.8μ, is in the center of the U_F range where we identify the scaling both instantaneously [k3.5] and in time averages [decomposed_nk]. In addition, we observe a clear forcing peak near the system scale L_z in the total wave-action spectrum at early times. The wave-action spectra decomposition, combined with the absence of bulk vortices, implicates a direct Bogoliubov quasiparticle cascade for the k^-3.5 power law. Indeed, Bogoliubov phonons involve a quantum pressure component associated with small scale density fluctuations when kξ≳ 1. In decomposed_nk(b), we see n^c(k) and n^q(k) are closely matched within this k range, and decouple when kξ≲ 1.
The log-corrected spectrum analysis of WWT direct cascade is strictly applicable for very weak forcing in the absence of a condensate <cit.>, and is thus not directly applicable to our system that instead involves a large condensate with dominant three wave interactions. However, the three wave kinetic equation also supports a k^-3 scaling <cit.>. Applying the log-correction to our wave-action spectra, we find good agreement if we identify the forcing scale for phonons as k_F=1/ξ, instead of the natural forcing wavenumber of the potential 2π/L_z as detailed in Appendix <ref>. Indeed, the analysis of the forcing scale by Martirosyan et al. <cit.>, reached a similar conclusion [Their result translates to k_F=√(2)/ξ in our units. However, for this estimate of the forcing scale we see a narrower flat region in the log-corrected spectrum. For further discussion see Appendix <ref>.].
Mixed turbulence.— For 1.2μ≲ U_F≲ 2.5μ the system enters a transitional regime where the k^-3.5 power-law breaks down, and power law behavior is not strongly evident in any components. The compressible and quantum pressure components decouple in k space, and the incompressible energy increases rapidly with U_F, entering a regime where E^i_kin/E^c_kin∼ 1, and the compressible and incompressible wave-action spectra become comparable in k space. This is associated with the appearance of extended vortices in the bulk superfluid. The transition is also evident in the two-point correlation function, where the correlation length decreases rapidly as vortices enter the bulk.
Strong vortex turbulence.— For 2.5μ≲ U_F, the condensate bulk becomes saturated with vortex lines and the compressible spectrum approximates a k^-7/3 power-law over more than a decade of k; this is consistent with the predicted WWT scaling for an inverse particle-cascade solution to the four-wave kinetic equation. In Ref. <cit.>, this was observed in the total spectrum for weak forcing in the absence of BEC, while here we see it in the compressible spectrum only, for strong forcing. As the power law extends far below k_ξ=1/ξ, this would be consistent with WWT if random vortices effectively destroy the condensate, and indeed the phase coherence length drops to the healing length at U_F≳ 2μ [two_point]. As our model only contains damping for high wavenumbers, a steady inverse-cascade of particles can only exist in n^c(k) if particles are transported to high k by the other components of n(k). Physical consistency thus requires a compressible particle source near the dissipation scale k∼ k_D to generate an inverse cascade; one candidate mechanism is the steady annihilation of vortices, creating bursts of sound at small scales. In Figure <ref>(d),(e) the driving of compressible energy by the forcing appears to become ineffective for U_F≳ 2μ, supporting the interpretation that compressible energy is instead injected at small scales via vortex annihilation.
In the high-energy regime, the incompressible energy density is extremely stable across both U_F values and forcing phase. In all other cases, spectra visibly oscillate in the infrared with forcing, necessitating cycle-averaging of spectra. However, the dense distribution of vortices appears to depend very weakly on the phase of the forcing. Wave-turbulence closure requires the initial phase and amplitude of the Gross-Pitaevskii field be independent and random, whereas we find that for weak forcing there is still a high degree of coherence. Conversely, once the condensate becomes saturated with vortices, we see a drastic decrease in the correlation length, consistent with the assumptions underlying WWT. Despite strong forcing, wave interactions may be sufficiently weak that WWT theory remains relevant, particularly if injected energy primarily resides in the quantum pressure and incompressible components of kinetic energy.
Calculation of spectra with arbitrary infrared k-resolution and decomposition of wave-action spectra require the reformulated spectral densities <cit.>. In Appendix <ref> we compare the decomposition via the wave-action spectrum with the standard decomposition of the the velocity power spectrum used in previous studies. We find the two are similar only in a minority of cases, typically in the inertial range for well-developed turbulence.
§.§ Conclusions
Motivated by recent experiments in forced quantum turbulence <cit.>, we have simulated a Gross-Pitaevskii model for a wide range of forcing energies, generating and analysing steady turbulence. We find evidence for regimes of weak-wave and strong-vortex turbulence separated by a mixed regime that lacks clear power-law behavior. Distinct regimes of power-law behavior are identified by application of high resolution spectral analysis <cit.>, and further studied through vortex distributions and two-point correlations. Our results are largely consistent with other recent works <cit.>, while also enabling further decomposition of the wave-action spectrum. For weak forcing, the bulk superfluid is approximately vortex free, and exhibits a power-law of ∼ k^-3.5, consistent with weak-wave turbulence predictions for a direct cascade <cit.>. The compressible and quantum pressure components also show the same power-law scaling and are in close agreement. Log-correction analysis implicates a forcing wavenumber k_F∼ξ^-1 for the cascade. In this regime the superfluid order parameter remains coherent over the system scale, with coherence only weakly degraded by small vortex rings. In contrast, for strong forcing the steady state develops a dense tangle of extended vortices that limits phase coherence to the healing length scale. Power-law scaling ∼ k^-7/3 is observed in the compressible component only, consistent with the weak-wave turbulence prediction of an inverse cascade in non-condensate four-wave kinetics <cit.>.
Several open questions would benefit from future exploration. It would be interesting to study the rapid vortex growth regime in the transition from k^-3.5 to k^-7/3 scaling, together with the different kinetic equation approaches and their associated scaling solutions. Another open question is identifying the true source of high-k compressible energy in the k^-7/3 regime. Detailed study of vortex tangles, energy fluxes, and velocity increments <cit.> would also reveal further insights into the nature and formation of quantum turbulence.
§ ACKNOWLEDGEMENTS
We thank Gevorg Martirosyan and Tim Copland for stimulating discussions.
TF would like to thank the University of Otago for financial support.
The authors wish to acknowledge the use of New Zealand eScience Infrastructure (NeSI) high performance computing facilities as part of this research.
[grant_turbulence_1962] H. L. Grant, R. W. Stewart, and A. Moilliet, Turbulence spectra from a tidal channel, Journal of Fluid Mechanics 12, 241 (1962). https://doi.org/10.1017/S002211206200018X
[henn_emergence_2009] E. A. L. Henn, J. A. Seman, G. Roati, K. M. F. Magalhães, and V. S. Bagnato, Emergence of Turbulence in an Oscillating Bose-Einstein Condensate, Physical Review Letters 103, 045301 (2009). https://doi.org/10.1103/PhysRevLett.103.045301
[neely_characteristics_2013] T. W. Neely, A. S. Bradley, E. C. Samson, S. J. Rooney, E. M. Wright, K. J. H. Law, R. Carretero-González, P. G. Kevrekidis, M. J. Davis, and B. P. Anderson, Characteristics of Two-Dimensional Quantum Turbulence in a Compressible Superfluid, Physical Review Letters 111, 235301 (2013). https://doi.org/10.1103/PhysRevLett.111.235301
[kolmogorov_local_1941] A. Kolmogorov, The Local Structure of Turbulence in Incompressible Viscous Fluid for Very Large Reynolds' Numbers, Akademiia Nauk SSSR Doklady 30, 301 (1941). https://ui.adsabs.harvard.edu/abs/1941DoSSR..30..301K
[navon_emergence_2016] N. Navon, A. L. Gaunt, R. P. Smith, and Z. Hadzibabic, Emergence of a turbulent cascade in a quantum gas, Nature 539, 72 (2016). https://doi.org/10.1038/nature20114
[gauthier_giant_2019] G. Gauthier, M. T. Reeves, X. Yu, A. S. Bradley, M. A. Baker, T. A. Bell, H. Rubinsztein-Dunlop, M. J. Davis, and T. W. Neely, Giant vortex clusters in a two-dimensional quantum fluid, Science 364, 1264 (2019). https://doi.org/10.1126/science.aat5718
[reeves_identifying_2015] M. T. Reeves, T. P. Billam, B. P. Anderson, and A. S. Bradley, Identifying a Superfluid Reynolds Number via Dynamical Similarity, Physical Review Letters 114, 155302 (2015). https://doi.org/10.1103/PhysRevLett.114.155302
[garcia-orozco_universal_2022] A. D. García-Orozco, L. Madeira, M. A. Moreno-Armijos, A. R. Fritsch, P. E. S. Tavares, P. C. M. Castilho, A. Cidrim, G. Roati, and V. S. Bagnato, Universal dynamics of a turbulent superfluid Bose gas, Physical Review A 106, 023314 (2022). https://doi.org/10.1103/PhysRevA.106.023314
[dogra_universal_2023] L. H. Dogra, G. Martirosyan, T. A. Hilker, J. A. P. Glidden, J. Etrych, A. Cao, C. Eigen, R. P. Smith, and Z. Hadzibabic, Universal equation of state for wave turbulence in a quantum gas, Nature 620, 521 (2023). https://doi.org/10.1038/s41586-023-06240-z
[madeira_universal_2024] L. Madeira, A. D. García-Orozco, M. A. Moreno-Armijos, A. R. Fritsch, and V. S. Bagnato, Universal scaling in far-from-equilibrium quantum systems: An equivalent differential approach, Proceedings of the National Academy of Sciences 121, e2404828121 (2024). https://doi.org/10.1073/pnas.2404828121
[onsager_statistical_1949] L. Onsager, Statistical hydrodynamics, Il Nuovo Cimento (1943-1954) 6, 279 (1949). https://doi.org/10.1007/BF02780991
[bogoliubov_theory_1947] N. N. Bogoliubov, On the Theory of Superfluidity, Journal of Physics (USSR) 11, 23 (1947).
[tsubota_spin_2014] M. Tsubota and K. Fujimoto, Spin turbulence in spinor Bose-Einstein condensates, Journal of Physics: Conference Series 497, 012002 (2014). https://doi.org/10.1088/1742-6596/497/1/012002
[chomaz_dipolar_2022] L. Chomaz, I. Ferrier-Barbut, F. Ferlaino, B. Laburthe-Tolra, B. L. Lev, and T. Pfau, Dipolar physics: A review of experiments with magnetic quantum gases, Reports on Progress in Physics 86, 026401 (2022). https://doi.org/10.1088/1361-6633/aca814
[navon_synthetic_2019] N. Navon, C. Eigen, J. Zhang, R. Lopes, A. L. Gaunt, K. Fujimoto, M. Tsubota, R. P. Smith, and Z. Hadzibabic, Synthetic dissipation and cascade fluxes in a turbulent quantum gas, Science 366, 382 (2019). https://doi.org/10.1126/science.aau6103
[johnstone_evolution_2019] S. P. Johnstone, A. J. Groszek, P. T. Starkey, C. J. Billington, T. P. Simula, and K. Helmerson, Evolution of large-scale flow from turbulence in a two-dimensional superfluid, Science 364, 1267 (2019). https://doi.org/10.1126/science.aat5793
[kraichnan_inertial_1967] R. H. Kraichnan, Inertial Ranges in Two-Dimensional Turbulence, The Physics of Fluids 10, 1417 (1967). https://doi.org/10.1063/1.1762301
[shukla_nonequilibrium_2022] V. Shukla and S. Nazarenko, Nonequilibrium Bose-Einstein condensation, Physical Review A 105, 033305 (2022). https://doi.org/10.1103/PhysRevA.105.033305
[zhu_direct_2023] Y. Zhu, B. Semisalov, G. Krstulovic, and S. Nazarenko, Direct and Inverse Cascades in Turbulent Bose-Einstein Condensates, Physical Review Letters 130, 133001 (2023). https://doi.org/10.1103/PhysRevLett.130.133001
[zakharov_kolmogorov_1992-1] V. Zakharov, V. Lvov, and G. E. Falkovich, Kolmogorov Spectra of Turbulence 1: Wave Turbulence (Springer-Verlag, New York, 1992).
[nazarenko_wave_2011] S. Nazarenko, Wave Turbulence, Lecture Notes in Physics, Vol. 825 (Springer, Berlin, Heidelberg, 2011). https://doi.org/10.1007/978-3-642-15942-8
[dyachenko_optical_1992] S. Dyachenko, A. C. Newell, A. Pushkarev, and V. E. Zakharov, Optical Turbulence - Weak Turbulence, Condensates and Collapsing Filaments in the Nonlinear Schrodinger-Equation, Physica D 57, 96 (1992).
[martirosyan_equation_2024] G. Martirosyan, K. Fujimoto, and N. Navon, An Equation of State for Turbulence in the Gross-Pitaevskii model, arXiv:2407.08738 (2024). https://doi.org/10.48550/arXiv.2407.08738
[bradley_spectral_2022] A. S. Bradley, R. K. Kumar, S. Pal, and X. Yu, Spectral analysis for compressible quantum fluids, Physical Review A 106, 043322 (2022). https://doi.org/10.1103/PhysRevA.106.043322
[dalfovo_theory_1999] F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari, Theory of Bose-Einstein condensation in trapped gases, Reviews of Modern Physics 71, 463 (1999). https://doi.org/10.1103/RevModPhys.71.463
[steel_dynamical_1998] M. J. Steel, M. K. Olsen, L. I. Plimak, P. D. Drummond, S. M. Tan, M. J. Collett, D. F. Walls, and R. Graham, Dynamical quantum noise in trapped Bose-Einstein condensates, Physical Review A 58, 4824 (1998). https://doi.org/10.1103/PhysRevA.58.4824
[gardiner_number-conserving_2007] S. A. Gardiner and S. A. Morgan, Number-conserving approach to a minimal self-consistent treatment of condensate and noncondensate dynamics in a degenerate Bose gas, Physical Review A 75, 043621 (2007). https://doi.org/10.1103/PhysRevA.75.043621
[zaremba_dynamics_1999] E. Zaremba, T. Nikuni, and A. Griffin, Dynamics of Trapped Bose Gases at Finite Temperatures, Journal of Low Temperature Physics 116, 277 (1999). https://doi.org/10.1023/A:1021846002995
[sinatra_classical-field_2001] A. Sinatra, C. Lobo, and Y. Castin, Classical-field method for time dependent Bose-Einstein condensed gases, Physical Review Letters 87, 210404 (2001). https://doi.org/10.1103/PhysRevLett.87.210404
[graham_statistical_1973] R. Graham, Statistical Theory of Instabilities in Stationary Nonequilibrium Systems with Applications to Lasers and Nonlinear Optics, in Springer Tracts in Modern Physics, edited by G. Hohler (Springer, 1973), p. 1.
[polkovnikov_phase_2010] A. Polkovnikov, Phase space representation of quantum dynamics, Annals of Physics 325, 1790 (2010). https://doi.org/10.1016/j.aop.2010.02.006
[blakie_dynamics_2008] P. Blakie, A. Bradley, M. Davis, R. Ballagh, and C. Gardiner, Dynamics and statistical mechanics of ultra-cold Bose gases using c-field techniques, Advances in Physics 57, 363 (2008). https://doi.org/10.1080/00018730802564254
[proukakis_finite-temperature_2008] N. P. Proukakis and B. Jackson, Finite-temperature models of Bose–Einstein condensation, Journal of Physics B: Atomic, Molecular and Optical Physics 41, 203002 (2008). https://doi.org/10.1088/0953-4075/41/20/203002
[clauset_power-law_2009] A. Clauset, C. R. Shalizi, and M. E. J. Newman, Power-Law Distributions in Empirical Data, SIAM Review 51, 661 (2009). https://doi.org/10.1137/070710111
[piotrowska_spectral_2019] J. Piotrowska, J. M. Miller, and E. Schnetter, Spectral methods in the presence of discontinuities, Journal of Computational Physics 390, 527 (2019). https://doi.org/10.1016/j.jcp.2019.03.048
[Note1] This choice also gives sufficient margin between the dissipation k-value and the Nyquist k-values for a grid of 256^3 points (Appendix <ref>). High-energy particles are thus dissipated well before their spatial grid representation becomes inaccurate.
[caradoc-davies_three-dimensional_2000] B. M. Caradoc-Davies, R. J. Ballagh, and P. B. Blakie, Three-dimensional vortex dynamics in Bose-Einstein condensates, Physical Review A 62, 011602 (2000). https://doi.org/10.1103/PhysRevA.62.011602
[besard_rapid_2019] T. Besard, V. Churavy, A. Edelman, and B. D. Sutter, Rapid software prototyping for heterogeneous and distributed platforms, Advances in Engineering Software 132, 29 (2019). https://doi.org/10.1016/j.advengsoft.2019.02.002
[besard_effective_2019] T. Besard, C. Foket, and B. De Sutter, Effective Extensible Programming: Unleashing Julia on GPUs, IEEE Transactions on Parallel and Distributed Systems 30, 827 (2019). https://doi.org/10.1109/TPDS.2018.2872064
[nore_kolmogorov_1997] C. Nore, M. Abid, and M. E. Brachet, Kolmogorov Turbulence in Low-Temperature Superflows, Physical Review Letters 78, 3896 (1997). https://doi.org/10.1103/PhysRevLett.78.3896
[Note2] The ordinary velocity field 𝐯 diverges near the core of vortices, and so cannot be used for decomposition. By using 𝐮 we can apply the Helmholtz decomposition to a twice continuously differentiable vector field, as required by the decomposition theorem.
[Note3] The standard decomposition in quantum-turbulence studies <cit.> involves kinetic energy density calculated semi-classically as a velocity power spectrum via Plancherel's theorem (Appendix <ref>), which discards quantum phase information.
[Note4] The usual definition does not factor out the spherical measure 4π k^2, here written explicitly for convenience in connecting with recent literature.
[reeves_signatures_2014] M. T. Reeves, T. P. Billam, B. P. Anderson, and A. S. Bradley, Signatures of coherent vortex structures in a disordered two-dimensional quantum fluid, Physical Review A 89, 053631 (2014). https://doi.org/10.1103/PhysRevA.89.053631
[Note5] We do not consider the cross terms further in this work as they are not very intuitive to interpret. See Ref. <cit.> for a simple example.
[Note6] Their result translates to k_F=√(2)/ξ in our units. However, for this estimate of the forcing scale we see a narrower flat region in the log-corrected spectrum. For further discussion see Appendix <ref>.
[zhao_kolmogorov_2024] M. Zhao, J. Tao, and I. Spielman, Kolmogorov turbulence in atomic Bose-Einstein condensates, arXiv:2408.04715 (2024). https://doi.org/10.48550/arXiv.2408.04715
§ NUMERICAL METHOD
Evaluation of the potential term in the GPE, as well as the interaction term |ψ(r,t)|^2, is straightforward using element-wise operations. However, the kinetic term contains a Laplacian, which is less straightforward. By applying a Fourier-transform to the wavefunction, we shift to momentum-space, where the kinetic operator becomes diagonal. This method is commonly known as pseudo-spectral differentiation and makes use of the Fourier-relation
∇^2 ψ (r,t) ↔ -k^2 ϕ(k,t),
where the arrows denote a spatial Fourier-transform, and ϕ(k,t) is the Fourier-transform of ψ(r,t). This method is efficient and offers excellent error properties compared to alternatives; however, it does come with some limitations. One limitation is that the boundary conditions imposed on the numerical grid become necessarily periodic. Another issue is that aliasing can occur when representing certain momentum values, based on the grid's density. Several tests were performed to ensure these factors were appropriately mitigated.
We begin by ensuring that a sufficiently fine grid is being used to represent the system and that numerical error is within an acceptable margin. We test this only for the most violent and energetic case (U_F = 5μ). The error for smaller forcing amplitudes is expected to be approximately bounded by this case. Excitation was simulated across a range of step-sizes and grid points, with no damping potential applied. In the absence of damping, normalisation of the wavefunction should be conserved, allowing any deviation to be attributed to numerical error. By monitoring the normalisation over time, we can quantify the error for each case.
As we would expect, Fig. <ref>(a) shows the error per time-step decreasing with step-size, until reaching a floor determined by the limit of Float64 precision. Reducing the step-size beyond this point increases the total error [Fig. <ref>(b)], as more steps are required per second of dynamics. For this reason a step-size of Δ t = 10^-3 was chosen for the calculations.
After selecting the step-size, we determine the necessary grid density to ensure convergence. Similar to step-size, an overly dense grid is not necessarily optimal. Although it does not increase error, it significantly raises memory-usage and computation time. The number of points along each axis of the grid, M, was chosen as powers of two to ensure optimal performance under the FFTW algorithm. The total number of grid points is then M^3. To test for convergence, we performed identical excitation of a ground state using various grid sizes. We then calculate the total kinetic energy, compressible kinetic energy, and the wave-action spectrum for each case.
In Fig. <ref>, results for M = 64 show significant deviations compared to results from denser grids. This suggests that the grid-size is insufficiently dense to accurately represent the system. For the three largest grid sizes, Fig. <ref>(a) and (b) show minimal variation. However, Fig. <ref>(c) reveals a notable difference between grid sizes, as the 256^3 and 128^3 grid sizes exhibit ultraviolet features that appear absent in the 512^3 grid. This feature is attributed to the Nyquist-frequency of Fourier-transforms.
Just as temporal Fourier-transforms have a maximum resolvable frequency determined by the sampling frequency, a spatial Fourier-transform has maximum resolvable k-values based on the spatial sampling frequency for each dimension. For a grid with M^3 points and lengths L_i, we have
[ Δ x, Δ y, Δ z ] = [ L_x/M, L_y/M, L_z/M]
k_ Nyq, i = (1/2) (2π/Δ i) = π/Δ i, i ∈{x, y, z}
Occupation of k-values beyond the Nyquist limit (Eq. <ref>) can result in aliasing, where the condensate is misrepresented in Fourier-space. This results in the observed artifacts. Grids with more points provide higher spatial sampling, which shifts the ultraviolet artifact to higher k-values.
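For reference, a minimal Python sketch of this Nyquist check is given below; the grid size, box lengths, and dissipation wavenumber are illustrative assumptions only.

import numpy as np

M = 256                                   # grid points per axis
L = np.array([30.0, 30.0, 30.0])          # box lengths L_x, L_y, L_z (illustrative units)
k_D = 8.0                                 # assumed dissipation wavenumber, same units

dx = L / M                                # grid spacings
k_nyq = np.pi / dx                        # k_Nyq,i = (1/2)(2*pi/dx_i)

print("Nyquist wavenumbers :", k_nyq)
print("margin k_Nyq / k_D  :", k_nyq / k_D)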
For our simulations, we opted for a grid size of M = 256. This leaves a sufficient gap between the dissipation k-scale and the Nyquist k-scale, while still allowing efficient computation, with each second of Float64 dynamics taking approximately 38 minutes. Taking full advantage of this, we were able to test a large number of U_F values [Fig. <ref>(e)].
§ SPECTRAL ANALYSIS
§.§ Calculation of spectral densities using Plancherel's theorem
Expressing kinetic energy in terms of our density-weighted velocity field, we can utilize Plancherel's theorem to equivalently express the integral in k-space rather than position-space
E_kin = m/2∫ dr |u(r)|^2 = m/2∫ dk |u(k)|^2,
where u(k) is the three-dimensional Fourier-transform of u(r). Integrating the right-hand side of Eq. <ref> over the solid angle in k-space, we obtain a distribution which integrates over k to give the kinetic energy.
E_kin = ∫^∞_0 dk m k^2/2∫^2π_0 dϕ_k ∫^π_0 dθ_k sin(θ_k) |u(k, ϕ_k, θ_k)|^2
= ∫^∞_0 dk ε_kin(k)
There is then a temptation to describe ε_kin(k) as the kinetic energy spectral density. However, this is somewhat misleading, as the components of ε_kin(k) do not sum locally in k-space, limiting its interpretation as an energy density.
ε_kin(k) ≠ ε^i_kin(k) + ε^c_kin(k) + ε^q_kin(k)
Using ε_kin(k) also implicitly introduces a semi-classical approximation, as quantum phase information is neglected when calculating |u|^2 in Eq. <ref>. We can more appropriately describe ε_kin(k) as a velocity power spectral density. For wavefunctions represented on a cartesian grid (typically the case in numerical studies), there will also be fewer points with small |k| than large, leading to a lack of resolution in the infrared region of the resulting spectra. Conversely, the k-values for the reformulated spectral-density (Eq. <ref>) are completely decoupled from the grid representing the wavefunction, allowing for arbitrarily high resolution in k-space. While k-space resolution can be arbitrarily high, the available range is still limited by Nyquist k-values.
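A minimal Python sketch of the shell-binned (angle-integrated) velocity power spectrum described above is given below; the field u(r) is a random stand-in for the density-weighted velocity field, and the grid, box, and mass parameters are assumptions used only for illustration.

import numpy as np

M, L, m = 64, 20.0, 1.0
dV = (L / M) ** 3
u = np.random.default_rng(0).standard_normal((3, M, M, M))   # stand-in for u(r)

k1d = 2.0 * np.pi * np.fft.fftfreq(M, d=L / M)
KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
kmag = np.sqrt(KX**2 + KY**2 + KZ**2)

# |u(k)|^2 with a continuum-style normalisation, so that summing over all modes
# reproduces the real-space kinetic energy (Plancherel / Parseval)
uk2 = sum(np.abs(np.fft.fftn(u[i]))**2 for i in range(3)) * dV**2 / L**3

nbins = M // 2
edges = np.linspace(0.0, kmag.max() * (1 + 1e-12), nbins + 1)
shell = np.digitize(kmag.ravel(), edges) - 1
e_shell = np.bincount(shell, weights=0.5 * m * uk2.ravel(), minlength=nbins)[:nbins]
eps_kin = e_shell / np.diff(edges)          # spectral density per unit k

print("integrated spectrum:", (eps_kin * np.diff(edges)).sum())
print("direct E_kin       :", 0.5 * m * (u**2).sum() * dV)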
§.§ Comparison of energy densities and velocity power spectra
To demonstrate the importance of including phase information, Fig. <ref> compares equivalent energy density and velocity power spectra components. Both are calculated identically, except that ε^α_kin(k) use the density-weighted velocity fields, and e^α_kin(k) use the w^α fields which incorporate quantum phase, defined in Eq. <ref>.
In Fig. <ref>, we see that ε_kin(k) and e_kin(k) can be in close agreement [Fig. <ref>(b)] or extremely distinct [Fig. <ref>(a)], depending on the case. Generally, the two distributions are similar in the inertial range for well-developed turbulence. During initial excitation and at extreme lengthscales, there is no clear pattern relating the two spectra. This indicates that, for well-developed turbulence, the phase behaviour in the inertial range is such that neglecting quantum phase and imposing a semi-classical approximation does not introduce significant error into energy spectral densities there. An interesting question then is whether this also holds for other forms of turbulence.
Fig. <ref> demonstrates that inclusion of phase information is clearly important in certain cases. However, this information is typically inaccessible in experimental studies of BECs. Therefore, it would be valuable to understand the relationship between ε_kin(k) and e_kin(k), and identify conditions under which they are similar enough that phase information becomes less critical.
§ COMPARISON WITH LOGARITHMIC-CORRECTION MODEL
Another explanation for the k^-3.5 power law is that the n(k) ∝ k^-3 stationary solution is marginally non-local, requiring a log-correction of the form
n(k) = C_d P_0^1/3 k^-3ln^-1/3(k/k_F).
The first two factors are constants and k_F is the wavenumber associated with the forcing length scale, where energy is injected.
This logarithmic correction emerges from the introduction of an infrared cutoff at the forcing wavenumber, k_F. In Ref. <cit.>, k_F is a defined parameter. With our method of energy injection this is not the case, and k_F is slightly more ambiguous. We excite the condensate along the z-direction, so one might expect k_F = k_L_z. However, in practice we find the forcing peak appears at slightly higher k-values than k_L_z [Fig. <ref>], implying the forcing length scale is slightly smaller than L_z. Regardless, we test the model's fit using a wide range of k_F values.
Applying the logarithmic correction to spectra from the k^-3.5 regime, we generally find poor agreement for k_F values near the actual forcing lengthscale. However, for k_F = 1/ξ, we find good agreement in the total momentum distribution and slightly stronger agreement in the compressible and quantum pressure components [Fig. <ref>]. This k_F value is clearly distinct from the actual forcing wavenumber, supporting the argument presented in Ref. <cit.> that the physical energy-injection lengthscale is distinct from the effective injection scale for the four-wave cascade. The strength of this agreement is, however, limited by the size of the inertial range; a more compelling case would require a larger inertial range, obtained with a larger dissipation lengthscale and a finer numerical grid to ensure the model remains stable.
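The fitting procedure can be summarised by the following minimal Python sketch, which fits the log-corrected form n(k) = C k^-3 ln^-1/3(k/k_F) for several trial k_F values; the spectrum below is synthetic and the k_F values are illustrative assumptions, not the simulation data themselves.

import numpy as np

rng = np.random.default_rng(1)
k = np.logspace(0.1, 1.2, 60)                      # wavenumbers in units of 1/xi (assumed)
n_meas = 2.0 * k**-3 * np.log(k / 0.9)**(-1/3) * rng.lognormal(0, 0.05, k.size)

def fit_quality(k_F, k, n):
    """Least-squares fit of log n = log C - 3 log k - (1/3) log ln(k/k_F)."""
    valid = k > k_F * 1.05                         # need ln(k/k_F) > 0
    model = -3 * np.log(k[valid]) - np.log(np.log(k[valid] / k_F)) / 3
    logC = np.mean(np.log(n[valid]) - model)
    resid = np.log(n[valid]) - (logC + model)
    return np.exp(logC), np.sqrt(np.mean(resid**2))

for k_F in (0.5, 0.9, 1.0, 1.4):
    C, rms = fit_quality(k_F, k, n_meas)
    print(f"k_F = {k_F:4.2f}  ->  C = {C:5.2f},  rms log-residual = {rms:.3f}")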
The slightly improved fit of the model in n^q(k) and n^c(k), as opposed to the total momentum distribution, is likely due to incompressible excitations irrelevant to the WWT cascade being discarded. In Ref. <cit.> there is strong agreement with no partitioning of the spectrum, possibly because fewer vortices are nucleated.
Where we use a finite box-trap with hard-wall boundaries, Ref. <cit.> uses a homogeneous condensate with periodic boundaries. Our forcing term is an emulation of experimental forcing, whereas in Ref. <cit.>, energy is injected in k-space in a narrow band about a particular k_F value. Our model also does not feature any mechanism for dissipation at large lengthscales. These factors may all play a role in introducing greater incompressible aspects to the steady turbulent state.
|
http://arxiv.org/abs/2409.03173v1 | 20240905020640 | Thermal conductivity of polymers: A simple matter where complexity matters | [
"Debashish Mukherji"
] | cond-mat.soft | [
"cond-mat.soft",
"cond-mat.mtrl-sci"
] |
[email protected]
Quantum Matter Institute, University of British Columbia, Vancouver V6T 1Z4, Canada
§ ABSTRACT
The thermal conductivity coefficient κ measures the ability of a material to conduct a heat current.
In particular, κ is an important property that often dictates the usefulness of a material over a wide range of environmental conditions.
For example, while a low κ is desirable for thermoelectric applications, a large κ is
needed when a material is used under high temperature conditions. These materials range from common crystals
to commodity amorphous polymers. The latter are of particular importance because of their use
in designing lightweight high performance functional materials. In this context, however, one of the major limitations of the amorphous polymers is
their low κ, reaching a maximum value of about 0.4 W/Km that is 2–3 orders of magnitude smaller
than in the standard crystals. Moreover, when energy is predominantly transferred through the bonded connections, κ≥ 100 W/Km.
Recently, extensive efforts have been devoted to attaining a tunability in κ via macromolecular engineering.
In this work, an overview of the recent results on the κ behavior in polymers and polymeric solids is presented.
In particular, computational and theoretical results are discussed within the context of complementary experiments.
Future directions are also highlighted.
Thermal conductivity of polymers: A simple matter where complexity matters
Debashish Mukherji
September 9, 2024
==========================================================================
§ INTRODUCTORY REMARKS
When a material is subjected to a non–equilibrium condition of temperature T, such that one end of the
material is kept at an elevated temperature T_ high and a lower temperature T_ low is maintained at
the other end, a heat current flows from the hot to the cold region. This is a direct consequence of the second law of thermodynamics and
is quantified in terms of the heat flux vector j⃗. Here, Fourier's law of heat diffusion
states that j∝( T_ high - T_ low)/ℓ across a sample of length ℓ.
The proportionality constant is the thermal transport coefficient κ, which is a key material property that commonly
dictates the usefulness of a material for a particular application <cit.>.
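As a simple numerical illustration of this definition, the following Python sketch extracts κ from a measured heat current and temperature difference; all input numbers are hypothetical and chosen only to give a polymer-like value.

# Minimal sketch: kappa from Fourier's law, j = kappa (T_high - T_low) / l.
q_rate = 2.0e-9        # heat added/removed per unit time [W] (assumed)
area = 25.0e-18        # cross-section of the sample [m^2] (assumed)
length = 50.0e-9       # distance between hot and cold regions [m] (assumed)
T_high, T_low = 320.0, 280.0

j = q_rate / area                        # heat flux [W/m^2]
kappa = j * length / (T_high - T_low)    # thermal conductivity [W/Km]
print(f"kappa = {kappa:.3f} W/Km")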
Traditionally, heat flow in crystalline materials and in nano–structures has been of primary
interest <cit.>.
Due to the long range order in crystals, the phonon mean free path Λ is rather large and thus κ≥ 100 W/Km <cit.>.
In the carbon–based materials, κ can even exceed 1000 W/Km <cit.>.
A completely opposite class to the crystals is the amorphous solids, where Λ is small,
i.e., within the direct atom–to–atom contact. Here, κ≤ 2 W/Km and heat propagates via localized vibrations <cit.>.
Within the class of amorphous solids, polymers are of particular importance because they usually provide a flexible platform for
the design of advanced functional materials <cit.>.
Some examples include, but are not limited to, organic solar cells <cit.>, electronic packaging and/or heat sinking materials <cit.>,
thermal switches <cit.> and thermoelectric applications <cit.>.
However, the typical κ values of the amorphous polymeric solids are a further 5–10 times smaller <cit.>
than those of the standard amorphous materials (such as amorphous silicon). This often hinders the usefulness of polymers under high T conditions.
Most commonly known (non–conducting) commodity polymers can be categorized into the systems where non–bonded monomer–monomer
interactions are either dictated by the van der Waals (vdW) forces or by the hydrogen bonds (H–bonds) <cit.>.
Here, the vdW interaction strength is about k_ BT at a temperature T = 300 K, with k_ B the Boltzmann constant,
while the strength of a H–bond is between 4–8k_ BT depending on the dielectric constant of the medium <cit.>.
A few examples of the commodity polymers are shown in Fig. <ref>.
Note that these particular polymers are chosen because their experimental and simulation data are readily available.
Experiments have reported that κ≃ 0.1-0.2 W/Km for the vdW systems <cit.>
and for the H–bonded polymers κ→ 0.4 W/Km <cit.>.
Table <ref> lists κ and the corresponding glass transition temperatures T_ g for the polymers in Fig. <ref>.
It can be seen in Table <ref> that PMMA (a vdW–based polymer) has κ≃ 0.20 W/Km
and its T_ g≃ 378 K, while PVA (a H–bonded system) has κ≃ 0.310 W/Km
and a lower T_ g≃ 348 K <cit.>. These values indicate that T_ g and κ are not correlated, which is also visible across many polymeric systems <cit.>. Furthermore, a closer look suggests that, within a simple approximation, T_ g is directly related to the trans–to–gauche free energy barrier Δ E_ t-g (i.e., local fluctuations),
which is dictated by a delicate combination of the bonded, the angular, and the dihedral interactions along a chain backbone. The higher the Δ E_ t-g, the larger the T_ g. Here, PMMA, with its larger side group, has a higher Δ E_ t-g than PVA, hence the observed trend in T_ g. Within this simple discussion it becomes reasonably apparent that the exact T_ g is an essentially irrelevant quantity within the context of κ in polymers.
Amorphous polymeric solids are a special case because, even when their macroscopic κ values are very small,
at the monomer level they exhibit two distinct rates of energy transfer, namely energy transfer
between the bonded monomers and energy transfer between the neighboring non–bonded monomers.
In this context, a closer investigation of the polymer structures reveals that the carbon–carbon (C–C) covalent bond
constitutes the most common backbone of commodity polymers, see Fig. <ref>.
Here, it is known that the stiffness of a C–C contact is E ≥ 250 GPa <cit.>, while E of a vdW or a H–bond
system varies between 2–5 GPa <cit.>. Given that κ∝ E <cit.>, the energy transfer between
two non–bonded monomers (soft contacts) is significantly smaller than along a (stiff) bond <cit.>. A simple schematic of this scheme is shown in Fig. <ref>.
The thermal behavior of polymers is predominantly dictated by the non–bonded contacts, while the energy transfer along a bonded contact plays a less important role. This is particularly because a chain in a frozen configuration follows random walk statistics <cit.>, i.e., when it is quenched to T ≪ T_ g
from a melt configuration.
Within this picture, when heat flows along a chain contour, it experiences scattering due to the bends and the kinks
along the path <cit.>. Energy also occasionally hops off to a non–bonded neighboring monomer.
A combination of these two effects facilitates a knocking down of κ in the amorphous polymers <cit.>. Note also that the exact monomer structure, i.e., the side groups connected to a backbone, plays an additional (delicate) role <cit.>, which will be discussed at a later stage.
Over the last 2–3 decades, extensive efforts have been devoted to study the heat flow in polymers using the experimental, theoretical, and computational approaches <cit.>.
In particular, even though polymers are a class of simple matter with great potential for designing flexible materials, establishing a microscopic understanding of their heat transport is rather complex. Here, one of the grand challenges in this field is to attain a predictive tunability in κ (almost at will) using macromolecular engineering.
This requires a protocol that can properly account for a delicate balance between the bonded and the non–bonded interactions, the chain conformations,
and their morphology. Motivated by the above, this manuscript aims to highlight the latest developments in the field of polymer thermal conductivity.
For this purpose, comparative experimental and simulation results will be discussed to put forward the key concepts.
§ EFFECT OF BLENDING ON THE THERMAL CONDUCTIVITY OF POLYMERS
In a system consisting of one (linear) polymer component, κ is rather limited because of the
restrictive monomer–level interactions <cit.>. To circumvent this problem,
studies have suggested that κ of a polymeric solid may be enhanced by blending a second component with a relatively higher κ.
Here, an obvious choice is the carbon–based materials, such as the carbon nanotube (CNT) and polymer composites <cit.>.
In such a composite, a significant increase in κ requires a CNT concentration ϕ_ CNT
exceeding the typical percolation threshold. While a CNT–polymer composite certainly shows a significantly higher
κ than the bare polymers <cit.>, it also has two major drawbacks: (1) it loses the underlying flexibility (typical of polymers) because of the large ϕ_ CNT, and its physical properties
are then dominated by the CNTs present in the background polymers.
(2) Polymers are rather cost effective, with typical
prices about 2–3 orders of magnitude lower than those of the CNTs,
and thus a CNT–polymer composite inherently becomes significantly costlier.
A more plausible alternative is the polymer blends,
where the non–bonded interactions can be altered by changing ϕ_ second of the second polymer component.
Here, however, a prerequisite is that the two components remain fairly miscible over the full range of ϕ_ second <cit.>.
On the contrary, when the two components in a blend phase separate, they create zones within a sample
consisting of the individual components. These separate zones usually have very weak interfacial interaction
and thus induce resistance for the heat flow, akin to the Kapitza resistance <cit.>.
In symmetric polymer blends (i.e., when both polymer components have a comparable degree of polymerization N_ℓ),
κ varies monotonically between the two pure components <cit.>.
A recent experimental study, however, reported that an asymmetric blend (consisting of longer PAA and shorter PAP chains) shows a much larger enhancement, κ≃ 1.5 W/Km, around a PAP concentration of ϕ_ PAP≃ 30% <cit.>.
It was argued that the PAP chains act as the H–bonded cross–linkers between the neighboring PAA chains,
forming a 3–dimensional H–bonded stiff network.
On the contrary, another set of experiments did not attain the same enhancement, instead finding that PAA–PAP blends phase separate around ϕ_ PAP≃ 30% <cit.>.
Motivated by the above contradicting experimental results, a simulation study suggested
that the miscibility can be enhanced when PAP is replaced with PAM, i.e., a PAA–PAM system consisting of
a long PAA and a short PAM <cit.>.
PAA–PAM showed a weak non–monotonic variation in κ with ϕ_ PAM, attaining a maximum κ≃ 0.4 W/Km around ϕ_ PAM≃ 30%.
Note also that the size of the PAM molecules in this simulation study was chosen to be of the order of the persistence length ℓ_ p≃ 0.75 nm (or 3 monomers),
i.e., a PAM acts as a stiff linker that fits perfectly between two PAA chains.
When the cross–linker length N_ℓ≫ℓ_ p, the linkers form flexible cross–links. On one hand this makes a network soft <cit.>, on the other hand they also induce effective free volume (weak spots) within a network <cit.>. Collectively, these two effects reduce κ. Something that speaks in favor of this picture is that the experimental results for cross–linked PAA reported κ≃ 0.27 W/Km, which is about 25% smaller than the κ≃ 0.37 W/Km measured in a linear PAA <cit.>.
The above discussions suggest that there is a need to look beyond the simple amorphous polymers. Therefore, in the
following section some analytical approaches are first presented that may provide a guiding tool for
the remaining discussions presented herein.
§ ANALYTICAL MODELS
In an isotropic material, κ is directly related to the volumetric heat capacity c,
the group velocity v_ g,i(ν), and the phonon mean–free path Λ(ν) = τ(ν) v_ g,i(ν).
Here, τ(ν) is the phonon lifetime and ν is the vibrational frequency.
Starting from the above description, κ can be written as,
κ(ν) = 1/3∑_i c(ν) v_ g,i^2(ν) τ(ν).
For a non–conducting amorphous material a well known theoretical
description is the minimum thermal conductivity model (MTCM) <cit.> that is discussed in the following.
§.§ The minimum thermal conductivity model
Following Eq. <ref>, the general expression of κ for a 3–dimensional isotropic system reads <cit.>,
κ = (ρ_ N h^2/3k_ B T^2)
∑_i ∫τ(ν) v_ g,i^2(ν)
ν^2 e^hν/k_ BT/(e^hν/k_ BT -1 )^2
g(ν) dν,
where ρ_ N, g(ν), h are the total particle number density, the vibrational density of states, and the Planck constant, respectively.
Within this description, MTCM uses the Debye model of lattice vibrations in Eq. <ref> and proposes
that a sample can be divided into regions of size Λ(ν)/2, whose frequencies are given by the low ν sound wave velocities v_i,
and thus approximates τ = 1/2ν <cit.> and v_ g,i = v_i. This gives,
κ = (ρ_ N h^2/6k_ B T^2)
(v_ℓ^2 + 2v_t^2 ) ∫ℐ(ν)
g(ν) dν,
with
ℐ(ν) = ν e^hν/k_ BT/(e^hν/k_ BT -1 )^2.
v_ℓ = √(C_ 11/ρ_ m) and v_t = √(C_ 44/ρ_ m) are the longitudinal and the transverse sound wave velocities, respectively.
Here, C_ 11 = K + 4 C_ 44/3, K is the bulk modulus, C_ 44 is the shear modulus, and ρ_ m is the mass density.
One key quantity in Eq. <ref> is g(ν), which can be calculated by the Fourier transform
of the mass–weighted velocity auto–correlation function c_ vv(t) = ∑_i m_i ⟨v_i(t) ·v_i(0)⟩ <cit.>
obtained from the classical simulations,
g(ν) = 1/C∫_0^∞cos(2πν t) c_ vv(t)/c_ vv(0) dt.
The prefactor C ensures that ∫ g(ν) dν = 1. The representative g(ν) for four different commodity polymers are shown
in Fig. <ref>.
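The following minimal Python sketch illustrates this cosine-transform route from c_ vv(t) to a normalized g(ν); the correlation function below is a synthetic damped-oscillator placeholder rather than an actual MD output, and the sampling interval and window are assumptions.

import numpy as np

dt = 1.0e-15                                    # sampling interval of c_vv(t), 1 fs (assumed)
t = np.arange(0.0, 2.0e-12, dt)                 # 2 ps correlation window (assumed)
# toy c_vv(t): two damped oscillations standing in for soft and stiff vibrational modes
c_vv = np.exp(-t / 0.3e-12) * (np.cos(2*np.pi*2.0e12*t) + 0.5*np.cos(2*np.pi*30.0e12*t))

nu = np.linspace(0.0, 100.0e12, 2000)           # frequencies up to 100 THz
g = np.array([np.trapz(np.cos(2*np.pi*f*t) * c_vv / c_vv[0], t) for f in nu])
g = np.clip(g, 0.0, None)                       # discard small negative lobes from truncation
g /= np.trapz(g, nu)                            # enforce  integral g(nu) dnu = 1

print("normalisation check     :", np.trapz(g, nu))
print("dominant peak near [THz]:", nu[np.argmax(g)] / 1e12)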
It can be appreciated that the polymers have many high ν quantum degrees–of–freedom that do not contribute to κ at T = 300 K.
For example, a C–H bond vibration frequency in a polymer is ν≃ 90 THz, while the representative ν_ room≃ 6.2 THz at T = 300 K.
Such a mode, together with many other stiff modes (for ν > ν_ room), remains quantum–mechanically frozen at T = 300 K <cit.>.
If the contributions of these individual modes are not properly incorporated in Eq. <ref> via the Bose–Einstein weighted function in Eq. <ref>,
one can easily overestimate κ within the classical simulations in comparison to the experimental data <cit.>.
Standard analytical descriptions typically use the Debye estimate of parabolic vibrational density of states g_ D(ν) = 3 ν^2/ν_ D^3
in Eq. <ref>. Here, ν_ D is the Debye frequency and is written as <cit.>,
ν_ D = ( 9 ρ_ N/4π)^1/3( 1/v_ℓ^3 + 2/v_t^3)^-1/3.
g_ D(ν) is certainly a good approximation for the standard amorphous solids when
T ≪Θ_ D, with Θ_ D = hν_ D/k_ B being the Debye temperature.
Typical examples are amorphous silica and/or silicon, where Θ_ D≥ 480 K <cit.>.
However, in the case where g(ν) is complex, such as in the polymers (see Fig. <ref>),
the simplistic g_ D(ν) may lead to a wrong estimate of the low ν vibrational modes
and thus to artifacts in the computed κ. This is particularly because Θ_ D≃ 180-220 K
for the commodity polymers listed in Table <ref>, i.e., 20–40% smaller than the T = 300 K <cit.>
where typical experiments and simulations are performed.
When exact g(ν) from Fig. <ref> are used in Eq. <ref>, κ values can be reasonably
reproduced within 5–20% error, see Fig. <ref>.
Note that this data is obtained by taking v_ℓ and v_t from the experiments <cit.>,
while ρ_ N and g(ν) are taken from simulations <cit.>.
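To illustrate how the MTCM integral above is evaluated in practice, a minimal Python sketch is given below; the number density, sound velocities, and g(ν) are rough, PMMA-like placeholders (assumptions), not the measured inputs used for the figure.

import numpy as np

kB, h = 1.380649e-23, 6.62607015e-34
T = 300.0
rho_N = 8.5e28                   # particle number density [1/m^3] (assumed)
v_l, v_t = 2800.0, 1400.0        # longitudinal / transverse sound velocities [m/s] (assumed)

nu = np.linspace(1.0e10, 100.0e12, 4000)
g = nu**2 * np.exp(-nu / 4.0e12)             # toy g(nu) peaked at low frequencies
g /= np.trapz(g, nu)                         # normalise:  integral g(nu) dnu = 1

x = h * nu / (kB * T)
I_nu = nu * np.exp(x) / np.expm1(x) ** 2     # the Bose-Einstein weighting function I(nu)

kappa = (rho_N * h**2 / (6.0 * kB * T**2)) * (v_l**2 + 2.0 * v_t**2) * np.trapz(I_nu * g, nu)
print(f"kappa_MTCM ~ {kappa:.2f} W/Km")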
§.§.§ High temperature approximation with correction of the stiff modes
Within the high T classical limit, i.e., when all modes are considered in Eq. <ref> <cit.>,
the original MTCM for amorphous polymers can be written as,
κ_ MTCM = (π/48)^1/3 k_ BN^2/3
(v_ℓ + 2v_t).
The corrections for the stiff modes in a polymer can then be incorporated in Eq. <ref> by considering an effective number of
atoms 𝒩 = 2(N - N_ H)/3 and c = 3𝒩 k_ B. Here, 𝒩 eliminates the stiff modes associated
with the number of hydrogen atoms N_ H and other stiff backbone modes. Following this, Eq. <ref> can then be written as,
κ_ MTCM = (π/432)^1/3 k_ B^1/3c^2/3 (v_ℓ + 2v_t).
Eq. <ref> gives estimates that are 30% larger than the corresponding experimental data <cit.>.
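A minimal numerical sketch of this stiff-mode-corrected estimate is given below; the atomic number densities and sound velocities are assumed, PMMA-like values used only for illustration.

import math

kB = 1.380649e-23
v_l, v_t = 2800.0, 1400.0        # sound velocities [m/s] (assumed)
N_tot = 1.0e29                   # total atomic number density [1/m^3] (assumed)
N_H = 5.3e28                     # number density of hydrogen atoms [1/m^3] (assumed)

N_eff = 2.0 * (N_tot - N_H) / 3.0            # effective density after removing stiff modes
c = 3.0 * N_eff * kB                         # corrected volumetric heat capacity [J/(m^3 K)]

kappa = (math.pi / 432.0) ** (1/3) * kB ** (1/3) * c ** (2/3) * (v_l + 2.0 * v_t)
print(f"kappa_MTCM (stiff-mode corrected) ~ {kappa:.2f} W/Km")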
§.§.§ Accurate computation of κ via specific heat correction
An accurate computation of κ within the standard classical molecular simulation setups is a daunting
task, because polymers have quantum degrees–of–freedom whose exact contribution to the heat balance is rather non–trivial.
Furthermore, T > Θ_ D for the commodity polymers (see Section <ref>)
and thus the polymer thermal properties are dominated by the low ν classical modes that are
controlled by the non–bonded interactions (or the localized vibrations). On the contrary, the stiff modes (i.e., for ν > ν_ room) remain
quantum–mechanically frozen and do not contribute to κ. In this context, one of the key quantities
that dictates the κ behavior in Eq. <ref> for a polymer is its c.
In classical simulations, every mode in a polymer contributes equally to c, i.e., as given by the Dulong–Petit classical estimate.
Here, it is well documented that the classically computed c is always overestimated in comparison to the corresponding
experimental data <cit.>, which thus
also leads to an overestimation of κ <cit.>.
Given the above discussion, if c is estimated accurately, it will automatically lead to an accurate computation of κ.
Recently a method has been proposed to compute the quantum corrected c. This method uses the Binder approach to estimate the
contributions of the stiff harmonic modes <cit.>, which is then used to get the difference between the classical
and the quantum descriptions <cit.>,
Δ c_rel(T)/k_ B
= ∫_0^∞{ 1 -
(hν/k_BT)^2 e^hν/k_ BT/(e^hν/k_ BT -1 )^2} g(ν) dν.
Finally, the quantum corrected estimate of c(T) can be given by,
c(T) = c_ cl(T) - Δ c_rel(T).
Here, the classical heat capacity is calculated using c_ cl = [H(T + Δ T) - H(T - Δ T)]/2 Δ T,
where H(T) is the enthalpy. The main advantage of using Eq. <ref> is that the stiff harmonic modes
are corrected, while the contributions from the anharmonic (low ν) modes remain unaffected.
Using Eq. <ref>, one can then calculate the quantum corrected κ(T),
κ(T) = c(T)κ_ cl(T)/c_ cl(T).
Here, κ_ cl(T) is the classically computed thermal transport coefficient, obtained using the standard equilibrium <cit.> and/or non–equilibrium <cit.> methods.
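This correction procedure amounts to rescaling the classical κ by the ratio c(T)/c_ cl(T), as in the following minimal Python sketch; here g(ν) is a toy density of states and κ_cl is an assumed classical value, both chosen only for illustration, and heat capacities are handled per vibrational mode so that only the ratio matters.

import numpy as np

kB, h = 1.380649e-23, 6.62607015e-34

nu = np.linspace(1.0e10, 100.0e12, 4000)
g_soft = nu**2 * np.exp(-nu / 4.0e12)                      # low-frequency, classical-like modes
g_stiff = np.exp(-((nu - 30.0e12) / 5.0e12) ** 2) \
        + np.exp(-((nu - 90.0e12) / 5.0e12) ** 2)          # stiff C-H / backbone-like modes
g = 0.4 * g_soft / np.trapz(g_soft, nu) + 0.6 * g_stiff / np.trapz(g_stiff, nu)

def correction_factor(T):
    """Return c(T)/c_cl(T) = 1 - Delta_c_rel/c_cl, taking k_B per mode classically."""
    x = h * nu / (kB * T)
    bose_weight = x**2 * np.exp(x) / np.expm1(x) ** 2      # quantum per-mode heat capacity / k_B
    return np.trapz(bose_weight * g, nu)

T = 300.0
kappa_cl = 0.58                      # classically computed kappa [W/Km] (illustrative)
f = correction_factor(T)
print(f"c(T)/c_cl(T)            = {f:.2f}")
print(f"quantum-corrected kappa = {kappa_cl * f:.2f} W/Km")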
Fig. <ref> shows the computed κ(T) for PMMA using c(T) <cit.>.
It can be seen that the quantum corrected κ(T) compares reasonably with the corresponding experimental data,
while the classical estimate is about a factor of three too high.
The method proposed in Ref. <cit.> also highlighted different strategies to estimate c(T)
accounting for the missing degrees–of–freedom (DOF) within the united–atom and/or coarse–grained models.
A direct implication is that a certain percentage error in κ computed in the united–atom models comes from
the missing DOFs <cit.>.
It is also important to highlight that the simple scaling in Eq. <ref> works reasonably for polymers because
only the low ν modes dominate their thermal properties. When dealing with the crystalline materials, special attention
is needed. For example, in a crystal, not only does c have quantum effects; v_ g (i.e., stiffness) <cit.> and τ
also have quantum contributions at low T ≪Θ_ D.
§.§ Single chain energy transfer
In the introduction, it is discussed that a C–C bond is significantly stiffer <cit.> than the typical non–bonded contacts.
A direct consequence of this microscopic interaction contrast is that the energy transfer between two bonded monomers is over 100 times faster than
the energy transfer between two non–bonded monomers <cit.>. Taking motivation from such distinct microscopic interactions,
experimental and computational/theoretical studies have reported a large enhancement in κ in the systems where bonded
interactions dominate, such as in the single extended chains <cit.>, polymer fibers <cit.>,
and/or molecular forests <cit.>. However, until recently there existed no direct theoretical
framework that could quantitatively decouple the effects of these two separate microscopic interactions in dictating the macroscopic heat flow in polymers <cit.>.
Therefore, in this section, the key ingredients of this simple chain energy transfer model (CETM) will be discussed.
Starting from a homogeneous sample consisting of linear polymers, CETM considers the diffusion of energy along a
chain contour, i.e., between the bonded monomers. This involves multiple hops along a chain
before infrequent energy transfers to the neighboring non–bonded monomer belonging to another chain.
Note that there may also be non–bonded contacts between the two monomers belonging to the same chain, but topologically
far from one another. However, this will require loop–like conformations of a chain in a dense polymeric system.
Furthermore, the free energy difference to form such a loop of segment length 𝒩 is given
by ℱ(𝒩) = mk_ BT ln(𝒩) with a critical exponent m = 1.95 <cit.>.
Within this picture, a loop can only form when it overcomes a free energy barrier of several k_ BT,
which has a very low probability in a dense system. Note also that the CETM method does not distinguish between the
intra– and inter–molecular non–bonded hopping.
Considering the first and the second neighboring bonded monomer transfers along a chain contour,
the rate of change in the internal energy ℰ for any monomer i can then be simply written as <cit.>,
dℰ_i/ d t = c_ m d T_i/ d t
=G_b(T_i+1-2T_i+T_i-1)
+G̃_b(T_i+2-4T_i+1+6T_i-4T_i-1+T_i-2)
+nG_nb(T_bulk-T_i) ,
where G_b/c_ m, G̃_b/c_ m, and G_nb/c_ m are the bonded, next nearest bonded, and non–bonded energy transfer
rates, respectively. Here, the individual G values are thermal conductances, c_ m is the specific heat of one monomer, n is the number of non–bonded neighbors, T_i is the temperature of the
i^ th monomer, and T_ bulk = 300 K.
Following the treatment presented in Ref. <cit.>, diagonalizing Eq. <ref> along the chain
contour will lead to an exponential relaxation of the eigen–modes,
T̂_p(t) ∝e^-α_p t,
with,
T̂_p (t) = ∑_i = 0^N-1{T_i (t) -T_ bulk}cos[pπ/N(i+ 1/2)],
and
α_p = 4 G_ b/c_ msin^2(pπ/2N) - 16 G̃_ b/c_ msin^4(pπ/2N)
+ n G_ nb/c_ m.
In a nutshell, p gives the effective length scale in a system, i.e., a particular p mode corresponds to a length scale of N_ℓ/p.
Fig. <ref> shows the variation in α_p for PMMA.
Fitting the simulation data (symbols) with Eq. <ref> gives G_ b/G_ nb≃ 63.
Note also that G_ b/G_ nb≃ 155 for a polyethylene (PE) chain <cit.>. This difference is because of the rather bulky
side group in PMMA, which acts as an additional scattering center for the energy transfer. This aspect will be discussed at a later stage.
It can also be appreciated in Fig. <ref> that α_p almost plateaus for 4sin^2(π p/2N_ℓ) ≥ 0.9 (or p ≥ 11).
This length scale is comparable to ℓ_p≃ N_ℓ/p = 2.7 monomers (or 0.7 nm for PMMA), i.e., a length scale where
ballistic energy transfer may dominate, which is not considered within the formalism of CETM.
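In practice, the expression for α_p above is linear in the three rate combinations, so they can be extracted from the measured relaxation rates by a linear least-squares fit, as in the following minimal Python sketch; the α_p values below are synthetic placeholders, and the chain length and coordination number are assumptions.

import numpy as np

N = 50                                     # chain length in monomers (assumed)
p = np.arange(1, N)                        # eigen-mode index
s = np.sin(np.pi * p / (2 * N))

# synthetic "measured" decay rates alpha_p (1/ps), generated from known rates plus noise
true_Gb, true_Gtb, true_nGnb = 6.3, 0.4, 0.6
rng = np.random.default_rng(2)
alpha = 4*true_Gb*s**2 - 16*true_Gtb*s**4 + true_nGnb
alpha *= rng.lognormal(0.0, 0.03, p.size)

# the model is linear in G_b/c_m, Gt_b/c_m, and n*G_nb/c_m, so a linear fit suffices
A = np.column_stack([4*s**2, -16*s**4, np.ones_like(s)])
(Gb, Gtb, nGnb), *_ = np.linalg.lstsq(A, alpha, rcond=None)

n_coord = 6                                # assumed number of non-bonded neighbours
print(f"G_b/c_m = {Gb:.2f}, Gt_b/c_m = {Gtb:.2f}, n G_nb/c_m = {nGnb:.2f}  (1/ps)")
print(f"G_b/G_nb = {Gb / (nGnb / n_coord):.0f}")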
The energy transfer rates obtained using Eq. <ref> can also be used to get a theoretical estimate of thermal transport coefficient
within the Heuristic Random–Walk model <cit.>,
κ_ HRW =ρ_ N/6[n G_nbr_nb^2+(G_b-4G̃_b)
r_b^2 +G̃_br̃_b^2].
Here, r_ nb, r_ b, and r̃_ b are the average distances between a monomer
and its non-bonded first shell, first bonded, and second bonded neighboring monomers, respectively.
It should, however, be noted that κ_ HRW is underestimated for all investigated commodity
polymers <cit.>.
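For completeness, a minimal numerical sketch of the heuristic random-walk estimate is given below; all conductances, distances, and the monomer density are assumed, order-of-magnitude values, and the resulting κ_ HRW indeed comes out well below the measured values, consistent with the underestimation noted above.

# Minimal sketch: kappa_HRW = (rho_N/6)[n G_nb r_nb^2 + (G_b - 4 Gt_b) r_b^2 + Gt_b rt_b^2]
rho_N = 7.1e27        # monomer number density [1/m^3] (assumed)
n = 6                 # non-bonded first-shell neighbours (assumed)
G_b, Gt_b, G_nb = 8.0e-10, 0.5e-10, 1.3e-11   # conductances [W/K] (assumed)
r_b, rt_b, r_nb = 1.5e-10, 2.6e-10, 5.0e-10   # distances [m] (assumed)

kappa_hrw = (rho_N / 6.0) * (n * G_nb * r_nb**2
                             + (G_b - 4.0 * Gt_b) * r_b**2
                             + Gt_b * rt_b**2)
print(f"kappa_HRW ~ {kappa_hrw:.3f} W/Km")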
One of the central assumptions in Eq. <ref> is that the monomers surrounding the reference chain are kept at a constant
T_ bulk = 300 K <cit.>. This is certainly a good approximation for the common amorphous polymers,
where the heat leakage between the non–bonded monomers is rather weak and mostly restricted up to the
first non–bonded neighbor. In contrast, in the polymers and lubricants
under high pressure <cit.>, in the confined hydrocarbons <cit.>,
and/or in the systems where π-π stacking is dominant <cit.>,
heat leakage between the non–bonded monomers can be significantly enhanced. In these cases, the formalism within CETM may not be directly applicable
without properly accounting for T_ bulk, which will have a gradient as a function of the radial distance from the central chain.
§ THERMAL CONDUCTIVITY OF CROSS–LINKED POLYMER NETWORKS
A predictive tuning of κ purely based on the non–bonded interactions is certainly a non–trivial task,
if not impossible. Therefore, a more plausible protocol might be to make use of the distinct microscopic interactions (i.e., bonded vs non–bonded),
chain conformation, and possibly also their morphology to understand their effects on κ.
In this context, one of the most common classes of polymeric materials where the bonded interactions dominate their properties is the epoxies,
commonly also referred to as the highly cross–linked polymer (HCP) networks.
In a typical HCP, an individual monomer can form more than two bonds (unlike in a linear chain)
and thus forms a 3–dimensional bonded network. The HCPs are usually lightweight high performance materials with extraordinary mechanical
response <cit.>, attaining E values that can be 2–3 orders of magnitude
larger than those of the common amorphous polymers consisting of linear chains, and
may provide a suitable materials platform toward the enhancement of κ <cit.>.
Therefore, recent interest has been devoted to investigating the κ behavior in HCPs.
§.§ Amine cured epoxies
One common example of amorphous HCP is the amine cured epoxy networks <cit.>,
where monomers are cross–linked with different amine hardeners with varying stiffness and cross–linker bond length ℓ_ cb,
see the insets in Fig. <ref>. It can be seen from the experimental data in Fig. <ref> that just by changing the hardener,
κ can be tuned by about a factor of two <cit.>. Moreover, even in the best case (shown by the red circles in Fig. <ref>),
κ only increases by a factor of 1.35 in comparison to a linear PMMA, i.e., κ≃ 0.20 W/Km <cit.>.
The specific enhancement in κ is rather small considering that κ in epoxies is expected to be dominated by the bonded interactions.
What causes such a small variation in κ for HCPs? To answer this question, direct information about the network micro–structure is
needed, because several competing effects control their physical properties. In this context, obtaining any reasonable information regarding such
microscopic details is a rather difficult task within the commonly employed experimental techniques.
Therefore, simulations may be of particular interest in studying the κ behavior in epoxy networks, where
direct access to the microscopic network details is reasonably available <cit.>.
Given the chemical specificity of epoxies, one may expect that all–atom simulations would be the best possible choice.
However, creating a network structure at the all–atom level is difficult and computationally expensive, especially when dealing
with a broad range of system parameters. Complexities are further elevated because of the large system sizes coupled with
spatial and temporal heterogeneity <cit.>. Therefore, an alternative (and possibly a better choice) is a
bead–spring type generic simulation technique <cit.>. Broadly, generic simulations address the common
polymer properties that are independent of any specific chemical details and thus a large number of systems can be
explained within one physical framework <cit.>. Additionally, tuning the system parameters is rather straightforward within a generic setup,
such as the relative bond lengths, their stiffness, and/or bond orientations that are usually inspired by the underlying
chemical specific systems <cit.>.
Fig. <ref> shows κ as a function of ℓ_ cb for a set of model HCPs with different bond stiffness <cit.>.
It can be appreciated that the (relatively) soft bonds (representing the amine hardeners) give values reasonably consistent with
the experiments <cit.>, i.e., κ/κ_ amorphous≃ 1.25-1.50, and it is about 1.35 in Ref. <cit.>
(represented by an arrow in Fig. <ref>).
For the stiff bonds, a significant increase in κ is observed (represented by the diamond symbols in Fig. <ref>).
This large enhancement can be understood by looking into the network micro–structures.
From the simulation snapshots in Fig. <ref>, it can be appreciated that there are large voids (or free volume)
in all the cured samples. Such voids exist when the neighboring monomers form all their bonds pointing out of each other <cit.>
and the monomers along the periphery of a void only interact via vdW forces. These are usually the weak spots within a network,
hence resist the heat flow.
The observed void sizes are larger for the tri–functional moderately cross–linked polymers (MCP),
while the tetra–functional HCP have relatively smaller free volume.
A direct consequence is that the MCPs (open symbols) usually have lower κ than the HCPs (solid symbols)
in Fig. <ref>. In summary, it is not simply that increasing the number of bonded contacts by default increases κ.
Instead, the cross–link bond stiffness, ℓ_ cb, and their effects on the network micro–structures control κ.
§.§ Ethylene cured epoxies
Another class is the ethylene cured epoxy networks <cit.>, where the free volume can be tuned by changing the length
of the ethylene linkers, see the schematic in Fig. <ref>(a). Here, it is important to mention
that ℓ_ p≃ 0.65 nm for a PE chain (or equivalently one ethylene monomer) <cit.>.
When an ethylene linker has N_ℓ≥ 4, it is soft because of its small flexural stiffness.
The longer the linker, the more flexible it is, and thus the larger the free volume in a sample.
A direct consequence is that, together, the free volume and the soft linkers significantly reduce the
stiffness of a material and, as a result, κ decreases with increasing N_ℓ, see Fig. <ref>(b).
§ THERMAL CONDUCTIVITY OF CHAIN ORIENTED SYSTEMS
§.§ Extended chain configurations
A typical representative system where bonded interactions dominate κ is polymer fibers <cit.>,
where individual chains are extended along the direction of heat flow and thus κ is dominated
by the energy transfer between the bonded monomers <cit.>.
In this context, it has been experimentally reported that a PE fiber can attain κ > 100 W/Km <cit.>,
which is significantly higher than κ≃ 0.3 W/Km for an amorphous PE <cit.>.
A closer look at an extended chain configuration reveals that it can be viewed as a quasi one–dimensional (Q1–D)
crystalline material <cit.>. This is a direct consequence of the periodic arrangement
of monomers along a chain backbone, see the top schematic in the inset of Fig. <ref>.
In such a system, phonons carry a heat current and the coupling strength between the lattice sites is dictated by the bonded interactions.
Usually a pure (pristine) sample has a large Λ and thus also a high κ≃ 160 W/Km <cit.>, see the main panel in Fig. <ref>.
However, whenever there appears a kink or a bend along a chain contour (see the bottom schematic in the inset of Fig. <ref>),
it scatters the phonons. The larger the number of kinks along a chain, the larger the resistance to heat flow and thus the lower the κ.
This picture is well supported by the simulation results of an extended PE chain with varying number of kinks,
see Fig. <ref>.
To investigate the effects of kinks on κ, different PE configurations were specifically engineered <cit.>.
However, a natural system where the number of kinks and the backbone stiffness can be controlled almost at will is the bottle–brush polymers (BBP) <cit.>.
A polymer is referred to as a BBP when a linear polymer of length N_ℓ is grafted with the side chains with varying
length N_ s and grafting density ρ_ g. Here, ρ_ g is defined as the
number of side chains grafted per backbone monomer. For example, if every backbone monomer is grafted with one side chain, then ρ_ g = 1.
BBPs are of interest because of their potential in designing one–dimensional organic nano–crystals <cit.>.
In a BBP, the backbone flexural stiffness (controlling the number of kinks along a chain) is dictated by N_ s and ρ_ g <cit.>.
The heat management in the BBPs is of particular interest because their κ is dictated by two competing effects <cit.>:
(1) The presence of side chains acts as a pathway for heat leakage that effectively reduces κ.
(2) The side chains increase the backbone flexural stiffness and thus there exist fewer kinks (or defects) along the backbone,
which effectively increases κ. To investigate the extent by which these two effects control κ, recent simulations
have been performed using a generic model. The representative data is shown in Fig. <ref>(a).
It can be appreciated in Fig. <ref>(a) that κ shows a non–monotonic variation with ρ_ g, where
two regimes are clearly visible: For ρ_ g≤ 1, scattering because of the side chains reduces κ, while the
backbone stiffening via side chains increases κ for ρ_ g > 1 <cit.>. This backbone stiffening scenario is also
supported by g(ν). It can be seen in Fig. <ref>(b) that there are almost indistinguishable changes in g(ν)
for ρ_ g≤ 1. Moreover, when ρ_ g > 1 a peak becomes more prominent around ν≃ 9-10 t_∘^-1.
This is associated with the flexural stiffness that increases with increasing ρ_ g,
as revealed by the shift of this peak towards higher ν values. The full width at half maximum ν_ FWHM also
decreases with increasing ρ_ g, thus increasing the phonon lifetime τ∝ 1/ν_ FWHM (and hence κ).
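For completeness, g(ν) is commonly obtained as the Fourier transform of the monomer velocity autocorrelation function, and the lifetime estimate then follows from the width of a peak. A minimal sketch of that post-processing is given below; the damped cosine standing in for the actual velocity data, the mode frequency, and the lifetime are all illustrative choices of ours:

import numpy as np

# Synthetic velocity autocorrelation function (VACF) mimicking a single backbone mode.
dt = 0.005
t = np.arange(0.0, 50.0, dt)
nu_mode, tau_life = 9.5, 2.0                   # toy peak position (1/t_0) and lifetime (t_0)
vacf = np.cos(2 * np.pi * nu_mode * t) * np.exp(-t / tau_life)

g = np.fft.rfft(vacf).real * dt                # absorption (Lorentzian) part of the spectrum
nu = np.fft.rfftfreq(t.size, d=dt)

ipk = np.argmax(g)
above = np.where(g >= 0.5 * g[ipk])[0]         # samples above half maximum around the peak
nu_fwhm = nu[above[-1]] - nu[above[0]]
# For a Lorentzian line, tau = 1/(pi * FWHM); the estimate below should roughly recover tau_life.
print(f"peak at nu = {nu[ipk]:.2f} 1/t_0, FWHM = {nu_fwhm:.3f} 1/t_0, "
      f"1/(pi*FWHM) = {1.0 / (np.pi * nu_fwhm):.2f} t_0 (input lifetime {tau_life})")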
§.§ Molecular forests
The knock down in κ via kinks is a concept that can also be helpful in dictating the heat flow in more complex
molecular assemblies. One example is a molecular forest, where Q1–D objects are grafted perpendicularly onto a
surface, forming a two-dimensional assembly, such as forests of CNTs <cit.>, silicon nanowires <cit.>,
and/or polymers <cit.>.
These forests often exhibit intriguing and counter–intuitive physical behavior.
In this context, it has been experimentally reported that, while a single CNT has κ≥ 10^3 W/Km <cit.>,
the same CNT in a forest shows a drastic reduction in κ <cit.>.
This phenomenon is commonly referred to as the heat trap effect (HTE) in carbon nanotube (CNT) forests <cit.>.
Even though the counter–intuitive HTE phenomenon has been known for over a decade, no clear understanding of this behavior existed.
Here, the simple concepts known from soft matter physics turned out to be reasonably useful in understanding
certain aspects of a hard matter problem of complex molecular assemblies. For this purpose, generic simulations were performed <cit.>.
The key assumption in this model is that a Q1–D object is treated as a single extended polymer chain, so that
a molecular forest is represented as a polymer brush. The only input parameter in such a modelling approach is ℓ_ p.
In this context, it was readily observed that a CNT can be characterized by its bending stiffness, as measured
in terms of ℓ_ p, which increases with the CNT diameter d <cit.>. For example, a CNT of d = 1 nm has ℓ_ p = 50-60 μm.
Within this picture, a CNT forest of 2 mm height can span about 30-40 ℓ_ p
and thus host as many kinks. Note that in this simple argument we do not discuss the effect of grafting density Γ.
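Spelled out, the estimate above is pure arithmetic (numbers taken directly from the preceding sentences):

# Kink-count estimate for a CNT forest: a d ~ 1 nm CNT has ell_p of roughly 50-60 um,
# so a 2 mm tall forest spans 30-40 persistence lengths, i.e. it can host on the order
# of 30-40 kinks that scatter phonons.
forest_height = 2e-3                           # m
for ell_p in (50e-6, 60e-6):                   # persistence length, m
    print(f"ell_p = {ell_p * 1e6:.0f} um -> height / ell_p = {forest_height / ell_p:.0f}")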
The representative data and the corresponding simulation snapshot is shown in Fig. <ref>.
As expected, a chain in a forest shows a significant reduction in κ <cit.>, see the left panel in Fig. <ref>.
As discussed above, this knock down is a direct consequence of the kinks that act as scattering centers for the heat flow.
The kinks are also evident from the simulation snapshot in the right panel of Fig. <ref>.
§ THERMAL CONDUCTIVITY OF CRYSTALLINE POLYMERS
A somewhat different class from the amorphous polymers is polymers with a certain degree of crystalline order,
where the long range order facilitates the propagation of phonons that carry a heat current and thus results in
an enhanced rate of energy transfer. The typical examples include liquid–crystalline
materials <cit.>, poly–peptide sequences <cit.>,
and/or semi–crystalline polymers <cit.>.
§.§ Liquid crystalline polymers
Liquid crystals usually have κ≃ 0.3 W/Km <cit.>, i.e., similar to the commodity
polymers. However, one of the advantages of a liquid crystalline material, such as the azobenzene–based liquid crystals,
is that an azobenzene undergoes a re–entrant trans–to–cis transition when they are exposed to near–ultraviolet light <cit.>.
Such a transition also alters the molecular order in a liquid crystal and hence κ switches
between 0.1 W/Km (in cis state) and 0.3 W/Km (in trans state) <cit.>.
This can simply be viewed as a light responsive thermal switch.
When liquid crystalline polymers are cross–linked with ethylene linkers, they show very interesting
and counter–intuitive trends in κ <cit.>. For example, earlier experimental studies have reported
that κ shows a zig–zag variation with increasing N_ℓ, varying between 1.0–0.2 W/Km <cit.>.
For an even number of carbon atoms in a linker, κ always has a higher
value than the next system with a linker with an odd number of carbon atoms. This behavior is commonly known as the odd–even effect in κ,
which was initially reported in a simulation study <cit.>. Moreover, a more detailed investigation has recently been reported <cit.>.
Such an odd–even effect is also well–known in various other properties of the liquid crystalline polymers <cit.>.
While these studies gave very nice insight into the κ behavior of these complex systems,
an exact molecular level understanding of such a non–trivial odd–even effect is still somewhat lacking.
§.§ Conjugated polymers
Another polymeric system, where crystalline ordering is probably most important, is the conjugated polymers,
because they are often used under high temperature conditions. The crystalline order in such a system
is because of the π-π stacking of their backbone consisting of the aromatic structures <cit.>.
In this context, a significantly large value of κ→ 2.0 W/Km was reported in poly(3-hexylthiophene) (P3HT) <cit.>.
This κ can also be further increased by blending with the multi–wall CNTs <cit.>. The latter
study reported a non–monotonic variation with ϕ_ CNT, reaching a maximum value of about 5.0 W/Km around ϕ_ CNT≃ 30%.
§.§ Poly–peptide sequences
A natural soft matter system that shows structural order is the poly–peptide sequence.
Previous experimental results have shown that, by controlling the specific amino acid residues
along a poly–peptide sequence, one can significantly alter its degree of secondary structure d_ s.
In such a system, an enhancement of up to κ≃ 1.5 W/Km is observed in a hydrated poly–peptide <cit.>.
Note also that κ≃ 0.6 W/Km for water and hence the observed increase is a direct consequence of d_ s.
Here, however, it is important to mention that the controlled synthesis (aka precision polymerization) of a specific
poly–peptide sequence is a grand challenge and such sequences are commercially expensive <cit.>.
§.§ Semi–crystalline polymers
A more plausible alternative to poly–peptides might be the semi–crystalline synthetic commodity polymers,
such as the PVA, PLA, and PE systems <cit.>.
While κ of a semi–crystalline sample can be rather large because of the long range order,
one may infer that, if such samples are additionally cross–linked, the combination of these two effects might lead to an even greater
increase in κ than in the bare amorphous polymers. Motivated by this, a recent experimental study has
reported that a cross–linked semi–crystalline network can only achieve an increase of up to a factor of
2.5 relative to the pure PMMA sample, i.e., κ≃ 0.5 W/Km <cit.>. This rather surprising behavior was also
investigated in a simulation study, where a similar increase in κ was observed
for a critical ℓ_ cb, see Fig. <ref>(a).
A maximum in κ is only observed when ℓ_ cb is comparable to the lattice constant ℓ_ a, such that the
degree of crystallinity d_ c increases with ℓ_ cb, as revealed by the peak heights in the scattering function in Fig. <ref>(b) <cit.>.
When ℓ_ cb increases beyond a certain threshold (i.e., ℓ_ cb/ℓ_ a≥ 0.9),
on the one hand d_ c increases only slightly, while on the other hand there is a large decrease in the
bond density ρ_ cb, see Fig. <ref>(c). To summarize the data in Fig. <ref>(a),
the observed initial increase in κ for ℓ_ cb/ℓ_ a≤ 0.9 is due to the increased d_ c and
the decrease in κ for ℓ_ cb/ℓ_ a≥ 0.9 is due to the
reduced ρ_ cb. This readily suggests that a delicate combination of d_ c and ρ_ cb controls κ <cit.>.
§ THERMAL CONDUCTIVITY OF POLYELECTROLYTES
In the preceding sections, a short overview of the κ behavior in neutral polymers is presented.
Possible ideas are also discussed that can be used to tune κ in amorphous systems by macromolecular
engineering. However, there are systems where electrostatic interaction also plays an important role,
examples include but are not limited to, organic (soft) electronics <cit.>, bio–inspired materials <cit.>,
and flexible chips <cit.>.
In these systems it is always desirable to attain a large κ so that the material can act as a heat sink and thus improve device
performance/durability. Because of this need, extensive efforts have been devoted to studying the κ behavior
in electrostatically modified polymers <cit.>.
One of the classical examples of polyelectrolytes is the modified PAA with varying degree of ionization.
Here, a recent experimental study has investigated the effect of pH on ionized PAA, which reported κ→ 1.2 W/Km <cit.>.
This is an enhancement of about 3–4 times relative to the neutral amorphous PAA, where κ≃ 0.20-0.37 W/Km <cit.>.
This enhancement was also coupled with a significant increase in the material stiffness E, i.e., consistent with the
predictions of the MTCM that κ∝ E <cit.>. The increased E arises predominantly because the
electrostatic interaction stretches a PAA chain and thus makes the bonded interactions more dominant than in the case
of an uncharged PAA system. This characteristic extension of an ionized chain is also consistent with the earlier
studies investigating the effective stretching and ℓ_ p of polyelectrolytes <cit.>.
The κ behavior in the polyelectrolytes suggests that the influence of the electrostatic interaction is rather indirect, i.e.,
it helps stretch a chain and thus the bonded interactions become more dominant, which increases κ.
Speaking in favor of this picture, electrostatics alone does not influence κ,
as in the case of ionic liquids consisting of small molecules <cit.>, where κ≃ 0.2 W/Km <cit.>.
§ THERMAL TRANSPORT IN SMART RESPONSIVE POLYMERS
The backbone structure of the commodity polymers is commonly dominated by C–C covalent bonds, see Fig. <ref>.
Such a bond is extremely strong, with a strength of about 80k_ BT, and thus these bonds effectively live forever under unperturbed
environmental conditions. This creates severe ecological problems, which get even worse when dealing with water
insoluble polymers, as shown by a few examples in the top panel of Fig. <ref>.
This is one of the main reasons why the recent interest has been diverted to water soluble (H–bonded) polymers,
referred to as the “smart" polymers <cit.>, as shown by a few examples in the bottom panel of Fig. <ref>.
Additionally, it is also preferred if a polymer can be bio–degradable and/or pH responsive, such as the acetal–linked
copolymers <cit.>.
A polymer is referred to as “smart” or responsive when a small change in an external stimulus can significantly alter
its structure, function, and stability. These stimuli can be temperature <cit.>,
pressure <cit.>, pH <cit.>, light <cit.>, and/or cosolvent <cit.>.
One common example of smart polymer is PNIPAM that shows a coil–to–globule transition in water around T_ℓ≃ 305 K (or 32^∘ C) <cit.>.
This is a typical lower critical solution (LCST) behavior <cit.>
driven by the solvent entropy <cit.>.
The fast conformational switching of PNIPAM in water may be extremely useful in thermal applications.
Thermal switching is one such application that controls heat flow in various systems, including, but not limited to,
thermoelectric conversion, energy storage, space technology, and sensing <cit.>.
In this context, the conventional thermal switches often suffer from their slow transition rates and thus also have poor performance.
Recently there has been considerable interest in studying κ in the smart polymer with a goal to attain a fast switching
in κ <cit.>.
In particular, experimental studies in the aqueous solutions of PNIPAM <cit.> and PNIPAM–based hydrogels <cit.>
have shown that their κ behavior follow the same trend as the LCST transition around T_ℓ≃ 305 K
(or 32^∘ C) <cit.>, see Fig. <ref>.
It is important to note that κ increases with T in liquids and in amorphous materials <cit.>
because of increased vibrations.
This behavior is also visible in pure water, where a weak increase in κ is observed with T, see the black data set in Fig. <ref>.
Moreover, the sudden drop in κ around T ≥ 303 K (or 30^∘ C) for the PNIPAM concentration above 5 × 10^-3 g/mL
is predominantly due to the coil–to–globule transition of PNIPAM. This drop is likely due to the loss in the number of hydrogen bonds needed
to stabilize a PNIPAM configuration and the resultant breakage of the water caging around PNIPAM <cit.>.
These broken water–PNIPAM H-bonds effectively create weak interfaces that act as a resistance to heat flow.
Contrary to these results, another experimental study has reported an opposite trend for the concentrated PNIPAM solutions,
i.e., κ increases above T_ cloud <cit.>. This is simply because a chain under
a high concentration does not collapse into a globule; instead it remains rather expanded, surrounded by the other neighboring chains,
and hence the κ behavior becomes dominated by the energy transfer between the bonded monomers.
Furthermore, these distinct results highlight that the polymer concentration, chain size at a given concentration,
relative interaction/coordination (monomer–monomer, monomer–solvent, and solvent–solvent), water tetrahedrality around
a PNIPAM, and a delicate balance between these effects play a key role in dictating the κ behavior in polymer solutions.
A detailed understanding of such effects on the κ behavior remains a rather open question.
Lastly, it might also be important to highlight another (possible) system for the thermal switching application.
In this context, elastin–like poly–peptides (ELP) are a modern class of biomimetic polymers that also shows LCST transition <cit.>.
One important aspect of the ELPs is their proline isomerization (ProI): the proline residues can adopt either a cis or a trans conformation, which
dictates the relative chain conformations <cit.>. The free energy barrier of such a cis–to–trans transition is
about 30k_ BT, while the free energy difference between these two states is only about 2k_ B T, and thus the conformation can be switched via light.
Given the above discussion, ELPs with ProI may also be alternatively used as a thermal switch, similar to that in the liquid crystalline materials <cit.>.
§ CONCLUDING REMARKS
Ever since the seminal publication of Hermann Staudinger <cit.>, the field of polymer science has traversed a long journey
with many new interesting developments for the future design of advanced functional materials. In the constant quest to find
new polymeric materials with improved performance, significant attention has been devoted to the thermal transport
of polymers over the last 2–3 decades, especially because polymeric plastics usually have a very low thermal conductivity coefficient κ, which is
typically a few orders of magnitude smaller than that of common crystals. Here, one of the grand challenges is to attain a predictive tuning
of κ via macromolecular engineering. In this context, experiments have investigated a plethora of systems that
include– linear polymers, symmetric and asymmetric polymer blends, polymer composites, cross–linked networks, polymer fibers,
crystalline polymers and electrostatically modified polymers, to name a few. Motivated by these studies, computational
studies have also been conducted to establish a structure–property relationship in polymers within the context of their κ behavior.
While it is certainly rather difficult to address all aspects of a huge field of research within one short overview,
in this work an attempt has been made to highlight some of the latest developments in the field of heat conductivity
in polymers and polymeric materials. In particular, computational results are discussed within the context of the complementary
experiments with a goal to establish a detailed microscopic understanding that dictates macroscopic polymer properties.
Available theoretical models are also discussed that may pave the way to guide the future experimental and/or simulation studies.
Some discussions are also presented showing that the simple concepts known from basic polymer (soft matter) science <cit.> can be
used to understand a complex problem from the opposite class of hard matter physics <cit.>. This further highlights why polymer science
is such a vibrant and active field of research, one that is not restricted to the soft matter community. Rather, it reaches
across a wide range of interdisciplinary fields.
Acknowledgement: The contents presented in this review have greatly benefited from the discussions with many colleagues.
In particular, the development of two key concepts presented here would not have been possible without
very fruitful collaborations with Martin Müser and Marcus Müller, whom I take this opportunity to gratefully acknowledge.
This draft is a contribution towards a special issue to celebrate the 40th anniversary of Max Planck Institute for Polymer Research.
I take this opportunity to gratefully acknowledge very fruitful continual collaborations with many MPIP colleagues,
especially Kurt Kremer for numerous stimulating discussions that led to the foundation of my works in MPIP and also after.
I further thank Kyle Monkman for useful comments on this draft.
Conflict of interest: The author declares no conflicting financial interest.
Copyright permission statement: Copyright permissions are obtained for all figures used in this review.
ieeetr
|
http://arxiv.org/abs/2409.03680v1 | 20240905163250 | Experimental evidence that a photon can spend a negative amount of time in an atom cloud | [
"Daniela Angulo",
"Kyle Thompson",
"Vida-Michelle Nixon",
"Andy Jiao",
"Howard M. Wiseman",
"Aephraim M. Steinberg"
] | quant-ph | [
"quant-ph"
] |
A New First-Order Meta-Learning Algorithm
with Convergence Guarantees
El Mahdi Chayti
Machine Learning and Optimization Laboratory (MLO), EPFL
Martin Jaggi
Machine Learning and Optimization Laboratory (MLO), EPFL
September 9, 2024
================================================================================================================================================================
empty
§ INTRODUCTION
One of the foundational scenarios in the field of light-matter interaction is the propagation of light through a dielectric medium. Although this has been extensively studied and, in many ways, thoroughly understood (see, for example, <cit.>), it has also sparked controversy in specific areas such as the definition of the speed of a propagating electromagnetic signal <cit.> and the mechanisms governing energy transport in dispersive media <cit.>. It is also fundamental to nonlinear optics <cit.> and to technologies such as quantum memories <cit.>. Examining energy transport from a quantum perspective gives rise to another question: How does an individual photon spend its time while propagating through the medium? In the case of light far from resonance, a commonly repeated explanation relies on the uncertainty principle <cit.>. Even when atoms are driven by light with a very large detuning δ, goes the story, they can still experience excitation for a brief period, so long as this is of order 1/δ. This extra time spent as a non-propagating excitation would be associated with the group delay (viz. the familiar treatment of slow light in terms of polaritons<cit.>). While this 1/δ dependence captures the behavior of the index of refraction far from resonance, one would probably not expect this association to hold for small detunings, where the group delay is well-known to become negative.
When atoms are illuminated by a pulse of resonant light—which we will refer to as the `signal' beam—they become polarized and have some probability of being found in the excited state at any given time. For a single-photon input, the number of excited atoms is represented by the operator N̂_e; i.e., ⟨N̂_e(t)⟩ is the single-atom excitation probability multiplied by the total number of atoms, is always between zero and one, and can be thought of as a probability. We define the average time that the atoms spend in the excited state, or atomic excitation time (τ_0), as the time integral of the average number of atoms in the excited state:
τ_0 ≡∫⟨N̂_e(t)⟩ dt .
This expectation value, ⟨N̂_e(t)⟩, can be measured using a separate `probe' beam coupled to the atoms through a Kerr nonlinear interaction based on atomic saturation: the instantaneous phase shift on the probe beam is proportional to ⟨N̂_e(t)⟩.
While τ_0 is straightforward to calculate from semiclassical theory—for a single incident photon on resonance it takes the value τ_0 = P_S/Γ = (1 - e^-OD)/Γ, i.e., the atomic lifetime multiplied by the probability P_S of the photon being scattered—in this work, we are interested in a different (intrinsically quantum) question: if a single resonant photon incident on a medium is transmitted, how much time do the atoms spend in the excited state? One might imagine that, unlike scattered photons (mostly absorbed within the first optical decay length), transmitted photons interact with the entire sample and hence cause more integrated excitation; alternatively, one might think that excited atoms are those that lead to loss via spontaneous emission, so that post-selecting on only transmitted photons would result in little or no atomic excitation.
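To fix orders of magnitude for the parameters reported later in the text (τ_sp ≈ 26 ns and resonant optical depths of roughly 2–4), the semiclassical value of τ_0 evaluates to a bit less than one atomic lifetime. A two-line check (numbers taken from the text):

import numpy as np

# Semiclassical excitation time per incident resonant photon:
# tau_0 = P_scatter / Gamma = (1 - exp(-OD)) * tau_sp, with tau_sp = 1/Gamma.
tau_sp = 26e-9                                 # Rb D2 excited-state lifetime, s
for OD in (2.0, 3.0, 4.0):                     # resonant optical depths used in the text
    print(f"OD = {OD:.0f}: tau_0 = {(1 - np.exp(-OD)) * tau_sp * 1e9:.1f} ns")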
Our group previously carried out a measurement of this excitation time, <cit.> and found that in a particular parameter regime, a transmitted photon spent nearly as much time as an atomic excitation as the average incident photon. The implication that a large fraction of the excited atoms re-emitted in the forward direction was attributed to coherent forward emission originating from the π phase-flip that a broadband pulse's envelope picks up when propagating through an optically thick medium <cit.>. However, such a semiclassical picture cannot make a quantitative prediction as to the phase shift imprinted on our probe when the incident photon is transmitted, because there is no semiclassical equivalent of that postselection; the probe phase can depend on the intensity of the signal beam but not on whether a particular photon is transmitted or not.
Prompted by that first experiment, our group therefore employed quantum trajectory theory <cit.> and the weak-value formalism <cit.> to calculate<cit.> the time that a resonant photon spends as an atomic excitation while being transmitted through a medium. Those calculations predicted that this time is always equal to the group delay of the transmitted photon, even when that quantity is negative, and that this time depends only on the optical depth of the medium and the spectral content of the pulse. Operationally, this means that the nonlinear phase shift written on our probe by a transmitted photon should change sign when we move into the negative-delay regime (in our earlier work, the bandwidth and optical depth were such that the net delay was positive).
In the experiment reported here, we test this remarkable theoretical prediction for a wide range of parameters, including regimes in which the group delay is negative.
§ DESCRIPTION OF THE MEASUREMENT
The Kerr interaction used to probe N̂_e involves two beams of light: a) the `signal' beam, pulsed and resonant with the atomic transition, responsible for causing the excitation time; and b) the probe beam, continuous wave (CW) and off-resonant, which measures the degree of excitation caused by the signal (see Figure <ref>). While the signal pulse propagates through the medium, it weakly saturates the atoms, and
the probe picks up a phase shift proportional to this saturation.
As previously stated, our focus is on the effect of a transmitted photon. That is, we post-select on the event of transmission. Since our Kerr nonlinearity is very weak and we are interested in the behavior of a system, signal photon, defined by the preparation state and later post-selection, it follows that the quantity we measure can be calculated using the weak-value formalism <cit.>, suitably generalized <cit.>.
Specifically, the atomic excitation time caused by a transmitted photon, τ_T, is given by the time integral of the real part of the weak value of N̂_e <cit.>
, with the initial state specified by the preparation of the input photon and the final state (post-selection) being that of the photon having traversed the medium without scattering. Note that, unlike expectation values, weak values are not restricted to be within the eigenvalue spectrum of the operator being measured, and can even be complex <cit.>.
One might expect that, in order to study the behaviour of an individual transmitted photon, we would need to work with single-photon signal pulses at the input.
However, in our experiment, we leverage the simplicity and rapid rates of coherent-state pulses, combined with the post-selection technique first employed in <cit.>. In these experiments using coherent states as inputs, our group exploited the fact that upon single-photon detection, the inferred average number of photons present in a medium increases by one; one can thus discern the effect of a single photon on a probe by calculating the difference between that probe's phase in cases with and without a detection event. (This fact was recently proven rigorously, and extended, in <cit.>.)
This ensures that our measurement of τ_T using coherent states should match the results of one being performed with single photons. In our experiment, subtracting the probe phase in cases without a detector click ϕ_NC(t) from cases with a click ϕ_C(t) allows us to determine the phase shift imparted by a transmitted photon, denoted as ϕ_T(t)=ϕ_C(t)-ϕ_NC(t), whose integral is proportional to τ_T.
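Operationally, ϕ_T(t) is nothing more than the difference between two conditional averages of the recorded probe-phase traces. The toy bookkeeping below makes that explicit on synthetic data; the reduced noise level, the buried 15 μrad "per transmitted photon" signature, and all array shapes are illustrative choices (the real experiment has ~120 mrad phase noise and therefore needs vastly more statistics):

import numpy as np

rng = np.random.default_rng(1)
n_shots, n_samples = 200_000, 36               # shots, 16 ns samples per 576 ns shot
phi = rng.normal(0.0, 1e-3, size=(n_shots, n_samples))   # toy phase noise (reduced for speed)
click = rng.random(n_shots) < 0.2              # ~20% SPCM firing probability, as in the text

t = np.arange(n_samples)
signature = 15e-6 * np.exp(-0.5 * ((t - 18) / 3.0) ** 2)  # ~15 urad peak added to click shots
phi[click] += signature

# Post-selection: phi_T(t) = <phi | click>(t) - <phi | no click>(t).
phi_T = phi[click].mean(axis=0) - phi[~click].mean(axis=0)
stderr = np.sqrt(phi[click].var(axis=0) / click.sum() + phi[~click].var(axis=0) / (~click).sum())
print(f"recovered peak: {phi_T.max() * 1e6:.1f} urad (statistical error ~ {stderr.max() * 1e6:.1f} urad)")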
The atomic sample is a cold cloud of ^85Rb at a temperature of 60-70 μK, prepared in a magneto-optical trap formed using two beams of light and a magnetic field gradient of 10 G/cm. The `trap' beam is -25 MHz detuned from the cycling transition of the D2 line (F=3 to F'=4), while the `repump' beam is resonant with F=2 to F'=3. The probe and signal counter-propagate, overlapped with each other, and are focused to a waist of 25 μm inside the atomic cloud of around 1 mm of length. Both the signal and probe address the same atomic transition F=3 to F'=4 of the D2 line, with a lifetime of τ_ sp≈26 ns. The signal is a Gaussian pulse centered on resonance, while the CW probe is detuned by Δ≈-20 MHz (see Figure <ref>). Additionally, we introduce a sideband at +100 MHz relative to the carrier (probe), with the carrier power ranging from 6-9 nW (about 15% of resonant saturation intensity) and the sideband being three times stronger. This sideband serves to establish a beat-note interferometer and acts as the phase reference, remaining unaffected by the presence of atoms. We found that setting the probe carrier as far from resonance as the experimental capabilities allowed was the optimal approach for conducting the experiment (see the Supplementary information).
After the interaction inside the atoms has taken place, each beam is collected at opposite sides of the setup. The probe is coupled into a multi-mode fiber and sent to an avalanche photodiode (APD) operating in the linear gain region. The APD's output is subjected to IQ demodulation to extract the probe's phase and amplitude. The signal beam is optically attenuated to ensure that the subsequent power coupled into a single-mode fiber and transmitted to a single-photon counting module (SPCM) prevents multiphoton counts. Furthermore, to eliminate the possibility of spurious correlations driven by the electronic pulse from the SPCM (TTL), we introduce an optical delay of approximately 480 ns before the SPCM using a 100 m-long optical fiber. We use signal pulses with an incident mean photon number of approximately |α|^2≈ 100. The attenuation after the atoms was adjusted to maintain an SPCM firing probability of ∼ 0.2 during a time window of 576 ns, which we refer to as a `shot' and contains a single signal pulse. The choice of a 576 ns duration for each shot provides sufficient spacing between the pulses, ensuring that the dynamics of each pulse remain independent of the others.
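The phase extraction itself is standard heterodyne detection: the APD signal contains a beat note at the 100 MHz carrier–sideband splitting, and demodulating it into I and Q quadratures and low-pass filtering yields the probe phase relative to the sideband reference. A self-contained numerical sketch of that step follows; the sampling rate, the crude moving-average filter, and the 10 mrad toy transient are our own illustrative choices, not the experiment's actual signal chain:

import numpy as np

fs, f_beat = 500e6, 100e6                      # assumed sample rate and beat frequency, Hz
t = np.arange(0.0, 2e-6, 1.0 / fs)
true_phase = 10e-3 * np.exp(-0.5 * ((t - 1e-6) / 50e-9) ** 2)   # toy phase transient, rad
beat = np.cos(2 * np.pi * f_beat * t + true_phase)

# IQ demodulation: mix with cos/sin at f_beat, then low-pass (50 ns boxcar; it exactly
# nulls the 2*f_beat component at this sample rate).
i_raw = beat * np.cos(2 * np.pi * f_beat * t)
q_raw = -beat * np.sin(2 * np.pi * f_beat * t)
kernel = np.ones(25) / 25
I = np.convolve(i_raw, kernel, mode="same")
Q = np.convolve(q_raw, kernel, mode="same")

phase = np.arctan2(Q, I)                       # recovered probe phase
print(f"recovered peak phase: {phase.max() * 1e3:.2f} mrad (input peak {true_phase.max() * 1e3:.2f} mrad)")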
Each measurement cycle begins with a 3-4 ms period dedicated to trapping and cooling the atoms, during which the magnetic field is active for the first 2 ms. Subsequently, the trap beam is turned off, and to prevent atoms from falling to a dark ground state, the repump beam remains active during data acquisition, which begins immediately. Samples are captured every 16 ns.
Each trace of data starts with two frequency scans of the probe: one with atoms present and the other without atoms. These scans, lasting 300 µs each, serve to calibrate the resonant optical depth (OD) of the medium, which matches the one experienced by the signal. We then send a sequence of 1500 signal pulses, one every 576 ns, totalling 864 µs of data acquisition, for a duty cycle of ∼25%. During this time (see Figure <ref>), we record both the phase and amplitude of the probe, in addition to monitoring the TTL signal from the SPCM.
We anticipated that the phase shift imparted by a transmitted photon would be on the same order of magnitude as the phase shift imparted by the average photon (without post-selection) ϕ_0(t), whose peak value ranges from 10 to 20 μrad. This experimental value is consistent with an estimate given by the inverse of the number of atomic cross-sections addressed by our beams, which is approximately 10^4. Detecting such a minute effect presents significant challenges. Our phase noise is ∼120 mrad, which is four times greater than the quantum noise expected from 5 nW of probe power. This excess noise stems from the noise equivalent power of our detector, which is 100 fW/√(Hz), and a measurement bandwidth of 25 MHz. As a result, our measurement required tens of millions of atom cycles for a single set of parameters, translating to about ten hours of data collection.
§ OBSERVATIONS AND ANALYSIS
We investigated resonant optical depths (OD) of around 2 and 4 (the legends in Figure <ref> contain the exact values), for three pulses with root mean square (rms) durations of 10, 18, and 27 ns. We also took data for a 36 ns rms duration, but the highest OD we could use there was 3, because background light becomes appreciable for more narrowband pulses (see also the Supplementary information). These choices were made to effectively explore the parameter space while adhering to the technical constraints of our apparatus.
After synchronizing the measurement of the probe's phase with the corresponding signal detection or lack thereof, accounting for the aforementioned optical delay, we average the phase shift for the shots where the SPCM fired (ϕ_C(t)) and where it did not (ϕ_NC(t)) and then subtract these two averages. A shot is tagged as `click' if the SPCM fired anywhere in the 576 ns time window. This procedure is repeated for all atom cycles, resulting in a trace of 36 datapoints with ϕ_T(t) and the corresponding covariance matrix. The results are depicted in Figure <ref>. The blue dots and green squares represent the data for ϕ_T(t) and ϕ_0(t), respectively, with error bars for ϕ_T(t) indicating the standard error of the mean. The orange solid line and the dashed pink line correspond to the theoretical results for the post-selected and average cross-phase shifts, respectively. These predictions were calculated using the theory in reference <cit.>. These calculations only require the rms duration of the pulse and the average resonant OD seen by the signal during the data runs as inputs.
The theory curves, originally given in arbitrary units and on an arbitrary time axis, are height-scaled and time-shifted to facilitate comparison with the experimental results (see the Methods section for a full description of the time shifting). The scaling factor, applied to both theory results for ϕ_0(t) and ϕ_T(t), is chosen such that the peak of the theory for ϕ_0(t) matches the peak of the experimental ϕ_0(t). The theory presented in Figure <ref> has been adjusted accordingly, using a different scaling factor for each set of parameters while using the same time-shift for all of them. We observe that, for most parameters, the relative size of ϕ_T(t) compared to ϕ_0(t) measured in our experiment is qualitatively consistent with the theoretical predictions. However, for the case of 18 ns and OD∼2 (see Figure <ref>c), it is evident that the amplitude of the feature exceeds the predicted value.
All of our data is subjected to low-pass filtering at 25 MHz to reduce noise in the phase measurement. This filtering smooths the sharp features, especially noticeable in the 10 ns data, mildly altering the original shape. Despite the fact that the theory curves are not filtered, we still observe good qualitative agreement between the experimental data and the theoretical curves in the horizontal direction. This indicates that, within the error bars, the peaks and troughs of the data and theory coincide in their timing.
We proceed to integrate the traces for ϕ_T(t) and ϕ_0(t) to determine the ratio of the atomic excitation times due to a transmitted photon (τ_T) and an average photon (τ_0) (see Figure <ref>).
Within each 576 ns shot, the effect of the signal pulse is confined to the central 100–150 ns and the rest of the data is used to determine the background. We use the theory trace |ϕ_T(t)| to select a range of integration which keeps the statistical error bars low without introducing a significant systematic error by discarding too much of the dynamics – to ensure consistency, we implement this by using the arbitrary cutoff at the point where the
tails of the curve reach 30% of the peak value.
The black solid vertical lines in Fig <ref> depict the boundaries of the integration intervals.
Figure <ref> shows the ratio of τ_T/τ_0 for four different pulse durations and two ODs distinguished by their respective markers, except for 36 ns, where only one data point was taken. The theoretical results for the integrals over the regions shown in Figure <ref> are represented as hollowed squares, color-coded according to their corresponding OD. Error bars were found using the covariance matrix to account for correlations in ϕ_T(t) as outlined in the Supplementary information. For completeness, we also include solid lines to show the theoretical predictions integrating over all time.
These results demonstrate a robust overall agreement between theory and experiment. However, there are specific cases that merit discussion. For instance, at 27 ns, although Figures <ref>e and <ref>f show good agreement for ϕ_T(t),τ_T exhibits a significantly more negative value than expected, and neither of the two data points aligns with their respective theoretical predictions. In contrast, the data obtained at 18 ns and OD∼2 is consistent with the theoretical value, despite a noticeable discrepancy in ϕ_T(t), as shown in Figure <ref>c.
The main sources of discrepancies between the theory predictions and experimental results are spurious correlations between the probe phase and the signal transmission. In other words, if the frequency of the probe is changing rapidly enough with respect to the repetition rate of the pulses, and in a way that is correlated with the intensity of the signal, one can mistake that fluctuation for the effect of a transmitted photon. Given the sensitivity of this measurement, we conducted several systematic checks for these types of correlations, as detailed in the Supplementary information.
§ DISCUSSION
The measurement conducted by Sinclair et al. <cit.> was limited to one set of parameters due to several sources of noise. To explore a broader range of parameters beyond this previous measurement, we made significant progress in mitigating sources of spurious correlations, such as addressing the presence of ground loops. Additionally, we determined that detuning the probe beam further from the atomic resonance—specifically three times further than in the previous experiment—provides a cleaner background. Sinclair's experiment, using a pulse of 10 ns rms duration and an OD of 4, yielded a ratio of τ_T/τ_0=0.77± 0.16. In our study, we obtained a ratio of τ_T/τ_0=0.54± 0.28 for the same parameters, which is in agreement with the theoretical value of τ_T/τ_0=0.45 from<cit.>. It is notable that our approach to obtain τ_T/τ_0 also differs from Sinclair's in that we integrated over a region with multiple data points, rather than relying solely on the peak value. However, this resulted in greater statistical error in our measurement; despite us collecting the same amount of data, ϕ_0(t) is a much narrower feature than the integration window. This decision to integrate over a broad time window was informed by insights from theoretical work, knowing that the shape of ϕ_T(t) can significantly differ from ϕ_0(t). Despite the larger error bar, our result is more reliable as it considers more data that provides valuable information.
The results depicted in Figure <ref> undeniably raise the question: what does it mean for an atom to be excited for a negative time, or, equivalently, for a photon to spend a negative amount of time as an atomic excitation?
The theory work in <cit.> shows that the atomic excitation time due to a transmitted photon is equal to the group delay given a single photon input. Additionally, it provides an illustrative explanation of how this time can be negative, discussing a similar system in which an anomalous dwell time arises from quantum interference in a cavity.
In our experiment, we use a coherent state input, relying on the proof in <cit.> to extract the effect of a single photon, and devise a probe to measure the time defined in Equation <ref>. The theory in <cit.> predicts that such a measurement should be equal to the group delay. Here, we present an intuitive argument that helps explain this fact for the particular measurement we perform.
It turns out that it is also possible to leave aside calculations of the weak value of “excitation time” observables entirely, and make a direct prediction for the correlation between the probe phase and the number of transmitted signal photons (which is what we measure experimentally), at least in the limit of single-mode signal and probe beams.
The state of the signal and probe after the interaction has taken place is given by
|ψ⟩=∑ |c_mn|e^iφ_mn|m⟩_p|n⟩_s.
Here, the subscripts p and s denote the probe and signal modes, respectively.
By definition, ϕ_T represents the change in the optical phase of the transmitted probe ϕ_PT per transmitted signal photon n. Using equation <ref>, it can be expressed heuristically as
ϕ_T≡∂ϕ_PT/∂ n=∂ (∂φ_mn/∂ m)/∂ n.
where we have used the fact that the optical phase of the probe is related to the variation of the phase φ_mn of the state with photon number in the probe mode, which we treat as a partial derivative.
The symmetry of second derivatives allows us to interchange the order of partial derivatives, yielding
ϕ_T=∂ (∂φ_mn/∂ n)/∂ m=∂ϕ_ST/∂ m.
This demonstrates that a phase shift is also induced on the transmitted signal, denoted as ϕ_ST, and its change with respect to the number of transmitted probe photons m corresponds to ϕ_T. The origin of this phase shift on the signal is the AC Stark shift of the atomic levels caused by the far-detuned probe.
Assuming that the probe undergoes negligible absorption, the previous statements can be summarized by
ϕ_T=∂ϕ_PT/∂ n=∂ϕ_ST/∂ m
The latter term in Equation <ref> can be further expressed, using the chain rule, as
∂ϕ_ST/∂ m=∂ϕ_ST/∂ω_0∂ω_0/∂ m=-τ_g∂ω_0/∂ m.
Here, τ_g is the group delay for narrowband light and corresponds to the derivative of the spectral phase acquired by the signal beam with respect to its frequency.
The previous results can be summarized by
ϕ_T=∂ϕ_PT/∂ n=-τ_g∂ω_0/∂ m.
This indicates that the phase shift on the probe induced by a transmitted photon of the signal ϕ_T, as measured by our experiment, is proportional to the group delay of the signal τ_g with a scaling factor ∂ω_0/∂ m, which corresponds to the AC Stark shift induced by the probe (refer to the Supplementary information for further details of this calculation). Equation <ref> shares similarities with the expressions for anomalous drag found in <cit.>, suggesting that the group delay governs other effects in regimes where its meaning may be less apparent.
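The only formal ingredient in the argument above is the symmetry of the second derivatives of the joint phase φ_mn. As a sanity check of the bookkeeping, one can pick an explicit (toy) phase function of the two photon numbers and verify that the two orders of differentiation agree; the particular functional form below is ours and serves illustration only:

import sympy as sp

m, n = sp.symbols("m n", positive=True)        # probe and signal photon numbers
chi, a, b = sp.symbols("chi a b", real=True)
# Toy joint phase: a Kerr-type cross term plus separate single-beam phases (illustrative).
phi_mn = chi * m * n / (1 + sp.Rational(1, 100) * m) + a * m + b * n

phi_T_probe_side = sp.diff(sp.diff(phi_mn, m), n)   # d/dn of the probe phase dphi/dm
phi_T_signal_side = sp.diff(sp.diff(phi_mn, n), m)  # d/dm of the signal phase dphi/dn
print(sp.simplify(phi_T_probe_side - phi_T_signal_side))   # -> 0, the two orders agree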
While it is widely known that the group delay can take on negative values, associated with the peak of the transmitted portion of a pulse appearing at times which may indicate superluminal or negative group velocities, it is commonly argued that this quantity does not correspond to the time anything actually “spends” in the medium, but merely to the happenstance of when interference is predominantly constructive.
Our observations, however, show that the group delay is a physically meaningful quantity. It not only provides the location of the centre of a transmitted pulse but also correctly describes the magnitude – and sign! – of the effect transmitted photons have on other systems they interact with.
§ METHODS
§.§ Phase shift calibration
Before each data run, we conduct a calibration procedure to determine the phase shift imparted by an average input photon (ϕ_0(t)), specific to the chosen set of parameters (OD and pulse duration). This value serves to assess the overlap between the probe and signal inside the atoms and ensures consistency in experimental conditions from day to day. The Gaussian pulses corresponding to the signal are generated by sending the corresponding waveform to an acousto-optical modulator that pulses the light. We take multiple data points for the peak phase shift of the probe as a function of the number of photons in the signal pulse (while keeping the power below 2000 photons, where saturation occurs). Subsequently, we fit the data to a straight line and extract the slope, which corresponds to the peak value of ϕ_0(t).
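A minimal sketch of this calibration fit on synthetic points (the assumed 15 μrad/photon slope and the noise level are placeholders; only the 2000-photon saturation cutoff comes from the text):

import numpy as np

rng = np.random.default_rng(3)
# Peak probe phase shift versus mean photon number per signal pulse, below saturation.
n_photons = np.array([100, 250, 500, 750, 1000, 1500, 2000], dtype=float)
phi_per_photon = 15e-6                          # assumed "true" slope, rad/photon
peak_phase = phi_per_photon * n_photons + rng.normal(0.0, 2e-3, n_photons.size)

slope, intercept = np.polyfit(n_photons, peak_phase, 1)   # straight-line fit
print(f"calibrated peak phi_0 per photon: {slope * 1e6:.1f} urad")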
§.§ Data analysis and error bars
The data analysis begins by synchronizing the phase of the probe with the corresponding signal detection or lack thereof, while considering the prior delay of 400 ns introduced to prevent unwanted correlations. For each atom cycle, we obtain a trace for ϕ_T(t) by averaging the phase shift ϕ(t) for the shots where the SPCM fired (ϕ_C(t)) and where it did not (ϕ_NC(t)) and subtracting these two. Additionally, we compute the outer product of ϕ_T(t) to construct the covariance matrix. We repeat this procedure for all the atom cycles, resulting in a trace with ϕ_T(t) and the corresponding covariance matrix, denoted as M. This matrix M contains the information about correlations between any pair of elements of ϕ_T(t). The error bars depicted in <ref> correspond to the square root of the diagonal elements of M, representing the standard error of the mean. To integrate the traces from <ref> and obtain the values of τ_T/τ_0 shown in <ref>, we use the trapezoidal rule. This integration can be seen as a multivariate function f(x_i,…,x_j), where the indices i,…,j run over the interval of integration as specified in <ref> in the main text. We calculate the Jacobian defined as 𝐉_𝐢=∂ f/∂ x_i and find the variance of the integral as σ^2=𝐉^𝐓 𝐌_𝐑 𝐉 (the error bar being σ), where 𝐉^𝐓 is the transpose of the Jacobian and 𝐌_𝐑 is the covariance matrix restricted to the points inside the interval of integration.
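Because the trapezoidal rule is linear in the data points, its Jacobian is simply the vector of trapezoid weights, and the error propagation reduces to a quadratic form with the covariance matrix. A compact sketch on synthetic inputs (the 36-sample shape and 16 ns spacing match the shots described above; the trace, covariance model, and integration window are placeholders):

import numpy as np

rng = np.random.default_rng(4)
dt, n = 16e-9, 36                              # sample spacing within a shot (s), samples per shot
phi_T = rng.normal(0.0, 5e-6, n)               # stand-in for the averaged phi_T(t) trace, rad

# Toy covariance matrix of the averaged trace: diagonal plus weak nearby-sample correlations.
sigma = 5e-6
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
M = sigma**2 * (0.9 * np.eye(n) + 0.1 * np.exp(-lags / 3.0))

lo, hi = 12, 25                                # integration window (mimics the 30%-of-peak cutoff)
w = np.full(hi - lo, dt)                       # trapezoid weights on a uniform grid
w[0] = w[-1] = dt / 2

integral = float(w @ phi_T[lo:hi])             # quantity proportional to tau_T
var = float(w @ M[lo:hi, lo:hi] @ w)           # sigma^2 = J^T M_R J with J = w
print(f"integral = {integral:.3e}, one-sigma error = {np.sqrt(var):.3e}")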
§.§ Data and theory comparison
The theory curves are output on a time axis determined by the center of the input pulse. However, determining the center of the signal pulse in the experiment presented challenges for us. To align the time axes, we used ϕ_0(t) for the most narrowband set of parameters, as for this case, ϕ_0(t) closely follows and resembles the shape of the Gaussian input pulse. In our experiment, this occurs at 36 ns (OD ∼ 3). We obtained the experimental and theoretical values of ϕ_0(t) and fitted them to Gaussian profiles. Subsequently, we calculated the time difference between the centers of the fitted Gaussians, which was found to be 252 ns. To match the time axis of the theory, we adjusted the experimental values of both ϕ_0(t) and ϕ_T(t) by subtracting this time difference from the time variable. Consistently applying this adjustment across all data points, we created the plots shown in Figure <ref>.
The time alignment procedure solely relies on the average phase shifts and assumes a Gaussian lineshape for ϕ_0(t) across all pulse lengths, with negligible dispersion from elements other than the atomic medium. The assumption of Gaussianity in ϕ_0(t) breaks down when the pulse length goes well below τ_ sp, which includes, in this case, the data points taken at 10 ns; the shape of ϕ_0(t) is closer to an error function for the first half, followed by an exponential decay characterized by τ_ sp <cit.>.
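A sketch of the alignment step described above—fitting both the measured and the theoretical ϕ_0(t) to Gaussians and shifting the experimental time axis by the difference of the fitted centres—is given below. All traces and numbers are synthetic (the 252 ns value quoted above is specific to the actual data and is not reproduced here); units are ns and μrad so the fit parameters are well scaled:

import numpy as np
from scipy.optimize import curve_fit

def gauss(t, amp, t0, sig):
    return amp * np.exp(-0.5 * ((t - t0) / sig) ** 2)

t_exp = np.arange(0.0, 576.0, 16.0)            # experimental time axis within a shot, ns
t_th = np.arange(-200.0, 200.0, 4.0)           # theory time axis, centred on the input pulse, ns

rng = np.random.default_rng(5)
phi0_exp = gauss(t_exp, 18.0, 300.0, 36.0) + rng.normal(0.0, 0.5, t_exp.size)  # urad
phi0_th = gauss(t_th, 18.0, 40.0, 36.0)                                        # urad, smooth

p_exp, _ = curve_fit(gauss, t_exp, phi0_exp, p0=[15.0, 280.0, 30.0])
p_th, _ = curve_fit(gauss, t_th, phi0_th, p0=[15.0, 0.0, 30.0])

shift_ns = p_exp[1] - p_th[1]                  # offset to subtract from the experimental time axis
print(f"fitted time shift: {shift_ns:.0f} ns")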
|
http://arxiv.org/abs/2409.03599v1 | 20240905145834 | Anomalous dissipation via spontaneous stochasticity with a two-dimensional autonomous velocity field | [
"Carl Johan Peter Johansson",
"Massimo Sorella"
] | math.AP | [
"math.AP",
"35Q35, 35Q49, 76F25, 35Q30"
] |
Triple trouble with PSR J1618-3921: Mass measurements and orbital dynamics of an eccentric millisecond pulsar
K. Grunthal [email protected]
V. Venkatraman Krishnan 1
P. C. C. Freire 1
M. Kramer 1
M. Bailes 8,9
S. Buchner 7
M. Burgay 5
A. D. Cameron 8,9
C.-H.R. Chen 1
I. Cognard 2,3
L. Guillemot 2,3
M. E. Lower 6
A. Possenti 5
G. Theureau 2,3,4
September 9, 2024
=======================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
We study anomalous dissipation in the context of passive scalars and we construct a two-dimensional autonomous divergence-free velocity field in C^α (with α∈ (0,1) arbitrary but fixed) which exhibits anomalous dissipation. Our proof employs the fluctuation-dissipation formula, which links spontaneous stochasticity with anomalous dissipation. Therefore, we address the issue of anomalous dissipation by showing that the variance of stochastic trajectories, in the zero noise limit, remains positive.
Based on this result, we answer a question posed in <cit.> regarding anomalous dissipation for the forced three-dimensional Navier–Stokes equations.
§ INTRODUCTION
We study the evolution of a passive scalar advected by a two-dimensional divergence-free velocity field. More precisely, in the two-dimensional torus 𝕋^2 ≅ℝ^2/ ℤ^2, given a velocity field u : [0,1] ×𝕋^2 →ℝ^2, we consider the Cauchy problem of the advection-diffusion equation
∂_t θ_κ + u ·∇θ_κ = κΔθ_κ; ADV-DIFF
θ_κ (0, · ) = θ_in (· ) ∈ L^∞(𝕋^2);
where the unknown is the scalar θ_κ : [0,1] ×𝕋^2 →ℝ, κ≥ 0 is the diffusivity parameter and θ_in : 𝕋^2 →ℝ is a given bounded initial datum. If κ =0 this equation reduces to the well-known advection equation (also called transport equation). Supposing that u ∈ L^2((0,1) ×𝕋^2; ℝ^2) is divergence-free, there exists a unique solution θ_κ∈ L^∞ ((0,1) ×𝕋^2) ∩ L^2((0,1); H^1(𝕋^2)), and this solution satisfies the energy equality
1/2∫_𝕋^2 |θ_κ (t, x)|^2 dx + κ∫_0^t ∫_𝕋^2 | ∇θ_κ (s, x)|^2 dx ds = 1/2∫_𝕋^2 |θ_in(x)|^2 dx , for a.e. t ∈ (0,1) . E
We say that the collection of solutions
{θ_κ}_κ∈ (0,1) exhibits anomalous dissipation if
lim sup_κ→ 0κ∫_0^1 ∫_𝕋^2 | ∇θ_κ (s,x)|^2 dx ds >0 , AD
which directly implies the existence of dissipative
vanishing diffusivity solutions to the advection equation.
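As a numerical aside illustrating the quantity in (<ref>): for a smooth (Lipschitz) autonomous velocity field no anomalous dissipation can occur, so the dissipated energy vanishes as κ→ 0. The pseudo-spectral sketch below demonstrates this for a cellular flow on the torus; the resolution, time horizon, and flow are our own illustrative choices and have nothing to do with the construction used in the proof of the theorem.

import numpy as np

# d_t theta + u.grad(theta) = kappa*Lap(theta) on [0,1)^2, u = grad^perp H with
# H = sin(2 pi x) sin(2 pi y) / (2 pi) (a smooth cellular flow).
N, T, dt = 128, 1.0, 5e-4
x = np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")
ux = -np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)        # -dH/dy
uy = np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)         #  dH/dx

k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
dealias = (np.abs(KX) < 2 / 3 * np.pi * N) & (np.abs(KY) < 2 / 3 * np.pi * N)

def adv(theta_hat):
    # Spectral evaluation of -u.grad(theta), with 2/3-rule dealiasing.
    tx = np.real(np.fft.ifft2(1j * KX * theta_hat))
    ty = np.real(np.fft.ifft2(1j * KY * theta_hat))
    return dealias * np.fft.fft2(-(ux * tx + uy * ty))

theta0 = np.sin(2 * np.pi * X)                              # smooth initial datum
for kappa in (1e-1, 3e-2, 1e-2, 3e-3):
    th = np.fft.fft2(theta0)
    E_half = np.exp(-kappa * K2 * dt / 2)                   # exact diffusion half-step
    for _ in range(round(T / dt)):                          # splitting: diffusion / RK2 advection / diffusion
        th = E_half * th
        k1 = adv(th)
        k2 = adv(th + dt * k1 / 2)
        th = th + dt * k2
        th = E_half * th
    thetaT = np.real(np.fft.ifft2(th))
    # Energy balance: dissipated energy = kappa * int_0^T int |grad theta|^2 dx dt.
    dissipated = 0.5 * (np.mean(theta0**2) - np.mean(thetaT**2))
    print(f"kappa = {kappa:.0e}: dissipated energy = {dissipated:.4f}")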
The main contribution of this paper is the following:
Let α∈ [0,1) be arbitrary. Then there exists an autonomous and divergence-free velocity field u = ∇^⊥ H ∈ C^α (𝕋^2; ℝ^2) and an initial datum θ_in∈ C^∞ (𝕋^2) such that the sequence of unique solutions {θ_κ}_κ >0 to (<ref>) exhibits anomalous dissipation.
Furthermore, up to not relabelled subsequences, we have
κ∫_𝕋^2 |∇θ_κ (·, x) |^2 dx *⇀ g ∈ L^∞ ((0,1)) ,
where the convergence is weak* in L^∞. Finally, for any converging subsequence θ_κ_n⇀^*θ as κ_n → 0, we have
∫_𝕋^2 |θ (· , x )|^2 dx ∈ W^1, ∞ (0,1) .
For two-dimensional autonomous velocity fields u = ∇^⊥ H ∈ L^∞(𝕋^2), anomalous dissipation cannot occur for all initial data, namely there exists an initial datum θ_in∈ W^1, ∞(𝕋^2) such that for the corresponding solutions to the advection-diffusion equation θ_κ, we have
lim_κ→ 0κ∫_0^T∫_𝕋^2 |∇θ_κ|^2 dx dt = 0 for any fixed T > 0.
Consider θ_in = H and notice that
∂_t (θ_κ - H) + u ·∇ (θ_κ - H) = κΔθ_κ.
Multiplying by (θ_κ - H) and integrating in space-time (mollifying and passing to the limit) we find
‖(θ_κ - H)(T, ·)‖^2_L^2(𝕋^2) + 2 κ∫_0^T ∫_𝕋^2 |∇θ_κ|^2 dx dt = 2 κ∫_0^T ∫_𝕋^2∇θ_κ·∇ H dx dt
≤κ∫_0^T ∫_𝕋^2 |∇θ_κ|^2 dx dt + κ∫_0^T ∫_𝕋^2 |∇ H|^2 dx dt ,
where we used Young's inequality. Absorbing the first term on the right-hand side into the left-hand side and using that |∇ H| = |u| ∈ L^∞, we obtain κ∫_0^T ∫_𝕋^2 |∇θ_κ|^2 dx dt ≤κ T ‖∇ H‖^2_L^∞→ 0 as κ→ 0, from which we conclude (<ref>).
For any autonomous, divergence-free velocity field u = ∇^⊥ H ∈ C^0(𝕋^2) we say that it satisfies the weak Sard property if and only if
ℒ^1 (H (S)) =0, where S = {x ∈𝕋^2 : ∇ H =0 }.
This is a necessary and sufficient condition for uniqueness of solutions at the level of the advection equation, as proved by Alberti, Bianchini and Crippa in <cit.>. We notice that the autonomous velocity field in Theorem <ref> does not satisfy the weak Sard property.
Indeed, supposing by contradiction that u satisfies the weak Sard property we find that solutions to the advection equation are unique and hence contradict a result by Rowan <cit.>.
The velocity field u is a steady solution of the two-dimensional Euler equations with a body force; more precisely, it follows from (<ref>) and an application of the Leray projector that there exist a pressure p ∈ C^1, α (𝕋^2) and a divergence-free force F ∈ C^α (𝕋^2; ℝ^2)[The α∈ (0,1) is the one in the statement of Theorem <ref>.] such that
The problem of anomalous dissipation with autonomous velocity fields was posed by Elgindi and Liss in <cit.> as a problem “of great mathematical, and possibly physical, interest”. Indeed, the advection-diffusion equation in two dimensions with autonomous velocity fields enjoys more rigidity and constraints with respect to the case of two-dimensional time-dependent velocity fields or autonomous velocity fields in higher dimensions. We now give a swift outline of some aspects of this rigidity.
* For two-dimensional time-dependent Lipschitz velocity fields it has been proved by Elgindi, Liss and Mattingly in <cit.> that the optimal enhanced diffusion time scale is |lnκ|, whereas in the two-dimensional autonomous case it is κ^-1/3, see the result by Bruè, Coti Zelati and Marconi <cit.>.
* At the level of the advection equation, the mixing scale with time-dependent Lipschitz velocity fields can be exponential-in-time, see the first example by Alberti, Crippa and Mazzucato <cit.>, whereas in the two-dimensional autonomous case it is at most linear-in-time <cit.> as proved by Bonicatto and Marconi.
* Uniqueness of solutions to the advection equation with two-dimensional autonomous velocity fields is equivalent to having the velocity field satisfying the weak Sard property, see <cit.> by Alberti, Bianchini and Crippa. For time-dependent or autonomous velocity fields in higher dimensions stronger regularity assumptions are required, see for instance <cit.>.
* Regarding anomalous dissipation, a consequence of the result in <cit.> is that anomalous dissipation does not occur if the velocity field is continuous, divergence-free, autonomous and nowhere vanishing, see <cit.>. This fact is known to be false in the case of time-dependent velocity fields and autonomous velocity fields in higher dimensions <cit.>.
To tackle the two-dimensional autonomous case, the novelty of the present manuscript is twofold.
Firstly, we introduce a new mathematical approach (criterion) relying on ideas from spontaneous stochasticity, a concept that was introduced in <cit.> and, more recently, further developed by Drivas and Eyink in <cit.>. In <cit.>, they introduce the fluctuation-dissipation formula (<ref>) which provides a link between anomalous dissipation and spontaneous stochasticity.
Secondly, we construct a two-dimensional velocity field which exhibits anomalous dissipation.
Specifically, we construct a velocity field to which the already mentioned criterion can be applied.
This velocity field cannot satisfy the weak Sard property as noticed in Remark <ref>.
Examples of velocity fields which do not satisfy the weak Sard property are known <cit.> but it is not clear whether any of those exhibit anomalous dissipation.
Among the available examples of velocity fields which do not satisfy the weak Sard property in the literature, the one that inspired us the most is the non-divergence free velocity field given in <cit.>.
The anomalous dissipation phenomenon presented in our example is new and distinct from known techniques, which are detailed at the end of this introduction. We consider an autonomous velocity field with the following property: the forward flow of a smoothed approximation of this field tends to push a certain portion of the torus into a fat Cantor set with positive Lebesgue measure.
The stochastic flow is initially governed by the transport term, up to a certain small scale where diffusion begins to dominate, eventually spreading the flow across the entire fat Cantor set. This “dispersive” phenomenon of forward stochastic trajectories is responsible for the anomalous dissipation. To mathematically prove this, it is more effective to study the backward stochastic flow.
In the context of the backward stochastic flow, this phenomenon appears as a highly unstable behavior in response to small perturbations (as a small noise), reminiscent of the concept of spontaneous stochasticity, which we discuss in Section <ref>. This perspective is more advantageous for proving anomalous dissipation. Specifically, we will show that backward stochastic trajectories originating from the fat Cantor set can exhibit significantly different behaviors depending on the realization of the Brownian motion. According to the fluctuation-dissipation formula (<ref>), this variability implies anomalous dissipation, as demonstrated by the criterion provided in Proposition <ref>. Finally, we note that the dissipation caused by this phenomenon is continuous over time, due to the autonomous nature of the velocity field, as detailed in Proposition <ref>.
A central motivation for the study of anomalous dissipation comes from fluid dynamics.
To be precise, we aim at understanding how the cascade of frequencies arises due to a transport term.
In the incompressible Euler and Navier–Stokes equations, the nonlinear term takes the shape of a transport term, and hence understanding anomalous dissipation at the linear level, i.e. for (<ref>), could allow us to understand how the cascade of frequencies arises due to the nonlinear term in the incompressible Euler and Navier–Stokes equations.
In recent years, this has been made rigorous in several situations. We refer to <cit.>, in which the authors study the three-dimensional Euler and Navier–Stokes equations via the so-called (2 + 1/2)-dimensional Euler and Navier–Stokes equations. To explain our second main result, we consider the three-dimensional forced Navier–Stokes equations with a given time independent force F_ν : ^3 →^3
∂_t v_ν + v_ν·∇ v_ν + ∇ P_ν = νΔ v_ν + F_ν ,
(v_ν) =0 ,
v_ν (0, · ) = v_, ν (·) ,
where the unknowns are the velocity v_ν : [0,1] ×^3 →^3 and the pressure P_ν : ^3 →.[If ν =0 these are the 3D forced Euler equations.]
Assuming v_ν = (u_ν^(1), u_ν^(2), θ_ν) depends only on the two first spatial components, the third component of the equations reduces to an advection-diffusion equation with velocity (u_ν^(1), u_ν^(2)). With this assumption, (<ref>) are known as (2 + 1/2)-dimensional Navier–Stokes equations.
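Concretely, writing ∇ and Δ for the derivatives with respect to the first two variables only, and assuming that the pressure P_ν also depends only on these two variables, the ansatz v_ν = (u_ν^(1), u_ν^(2), θ_ν) turns (<ref>) into the system
∂_t u_ν^(i) + (u_ν^(1)∂_1 + u_ν^(2)∂_2) u_ν^(i) + ∂_i P_ν = νΔ u_ν^(i) + F_ν^(i) , i = 1,2 , ∂_1 u_ν^(1) + ∂_2 u_ν^(2) = 0 ,
∂_t θ_ν + (u_ν^(1)∂_1 + u_ν^(2)∂_2) θ_ν = νΔθ_ν + F_ν^(3) ,
so that the third component solves a (forced) advection-diffusion equation with diffusivity ν and velocity (u_ν^(1), u_ν^(2)).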
Theorem <ref>, together with some key regularity properties of our two-dimensional autonomous velocity field (see Lemma <ref>) yield the result below by considering this setting. This result answers two open questions by Bruè and De Lellis posed in <cit.> for the three-dimensional forced Navier–Stokes equations. Indeed, in Section <ref> we prove the following statement.
For any α∈ (0,1), there exist a sequence of viscosity parameters {ν_q }_q ∈, a sequence of time independent smooth forces { F_ν_q }_q ∈⊂ C^∞ ( ^3 ) and a sequence of smooth initial data { v_, q}_q ∈⊂ C^∞ ( ^3 ) such that for any q ≥ 1 there exists a unique solution v_ν_q : [0,1] ×^3 →^3 to (<ref>) and it enjoys the following properties:
* F_ν_q - F_0 _C^α + v_, ν_q - v__C^α→ 0 as ν_q → 0 for some force F_0 ∈ C^α and initial datum v_∈ C^α.
* Anomalous dissipation holds, i.e.
lim sup_ν_q → 0ν_q ∫_0^1 ∫_^3 | ∇ v_ν_q (t, x)|^2 dx dt >0 ,
* There exists v_0 ∈ L^∞ such that up to not relabelled subsequences
v_ν_q*⇀ v_0 ∈ L^∞
and v_0 is a solution to the 3D forced Euler equations with v_ and force F_0 and finally e(t) = 1/2∫_^3 |v_0 (t, x)|^2 dx ∈ W^1, ∞ (0,1).
We now provide a summary of existing works and techniques in the study of anomalous dissipation.
For the sake of clarity and brevity, we restrict our attention only to advection-diffusion equations.
Subsequently, we go through the techniques while pointing out the challenges that these techniques present in the two-dimensional autonomous case. This motivates and justifies the introduction of the new approach presented in this work.
The first and pioneering result of anomalous dissipation was given in <cit.> by Drivas, Elgindi, Iyer and Jeong, where the authors use a mixing velocity field in L^1_t C^α_x, with α∈ (0,1) fixed but arbitrary, for which bounded solutions, with initial data close to an eigenfunction of the Laplacian, exhibit anomalous dissipation.
This study attracted the attention of many subsequent mathematical investigations. In <cit.>, Colombo, Crippa and the second author of the present manuscript construct a new mixing velocity field to prove anomalous dissipation in any supercritical Yaglom's regime. More precisely, for any fixed
α + 2 β < 1
the authors construct a velocity field u ∈ L^∞((0,1); C^α (^2)) and an initial datum for which the corresponding solutions {θ_κ}_κ∈ (0,1) exhibit anomalous dissipation and enjoy the regularity
sup_κ∈ (0,1)θ_κ_L^2 ((0,1); C^β (^2)) < ∞.
In <cit.>, Armstrong and Vicol construct a velocity field in L^∞ ((0,1); C^α (^2)) with α < 1/3 and prove via a striking technique known as quantitative homogenization that corresponding solutions exhibit anomalous dissipation for all non-constant initial data in H^1(^2).
Furthermore, dissipation in this scenario occurs continuously in time, as predicted by the theory of scalar turbulence. In constructing this velocity field, at a fixed point in time, all frequencies are activated, and the singular set of this velocity field spans the full space-time dimension in (0,1) ×𝕋^2. Inspired by this construction, in <cit.> Burczak, Székelyhidi and Wu combined convex integration and quantitative homogenization theory to construct a dense set of solutions to the three-dimensional Euler equations in C^0 for which the corresponding solutions to the advection-diffusion equation exhibit anomalous dissipation for any non-constant initial data in H^1(^3).
In <cit.>, Elgindi and Liss prove anomalous dissipation for any non-constant smooth initial data in any supercritical Yaglom's regime. The construction of their velocity field is a rescaled-in-time and in-space version of the one in <cit.> by Elgindi, Liss and Mattingly. Finally, in the Kraichnan model <cit.>, where the velocity field is a Gaussian random field which is white-in-time and rough-in-space (only Hölder continuous), anomalous dissipation has been proved by Bernard, Gawedzki and Kupiainen <cit.>, see also <cit.>. Recently, Rowan <cit.> provided a PDE-based proof of anomalous dissipation in the Kraichnan model.
Now we go through the techniques mentioned in the summary above and for each technique point out the challenges to overcome in order to apply them in the autonomous two-dimensional setting.
Balanced growth of norms. The works based on balanced growth of Sobolev norms <cit.> construct smooth velocity fields which satisfy
θ_0 (t,· ) _H^1^σ∼θ_0 (t,· ) _L^2^σ - 1θ_0 (t, · ) _H^σ ∀ t ∈ (0,1) and ∫_0^1 ∇θ_0 (t, ·) _L^2^2 dt = ∞
for some σ∈ (1,2],
where θ_0 denotes the unique solution to the advection equation.
Such a condition is particularly well-suited for self-similar constructions and alternating shear flows which are smooth for any time less than a fixed singular time T.
However, solutions to the advection equation may in general become non-unique with a C^1- divergence-free velocity field, and the condition (<ref>) must be adapted accordingly. This means that the conditions must be specified for the solution to the advection equation with a regularized velocity field. If the regularization is sufficiently small (for instance by mollification) with respect to κ, then the solution to the advection-diffusion equation with the velocity field is close to that with the regularized velocity field. However, the second condition with the regularized velocity field cannot hold. A possible adapted assumption is to require a quantitative blow-up rate of such an integral depending on the regularization of the velocity field, which is not straightforward to satisfy. Therefore, finding sufficient conditions for anomalous dissipation based on balanced growth, which are satisfied by a two-dimensional autonomous velocity field, seems highly non-trivial. Nevertheless, it is an interesting mathematical problem in the opinion of the authors of this manuscript.
Mixing. The works based on mixing, such as <cit.>, use a condition of the following type:
θ_0 (t_q, ·) - θ_ (λ_q · ) _L^2 ≤1/100θ__L^2 for some t_q → 1 , λ_q →∞ .
This condition too is particularly well-suited for velocity fields which generate solutions to the advection equation with a high degree of self-similarity. It should be stressed that although this condition has similarities to the previous one, it is different. Indeed, (<ref>) is not satisfied by the velocity field in <cit.>, but (<ref>) is.
Quantitative homogenization. In the works based on quantitative homogenization <cit.>, one constructs a Cauchy sequence in some Hölder space of smooth divergence-free velocity fields { u_m }_m for which the corresponding solutions {θ_m }_m to the advection-diffusion equation with diffusivity κ_m → 0 exhibit anomalous dissipation.
To do so, it is necessary to control a transport term involving the term
u_m ·∇χ̃_m , k ,
where χ̃_m , k is a corrector function coming from the homogenization technique.
The term cannot in fact be controlled, but it can be cancelled by a time derivative. To put this idea into action, one composes u_m with the flow map of u_m-1, which causes the velocity field to become time-dependent. This technical point is also the reason why the velocity field can only be constructed in C^α with α < 1/3.
§.§ Outline of the paper
We start by introducing some notation and provide preliminaries in Section <ref>. In Section <ref>, we review the concept of spontaneous stochasticity based on which we develop a criterion for anomalous dissipation in Section <ref>. Then, we make a choice of parameters in Section <ref>. Given this choice, we construct the velocity field u of Theorem <ref> in Section <ref>. Subsequently, we prove Theorem <ref> in Section <ref>. Finally, in Section <ref>, we prove Theorem <ref>.
§ ACKNOWLEDGEMENTS
MS and CJ are supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00034. The authors are grateful to Maria Colombo for fruitful discussions, to Vlad Vicol for useful observations on continuous in time dissipation and Lucio Galeati for discussions about spontaneous stochasticity and references on the stochastic part.
§ NOTATION AND PRELIMINARIES
§.§ Notation
In this section, we explain the notation throughout the paper.
The two-dimensional torus is denoted by ^2.
The Euclidean norm is denoted by | · |.
For any ε > 0, the ε-restriction of a set S ⊆^2 is denoted and defined by
S[ε] = { x ∈^2 : dist(x, S^c) > ε}
where S^c denotes the complement of S. Similarly, the ε-extension of a set S ⊆^2 is denoted and defined by
S(ε) = { x ∈^2 : dist(x, S) < ε}.
The cardinality of a set A is denoted by # A.
For any a, b ∈ℕ, we write
a + bℕ = { n ∈ℕ : ∃ j ∈ℕ with n = a + bj }.
For vector-valued maps F, we denote the i-th component by F^(i).
The projection map into the j-th component will be denoted by π_j.
§.§ Preliminaries
We introduce some preliminaries required for the remainder of the paper.
Let u ∈ L^∞ ((0,1) ×^2) be a divergence-free velocity field and θ_∈ L^∞ (^2), then for any κ >0 there exists a unique θ_κ∈ L^∞∩ L^2_t H^1_x solution of (<ref>) and it satisfies
∫_^2 |θ_κ (t,x)|^2 dx + 2 κ∫_0^t ∫_^2 | ∇θ_κ (s,x) |^2 dx ds = ∫_^2 |θ_ (x)|^2 dx .
This proposition is a corollary of <cit.>.
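Formally (ignoring the regularity issues handled in <cit.>), the identity follows by testing (<ref>) with θ_κ and using that u is divergence-free:
1/2 d/dt∫_^2 |θ_κ|^2 dx = - ∫_^2θ_κ u ·∇θ_κ dx + κ∫_^2θ_κΔθ_κ dx = - 1/2∫_^2 u ·∇ (θ_κ^2) dx - κ∫_^2 |∇θ_κ|^2 dx = - κ∫_^2 |∇θ_κ|^2 dx ,
and then integrating in time on (0,t).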
Let u_1, u_2 ∈ L^∞ ((0,1) ×^2) be two divergence-free velocity fields and θ_∈ L^∞(^2). Let θ_κ,1 be the unique solution of (<ref>) with u_1 and θ_κ,2 be the unique solution of (<ref>) with u_2, then
∫_^2 |θ_κ ,1 (t,x) - θ_κ, 2 (t,x)|^2 dx + κ∫_0^t ∫_^2 | ∇ (θ_κ, 1 - θ_κ, 2) |^2 ≤θ__L^∞^2 /κ∫_0^t ∫_^2 |u_1 (s,x) - u_2 (s,x)|^2 dx ds
for any t ∈ (0,1).
The following identity holds in the sense of distribution thanks to the regularity of the solutions
∂_t |θ_κ, 1 - θ_κ, 2|^2/2 + u_1 ·∇ (θ_κ,1 - θ_κ, 2) (θ_κ, 1 - θ_κ, 2) + (u_1 - u_2 ) ·∇θ_κ, 2 (θ_κ, 1 - θ_κ, 2)
= κΔ ( θ_κ,1 - θ_κ, 2) (θ_κ, 1 - θ_κ, 2 )
and therefore, integrating in space-time and using the divergence-free condition on the velocity field u_1 together with integration by parts, we obtain
∫_^2 |θ_κ, 1 - θ_κ , 2 |^2 + 2 κ∫_0^t ∫_^2 | ∇ (θ_κ, 1 - θ_κ, 2) |^2 dx ds ≤ 2 ∫_0^t ∫_^2 |u_1 - u_2| |∇ (θ_κ, 1 - θ_κ, 2 )| | θ_κ, 2| dx ds.
Using Young's inequality, we can bound the right hand side with
κ∫_0^t ∫_^2 | ∇ (θ_κ, 1 - θ_κ, 2) |^2 dx ds + 1/κ∫_0^t ∫_^2 |u_1 - u_2|^2 |θ_κ, 2|^2 dx ds ,
and using the maximum principle on θ_κ, 2 we conclude the proof.
We now recall the strong Markov property and Doob's maximal inequality, which are classical and can be found in <cit.>.
Let {W (t) }_t ≥ 0 be a standard Brownian motion, and let τ be a finite stopping time relative to the standard filtration, with associated stopping σ-algebra ℱ_τ . For t ≥ 0, define the post-τ process
W^⋆ (t) = W(t + τ) - W(τ)
and let {ℱ^⋆_t }_t ≥ 0 be the standard filtration for this process. Then, conditional on the event {τ < ∞}, we have
* W^⋆ (t)_t ≥ 0 is a standard Brownian motion;
* for each t > 0, the σ-algebra ℱ^⋆_t is independent of ℱ_τ.
Let (Ω, (ℱ_t)_t, ℙ) be a filtered probability space and W_t an adapted ^d-valued Brownian motion. Then, for every c,κ>0, T> T̃≥0 we have
ℙ ( ω∈Ω : sup_t ∈ [T̃ , T]√(2 κ) | W_t - W_T̃| ≤ c ) ≥ 1- 2 e^-c^2/(2 κ (T-T̃)).
The following ergodic property of the Brownian motion will also be of crucial importance in the proof of Theorem <ref>, see for instance <cit.> for a proof.
Let f ∈ L^∞ () ∩ L^1() with f̅ = ∫ f(x) dx ≠ 0 and let (W_t)_t be a standard one dimensional Brownian motion on a probability space (Ω, ℱ, ℙ), then for any C≥ 0 we have
lim_t →∞ℙ ( 1/(f̅√(t))∫_0^t f (W_s) ds ≤ C ) = √(2/π)∫_0^C exp (- y^2/2) dy .
We introduce the backward stochastic flow as well as the Feynman-Kac formula which gives a stochastic formulation of solutions to the advection-diffusion equation.
Let (W_t)_t be a two-dimensional backward Brownian motion process, u ∈ L^∞ ((0,T) ×^2) be a two-dimensional autonomous divergence free velocity field, then we define X_T,t^κ the backward stochastic flow on [0,T] ×^2 of u with noise constant √(2 κ) as
d X_T,t^κ = u (X_T,t^κ) dt + √(2 κ) d W_t ,
X_T,T (x, ω) ≡ x .
The following representation formula is well-known as Feynman-Kac formula, see for instance <cit.> for a proof.
Let u ∈ C^∞ ([0,1] ×^2; ^2) be a velocity field. Then the unique solution θ_κ to the advection-diffusion equation(<ref>) with diffusivity parameter κ≥ 0 and smooth initial datum θ_∈ C^∞ (^2) can be expressed with the backward stochastic flow map X_t,s^κ as
θ_κ (x,t) = 𝔼 [θ_ (X_t,0^κ (x, ·)) ].
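As a simple consistency check (not needed in the sequel), when u ≡ 0 the backward stochastic flow is X_t,0^κ (x, ·) = x + √(2 κ) (W_0 - W_t), so that the formula above reduces to the heat semigroup:
θ_κ (x,t) = 𝔼 [θ_ (x + √(2 κ) (W_0 - W_t)) ] = (e^κ t Δθ_) (x) ,
since W_0 - W_t is a centered Gaussian vector with covariance t Id.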
The following lemma is a direct consequence of Grönwall's lemma.
Let X_T,t^κ be the backward stochastic flow related to a divergence free velocity field u ∈ L^1_t W_x^1, ∞ and X_T,t be the backward flow map and suppose that
| X_T,s^κ (x, ω) - X_T,s (y)| ≤β , ∀ s ∈ [0,T] ,
then we have the pointwise estimate
| X_T,t^κ (x, ω) - X_T,t (y)| ≤ ( |x-y| + √(2 κ)sup_s ∈ [t,T] |W_T (ω) - W_s (ω) | ) exp (∫_0^t ∇ u (s, · ) _L^∞ (B_β (X_T,s (y) ))ds ).
Finally, we introduce the so-called fluctuation-dissipation formula. It was introduced in <cit.> and has been used in <cit.> for enhanced dissipation.
Let u ∈ C^∞ ([0,1] ×^2) be a divergence-free velocity field and X_t,0^κ be the backward stochastic flow and θ_κ be the solution to (<ref>) with a bounded initial datum θ_, then
∫_^2𝔼[ | θ_ (X_t,0^κ (x, · )) - 𝔼[ θ_ (X_t,0^κ (x, ·))] |^2 ] dx = 2 κ∫_0^t ∫_^2 |∇θ_κ (x,s)|^2 dx ds.
We have the energy equality
∫_^2 |θ_κ (x,t)|^2 dx + 2 κ∫_0^t ∫_^2 | ∇θ_κ (x,s)|^2 dx ds = ∫_^2 |θ_ (x)|^2 dx,
and let h_κ [0,1] ×^2 → be the solution of
∂_t h_κ + u ·∇ h_κ = κΔ h_κ ;
h_κ (0, ·) = |θ_ (· )|^2.
Then integrating in space time we obtain
∫_^2 h_κ (t, x) dx = ∫_^2 |θ_ (x)|^2 dx
and plugging this identity into (<ref>) we obtain
∫_^2 h_κ (t, x) dx - ∫_^2 |θ_κ (x,t)|^2 dx = 2 κ∫_0^t ∫_^2 | ∇θ_κ (x,s)|^2 dx ds.
Finally, using the Feynman-Kac formula, we have
h_κ (x,t)= 𝔼[ | θ_ (X_t,0^κ (x, · )) |^2 ] and | θ_κ (x,t) |^2 = | 𝔼 [ θ_ (X_t,0^κ (x, · )) ] |^2
which concludes the proof.
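For clarity, the elementary identity behind the last step is the pointwise variance decomposition
𝔼[ | θ_ (X_t,0^κ (x, · )) - 𝔼[ θ_ (X_t,0^κ (x, ·))] |^2 ] = 𝔼[ | θ_ (X_t,0^κ (x, · )) |^2 ] - | 𝔼[ θ_ (X_t,0^κ (x, ·))] |^2 = h_κ (x,t) - |θ_κ (x,t)|^2 ,
which, once integrated in x and combined with the previous display, gives exactly (<ref>).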
§ SPONTANEOUS STOCHASTICITY
The proof of Theorem <ref> strongly relies on the idea of strong separation of different realizations of the stochastic flow in finite time due to a chaotic behaviour of the velocity field.
This phenomenon is reminiscent of spontaneous stochasticity. This section is devoted to making clear the connection between our proof of Theorem <ref> and spontaneous stochasticity.
Even though spontaneous stochasticity has been studied over the last few decades, see for instance <cit.>, we could not find a precise definition of spontaneous stochasticity. Therefore, consistently with all the previous papers, we give Definition <ref>.
Then, we state a sufficient condition proving spontaneous stochasticity based on separation in finite time of different realizations of the stochastic flow, see Proposition <ref>. This is essentially the condition we will prove to conclude Theorem <ref>, see Proposition <ref>.
Spontaneous stochasticity is a macroscopic effect caused by the explosive dispersion of particle pairs in a turbulent flow predicted by Richardson <cit.> with arbitrarily small stochastic perturbation. This phenomenon leads, in the zero noise limit, to non-uniqueness of Lagrangian trajectories. In <cit.>, spontaneous stochasticity has been proved for the Kraichnan model <cit.>, where the
velocity is a Gaussian random field which is irregular in space and has white-noise correlation in time, see <cit.> for a review.
Spontaneous stochasticity has been further developed in <cit.>, where the authors introduced the fluctuation dissipation formula and studied the relation between anomalous dissipation and spontaneous stochasticity both for passive and active scalars, as for instance Burgers equation.
Using the description given in <cit.> we consistently define spontaneous stochasticity as follows.
Notice that we consider a continuous modification of the Brownian motion which is C^β with β < 1/2 and X_1,s^κ is the pathwise unique solution (see for instance <cit.>) which is also continuous in time.
Let u ∈ L^∞ ((0,T) ×^d ; ^d) be a divergence-free velocity field and let (Ω, ℱ, ℙ) be a probability space and X_t,s^κ be the backward stochastic flow. For any x ∈^d we have X_T, ·^κ (x, ·) : (Ω, ℙ) → ( C([0,T]; ^d) , η_x^κ ), where η_x^κ = X_T, ·^κ (x, ·)_# (ℙ) ∈ℳ ( C([0,T]; ^d) ) is the pushforward of the backward stochastic flow and finally we set η^κ (d γ, d x) = η_x^κ (d γ) ⊗ dx ∈ℳ (C([0,T]; ^d) ×^d ), namely
∫_C([0,T]; ^d) ×^d f(γ, x) η^κ (d γ, d x) = ∫_^d∫_C([0,T]; ^d ) f(γ, x) η^κ_x (d γ) d x ,
for any continuous f : C([0,T]; ^d) ×^d →. We say that u exhibits spontaneous stochasticity if there exist A ⊂^d with ℒ^d (A) >0 and a narrowly converging subsequence η^κ_q⇀η such that
η = η_x ⊗ℒ^d
and for all x ∈ A, η_x is not a Dirac delta (meaning that there exists no continuous curve γ_x such that η_x = δ_γ_x).
Related to the definition above, we point out the recent paper <cit.> where the author considers the inviscid case via regularisation by mollification of the velocity field.
We cannot employ the Riesz representation theorem over the space C([0,T]; ^d) ×^d to characterize the weak* convergence in the space of measures, since C([0,T]; ^d) is not locally compact. Therefore, we use Prokhorov's theorem to deduce the compactness result of Lemma <ref>.
Let (X, d) be a Polish space and let {μ_n }_n⊂ℳ (X) be a sequence of probability measures, then the following two facts are equivalent:
* (Tightness). There exists F: X → such that F(x) →∞ as d(x, 0) →∞ so that the sublevel sets { x ∈ X: F(x) ≤ L } are precompact and
∫_X F d μ_n ≤ 1 ∀ n ∈ .
* There exists a subsequence {μ_n_j}_j narrowly converging μ_n_j⇀μ, i.e. for any continuous bounded function φ X → we have
∫_X φ d μ_n_j→∫_X φ d μ
Let η^κ∈ℳ ( C([0,T]; ^d) ×^d ) be defined as above by η^κ (d γ, d x) = η_x^κ (dγ) ⊗ℒ^d (dx), where η_x^κ = X_T, ·^κ (x, ·)_# (ℙ) ∈ℳ ( C([0,T]; ^d) ). Then {η^κ}_κ >0 admits a narrowly converging subsequence.
We need to check the tightness hypothesis. Let β < 1/2 and observe that
𝔼 | X_T,t^κ (x, ω) - X_T,s^κ (x, ω) | ≤ ( 2 u _L^∞ + √(2 κ) c ) |t-s|^β≤ C |t-s|^β
for any |t-s| ≤ 1 and κ∈ (0,1) for some C independent of κ.
We define
F(γ , x ) = 1/Cγ_C^β ,
which is set to + ∞ if γ∈ C([0,T] ) ∖ C^β ([0,T]). Thanks to the Ascoli–Arzelà theorem we can show that the sublevel sets of the functional F are compact. Finally, by a direct computation we have
∫_C([0,T]; ^d) ×^d F(x, γ) η^κ (d γ, dx) = 1/C∫_^d∫_Ω X_T, ·^κ (x, ω) _C^β d ℙ(ω) dx ≤ 1
for any κ∈ (0,1), concluding the proof.
Let d≥ 2 and u ∈ L^∞ ((0,T) ×^d ; ^d) be a divergence free velocity field and X_t,s^κ be its backward stochastic flow as in Definition <ref>. Suppose that there exist a constant c >0 and {κ_q }_q ∈ with κ_q → 0 as q →∞ such that
inf_κ∈{κ_q}_q ∫_^d 𝔼 [ | X_T, 0^κ (x, · ) - 𝔼 [ X_T, 0^κ (x, · ) ] |^2 ] dx ≥ c ,
then u exhibits spontaneous stochasticity.
In all the proof we characterize the measures η∈ℳ (C([0,T]; ^d) ×^d) by considering the pushforward of η through the evaluation map (e_t, Id) : C([0,T]; ^d) ×^d →^d ×^d, where e_t (γ) = γ (t). It is straightforward to check that ν∈ℳ (C([0,T]; ^d)) satisfies ν = δ_γ for some γ∈ C([0,T]; ^d) if and only if for any t∈ [0,T] there exists x ∈^d such that (e_t)_# (ν) = δ_x∈ℳ (^d).
Let us fix the sequence {κ_q }_q for which the assumption holds. We suppose by contradiction that there is no spontaneous stochasticity, i.e. for any converging subsequence η^κ_j⇀^* η we have η (d γ, dx)= δ_γ_x (d γ) ⊗ dx.
Thanks to Lemma <ref> there exists a further subsequence, not relabelled, converging narrowly, namely η^κ_q⇀η.
For such a subsequence {κ_q }_q, for any f, g ∈ C^0 (^d) and any
t ∈ [0,T] we have
∫_^d∫_C([0,T]; ^d) g(x) f (γ (t)) η_x ( d γ) dx = lim_κ_q → 0∫_^d∫_C([0,T]; ^d) g(x) f (γ (t)) η_x^κ_q (d γ) dx
= lim_κ_q → 0∫_^d∫_Ω g(x) f(X_T, t^κ_q (x, ω)) d ℙ(ω)dx .
Since we are supposing by contradiction that u does not have spontaneous stochasticity we have η_x = δ_γ_x for a.e. x ∈^d. Testing with f (x) = x_i for i =1, …, d (by approximating these functions with continuous functions on the torus) we deduce component-wise that
lim_κ_q → 0∫_^d g(x) ( γ_x (t) - 𝔼 [ X_T, t^κ_q (x, · ))] )dx =0.
From this we deduce that for any t ∈ [0,T] there exists N_t ⊂^d with ℒ^d (N_t)=0 such that
lim_κ_q → 0δ_𝔼 [X_T, t^κ_q (x, · ) ] = (e_t)_#η_x
for any x ∈ N_t^c (where the limit is intended in the sense of narrow convergence).
Considering N = ⋃_t ∈ℚ∩ [0,T] N_t and using the continuity of curves we deduce that
η (d γ, dx)= lim_κ_q → 0δ_𝔼 [X_T, ·^κ_q (x, · ) ] (d γ) ⊗ dx .
Therefore, choosing g ≡ 1 and f(x) = |x|^2 (by approximating these functions with continuous functions on the torus) we deduce
lim_κ_q → 0∫_^d𝔼 [|X_T, t^κ_q (x, ·)|^2] dx= lim_κ_q → 0∫_^d | 𝔼 [X_T, t^κ_q (x, ·)] |^2 dx ,
for any t ∈ [0,T],
which implies that
lim_κ_q → 0 ∫_^d 𝔼 [ | X_T, 0^κ_q (x, · ) - 𝔼 [ X_T, 0^κ_q (x, · ) ] |^2 ] dx =0
contradicting the hypothesis.
We use the fluctuation-dissipation formula (<ref>) given in Lemma <ref> and we give a simple sufficient criterion to prove anomalous dissipation.
§ A CRITERION FOR ANOMALOUS DISSIPATION AND CONTINUOUS IN TIME DISSIPATION
In this section, we state and prove a criterion for anomalous dissipation which considers the stochastic trajectories of suitable approximations of the velocity field. The criterion essentially states that anomalous dissipation occurs if trajectories evaluated at time one (or any other fixed time for that matter) remain stochastic in the vanishing noise limit.
In addition, we prove that anomalous dissipation with autonomous velocity fields necessarily occurs continuously in time, by which we mean that, up to not relabelled subsequences,
κ∫_^d |∇θ_κ|^2 dx *⇀ f ∈ L^∞ (0,T)
as κ→ 0, where the convergence is L^∞ weak*.
We start with a criterion for anomalous dissipation.
Let u ∈ L^∞ ((0,T) ×^d; ^d) be a divergence free velocity field. We say that u exhibits anomalous dissipation if there exists θ_∈ C^∞ (^d) for which the following holds
lim sup_κ→ 0κ∫_0^T ∫_^d | ∇θ_κ (x, s) |^2 dx ds >0 .
Furthermore, for future purposes, we will also say that the subsequence {θ_κ_q}_q for which the lim sup is attained exhibits anomalous dissipation.
The following is a new criterion to prove anomalous dissipation, which constitutes a new approach with respect to the previous ones.
Let α∈ (0,1) and u ∈ C^α ((0,T) ×^d; ^d) be a divergence free velocity field such that there exist { u_q }_q ∈⊂ C^∞ ( [0,T] ×^d; ^d) divergence free and {κ_q }_q ∈ with the following properties
* 1/√(κ_q) u - u_q_L^2_t,x→ 0 as q →∞,
* we denote the stochastic backward flow of u_q with noise parameter √(2 κ_q) as X_t, 0^q, κ_q. We suppose that there exist a constant c>0 and { D_q }_q ∈⊂^d with ℒ^d (D_q)≥ c for any q ∈ with the following property
inf_q ∈inf_x ∈ D_q𝔼 [ | X_T, 0^q, κ_q (x, ω) - 𝔼 [ X_T, 0^q, κ_q (x, · ) ] |^2 ] ≥ c .
Then u exhibits anomalous dissipation.
The proof of this proposition requires to construct an initial datum for which anomalous dissipation occurs. We suppose, without loss of generality, that the first component of X_T,0^q, κ_q = ( x_T,0^q, κ_q , y_T,0^q, κ_q) satisfies (<ref>) with constant c/2, namely
inf_q ∈inf_x ∈ D_q𝔼 [ | x_T, 0^q, κ_q (x, ω) - 𝔼 [ x_T, 0^q, κ_q (x, · ) ] |^2 ] ≥c/2 ,
We now define θ_ (x, y) = x for any x, y ∈ and let θ_κ_q^(q) be the solution of the advection diffusion equation with velocity field u_q and diffusivity parameter κ_q. Applying Lemma <ref> we have
2 κ_q∫_0^T ∫_^d |∇θ_κ_q^(q) (x,s)|^2 dx ds ≥∫_^d𝔼[ | θ_ (X_T,0^q, κ_q (x, · )) - 𝔼[ θ_ (X_T,0^q, κ_q (x, ·))] |^2 ] dx
= ∫_^d𝔼[ | x_T,0^q, κ_q (x, · ) - 𝔼[ x_T,0^q, κ_q (x, ·) ] |^2 ] dx
≥∫_D_q𝔼[ | x_T,0^q, κ_q (x, · ) - 𝔼[ x_T,0^q, κ_q (x, ·) ] |^2 ] dx
≥c/2 , ∀ q ∈ .
Let θ_κ_q be the solution of the advection diffusion with velocity field u and diffusivity parameter κ_q. Applying Proposition <ref> and Lemma <ref> we finally have
lim sup_q →∞κ_q∫_0^T ∫_^d |∇θ_κ_q (x,s)|^2 dx ds = lim sup_q →∞κ_q∫_0^T ∫_^d |∇θ_κ_q^(q) (x,s)|^2 dx ds ≥c/2 .
Mollifying the initial datum θ_, γ = θ_⋆φ_γ with a parameter γ >0 so that
θ_ - θ_, γ_L^2 < c/2
we conclude that anomalous dissipation holds for the advection diffusion equation with the smooth initial datum θ_, γ∈ C^∞.
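One possible way to quantify this last approximation step is the following sketch, where θ_κ^γ denotes the solution of (<ref>) with the same velocity field u and initial datum θ_, γ. By linearity, θ_κ - θ_κ^γ solves (<ref>) with initial datum θ_ - θ_, γ, so Proposition <ref> gives ‖ (θ_κ - θ_κ^γ)(t) ‖_L^2≤‖θ_ - θ_, γ‖_L^2 for every t ∈ (0,T). Subtracting the energy equalities of Proposition <ref> for θ_κ and θ_κ^γ then yields
| 2 κ∫_0^T ∫_^d ( |∇θ_κ|^2 - |∇θ_κ^γ|^2 ) dx ds | ≤ 4 ‖θ_‖_L^∞‖θ_ - θ_, γ‖_L^2 ,
so that, choosing γ small enough in terms of c and ‖θ_‖_L^∞, the lim sup in (<ref>) remains strictly positive for the smooth initial datum θ_, γ.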
We now show that anomalous dissipation for autonomous velocity fields is necessarily continuous.
Let d ≥ 2 and u ∈ L^∞ (^d) be an autonomous divergence free velocity field. Let θ_κ : [0,T] ×^d → be the solution of the advection-diffusion equation with an initial datum θ_∈ C^2 (^d). Then
sup_κ∈ (0,1)∂_t θ_κ_L^∞ ((0,T) ×^d)) ≤ u _L^2θ__C^1 + θ__C^2 .
In particular, up to not relabelled subsequence,
κ∫_^d |∇θ_κ (x,·)|^2 dx *⇀ f ∈ L^∞ (0,T)
as κ→ 0, where the convergence is L^∞ weak*. Finally, if θ_κ⇀^* θ as κ→ 0 (up to not relabelled subsequence), then
e(t) = ∫_^d |θ (x,t)|^2 dx ∈ W^1, ∞ (0,T) .
We notice that the solution g_κ of
∂_t g_κ + u ·∇ g_κ = κΔ g_κ ;
g_κ (0, · ) = - u ·∇θ_ (· ) + κΔθ_ (· )
satisfies
g_κ_L^∞ ((0,T) ×^d ) ≤ u _L^2θ__C^1 + θ__C^2 ,
from the maximum principle. We now claim that the last bound holds also for ∂_t θ_κ.
First, we introduce θ_κ^ε to be the solution to
∂_t θ^ε_κ + u_ε·∇θ^ε_κ = κΔθ^ε_κ,
θ^ε_κ (0, · ) = θ_ (·)
with u_ε = u ⋆φ_ε a space mollification. It is possible to prove that θ_κ^ε is smooth and ∂_t θ_κ^ε (t, ·) → - u ·∇θ_ (·) + κΔθ_ (·) in L^2(^d) as t → 0^+. Furthermore, from standard energy estimate we have
sup_t ∈ [0,1]θ_κ (t, ·) - θ_κ^ε (t, ·) _L^2 (^d)≤ u - u_ε_L^2 ( (0,T) ×^d )^2/κθ__L^∞^2 ,
and therefore θ_κ^ε→θ_κ in C([0,T); L^2( ^d)) as ε→ 0. Since ∂_t θ_κ^ε solves (<ref>), it enjoys the bound ∂_t θ_κ^ε_L^∞ ((0,1) ×^d ) ≤ u _L^2θ__C^1 + θ__C^2, which implies
θ_κ^ε→θ_κ C([0,T); L^2( ^d)) ,
∂_t θ_κ^ε⋆⇀∂_t θ_κ weak *-L^∞ .
In particular, we have obtained the bound
sup_κ∈ (0,1)∂_t θ_κ_L^∞ ((0,T) ×^d)≤ u _L^2θ__C^1 + θ__C^2 .
From this and a standard energy balance we observe that
2 κ∫_^d |∇θ_κ (x,·)|^2 dx = - d/dt∫_^d |θ_κ (x,t)|^2 dx = -2∫_^d∂_t θ_κ (x,t) θ_κ (x,t) dx ≤ 2 ∂_t θ_κ_L^∞θ_κ_L^∞
and the last term is bounded independently of κ thanks to the previous computations.
Similarly, if up to not relabelled subsequences we have
θ_κ*⇀θ∈ L^∞
∂_t θ_κ*⇀∂_t θ∈ L^∞
we can bound
| d/dt∫_^d |θ (x,t)|^2 dx | = 2 | ∫_^d∂_t θ (x,t ) θ(x,t) dx | ≤∂_t θ_L^∞θ_L^∞
where the last term is bounded since ∂_t θ_L^∞≤ u _L^2θ__C^1 + θ__C^2 thanks to the weak* convergence.
§ CHOICE OF THE PARAMETERS
In this section we define the parameters we will use in the construction of the velocity field. Let us fix α <1 as in the statement of Theorem <ref>, then we can find δ , ε>0 such that
12+δ > + α (1 + δ)^4 ( 3 + 3 δ + (1+δ)^2/2 + δ)
1 > (2 + δ) + (1 + α) (1 + δ)^5 ( 3 + 3 δ + (1+δ)^2/2 + δ)
1 + 2(1 + δ)^52 + δ > + (2 + α) (1 + δ)^4 ( 3 + 3 δ + (1+δ)^2/2 + δ)
noticing that for ε = δ = 0 the inequalities reduce to α <1 and therefore (since the previous inequalities depend continuously on ε, δ) we can find a ball of radius r>0, i.e. B_r (0) = { (ε, δ) ∈^2: |ε| + |δ| < r}, such that the inequalities are true.
We also require that ε≪δ, more precisely we require
2 δ^2 + δ^3 ≥ (1+ δ)^2 + 2 .
We now inductively define the following sequence of parameters, needed for the construction of the velocity field in Section <ref>.
a_q+1 = a_q^1+ δ ,
n_q+1 = a_q/2 a_q+1 ,
Ã_q+1 = A_q/(2 n_q+1) ,
A_q+1 = Ã_q+1 a_q^δ - δ/2 + δ ,
B_q+1 = (L_q - n_q+1A_q+1 -2 A_q)/ n_q+1 ,
L_q+1 = B_q/2 - 4 A_q+1 - B_q/2 a_q^δ ,
v_q+1 = v_q Ã_q+1/A_q+1
κ_q = B_q^2 ,
with initial conditions A_0 = a_0, v_0 =1 and L_0 =1/4 and B_0 = 1/4. We now choose
a_0 sufficiently small in terms of δ and ε so that
∑_k =q^∞ a_k^ε^2≤ 2 a_q^ε^2 ∀ q ∈ℕ ,
which is verified if we choose a_0 such that a_0^ε^2 δ≤ 1/2.
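One way to verify this condition is the following: since a_k = a_0^(1+δ)^k, a_0 < 1 and (1+δ)^k - (1+δ)^q ≥δ (k-q) for every k ≥ q, we can estimate
a_k^ε^2 = a_q^ε^2 a_0^ε^2 ( (1+δ)^k - (1+δ)^q )≤ a_q^ε^2 ( a_0^ε^2 δ)^k-q≤ a_q^ε^2 2^-(k-q) ,
and summing the geometric series over k ≥ q gives ∑_k=q^∞ a_k^ε^2≤ 2 a_q^ε^2.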
We finally define the sequence of diffusivity parameters as
κ_q = B_q^2.
The following lemma gives us the asymptotic behaviour of the sequences defined above as q →∞.
Given the definition of the sequences in (<ref>),
the following bounds hold true for every q ≥ 0
n_q = a_q-1^- δ/2 ,
N_q = a_0 a_q^-1 ,
A_q = A_0 a_0^- - 1+ δ/2+ δ a_q^ + 1+ δ/2+δ
Ã_q = A_0 a_0^- - 1+ δ/2 + δ a_q-1^ + δ + 1+ δ/2 + δ
1/2 B_0 a_0^- 1+ δ/2+δ a_q^1+ δ/2+δ≤B_q≤ B_0 a_0^- 1+ δ/2+δ a_q^1+ δ/2+δ ,
∀ q ≥ 1 1/4 B_0 a_0^- 1+ δ/2+δ a_q^1/2+δ≤ L_q≤1/2 B_0 a_0^- 1+ δ/2+δ a_q^1/2+δ ,
1/4 B_0^2 a_0^-2+ 2 δ/2+δ a_q^2+ 2 δ/2+δ≤κ_q≤ B_0^2 a_0^-2+ 2 δ/2+δ a_q^2+ 2 δ/2+δ
v_q = v_0 a_0^ - 1/2+ δ a_q^- + 1/2+ δ
The equality (<ref>) follows directly from the choice of parameters. Note that
N_q = ∏_j =1^q 2 n_j = ∏_j=1^q a_j-1^- δ =a_0 a_q^-1,
proving (<ref>), where in the last identity we used the relation a_j= a_0^(1+ δ)^j and an explicit computation using the power series identity.
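Explicitly, since a_j-1 = a_0^(1+δ)^j-1 and δ∑_j=1^q (1+δ)^j-1 = (1+δ)^q - 1, we have
∏_j=1^q a_j-1^-δ = a_0^-δ∑_j=1^q (1+δ)^j-1 = a_0^-( (1+δ)^q - 1 ) = a_0 a_0^-(1+δ)^q = a_0 a_q^-1 .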
We now prove (<ref>). The equation clearly holds for q = 0.
The case q ≥ 1 follows by induction combined with (<ref>) and (<ref>).
Equation (<ref>) then follows from (<ref>), (<ref>) and (<ref>).
We now prove estimate (<ref>) by induction.
We start by noticing that (<ref>) holds for q=0,1.
Now, assume that it is true for j ≤ q and using (<ref>), we have
B_q+1 = 2L_q a_q^δ - 5 A_0 a_0^- - 1 + δ/2 + δ a_q^ + 1 + δ/2 + δ a_q^δ + δ(1 + δ)/2 + δ.
Using (<ref>) and (<ref>) we continue
B_q+1 = B_q-1 a_q^δ (1- a_q-1^δ) - 8 a_q^δ A_0 a_0^- - 1 + δ/2 + δ a_q-1^ + δ + 1 + δ/2 + δ - 5 a_q^δ A_0 a_0^- - 1 + δ/2 + δ a_q^ + 1 + δ/2 + δ a_q^δ + δ(1 + δ)/2 + δ - δ
≥ a_q^δ[ B_q-1 (1- a_q-1^δ) - 16 A_0 B_0^-1 B_q-1 a_0^- a_q-1^ + δ - 10 A_0 B_0^-1 B_q-1 a_0^- a_q-1^ (1 + δ)^2]
≥ a_q^δ B_q-1( 1 - 2 a_q-1^δ)
where we used 40 a_0^δ≤ A_0^-1 B_0 in the last inequality.
Iterating this inequality yields
B_q ≥ B_0 ∏_j = 1^q2 a_2j - 1^δ( 1 - 2 a_2j - 2^δ) if q is even and B_q ≥ B_1 ∏_j = 1^q-1/2 a_2j^δ( 1 - 2 a_2j - 1^δ) if q is odd.
Now, using that 1- x ≤^-2 x for any x ∈ (0, 1/2) we have
∏_j = 1^q (1 - 2 a_j^δ) ≥∏_j = 1^q^- 4 a_j^δ = ^- 4 ∑_j = 1^q a_j^δ≥12 ∀q≥ 1.
Hence for q even
B_q ≥12 B_0 ∏_j = 1^q2 a_2j - 1^δ = 12 B_0 ∏_j = 1^q2 a_0^δ(1 + δ)^2j - 1 = 12 B_0 a_0^δ (1 + δ)^-1∑_j = 1^q2 (1 + δ)^2j = 12 B_0 a_0^δ (1 + δ)^-1∑_j = 1^q2 (1 + δ)^2j
≥12 B_0 a_0^δ (1 + δ)^-1 (1 + δ)^2 (1 + δ)^q - 1/(1 + δ)^2 - 1≥12 B_0 a_0^δ (1 + δ)^-1 (1 + δ)^2 (1 + δ)^q - 1/δ^2 + 2 δ≥12 B_0 a_0^- 1 + δ/2 + δ a_0^(1 + δ)^q+1/2 + δ = 12 B_0 a_0^- 1 + δ/2 + δ a_q^1 + δ/2 + δ
Similarly, when q is odd, with similar computations and using the fact that B_1 = B_0 a_0^- 1 + δ/2 + δ a_1^1+δ/2 + δ we can prove that
B_q ≥12 a_0^- 1 + δ/2 + δ a_q^1 + δ/2 + δ.
Hence, we have proved the lower bound of (<ref>).
The upper bound for B_q+1 follows from the observation that
B_q+1≤L_q/n_q+1≤B_q-1/2 n_q+1 = a_q^δ B_q-1
which in turn leads to
B_q ≤ B_0 ∏_j = 1^q2 a_2j - 1^δ if q is even and B_q ≤ B_1 ∏_j = 1^q-1/2 a_2j^δ if q is odd.
By means of explicit computations together with the facts that a_q = a_0^(1 + δ)^q and B_1 = B_0 a_0^- 1 + δ/2 + δ a_1^1+δ/2 + δ we obtain that
B_q ≤ B_0 a_0^- 1 + δ/2 + δ a_q^1 + δ/2 + δ ∀ q ≥ 1.
Thus, we have proved the upper bound of (<ref>).
Inequalities (<ref>) follow from (<ref>), (<ref>), (<ref>) and (<ref>).
Inequalities (<ref>) follows immediately from (<ref>) due to the definition of κ_q in (<ref>).
Finally, from (<ref>), (<ref>) and (<ref>) we obtain
v_q+1 = v_q a_q^- δ + δ/2+δ.
An explicit computation using that a_q= a_0^(1+δ)^q yields (<ref>).
§ CONSTRUCTION OF THE VELOCITY FIELD
In this section, we construct the velocity field u of Theorem <ref> as the limit of a sequence of smooth velocity fields { u_q }_q. The construction is carried out in several steps. First we construct a special rotating pipe in Subsection <ref>. We develop this construction further in Subsection <ref>. Then we build what will become the building block of our velocity fields in Subsection <ref>. In Subsection <ref>, we use this building block to build a sequence of L^∞ divergence free velocity fields { b_q }_q. Finally, we construct the sequence of velocity fields { u_q }_q as well as the velocity field u ∈ C^α in Subsection <ref> by a mollification argument.
§.§ Construction of an enlarging rotating pipe
The purpose of this subsection is to construct a velocity field whose integral curves make a 90 degree rotation and the entering velocity is different from the exiting velocity (see Figure <ref>).
Let 0 < r < R < ∞ and v > 0, λ≥ 1. Then there exists an autonomous velocity field u = u_r, R, λ, v∈ BV([0, λ R] × [0, R]) such that
* u = v ^1 |_{ 0 }× [r,R] - v/λ^1 |_ [λ r, λ R] ×{ 0 };
* the trace of u on { 0 }× [0,R] is given by
u(0,y) =
(v,0) if y ∈ [r, R]
(0,0) otherwise.
* the trace of u on [0,λ R] ×{ 0 } is given by
u(x,0) =
(0,-v/λ) if x ∈ [λ r, λ R]
(0,0) otherwise.
* there exists a constant C > 0 such that for any 0 < ε < r/2 and any integral curve γ [0,T] → [0, λ R] × [0, R] starting from a point in { 0 }× [r + ε, R - ε] with T such that γ(T) ∈ [0,λ R] ×{ 0 } we have
∫_0^T ∇ u _L^∞(B_ε(γ(t))) dt ≤ C.
Moreover, for extensions and subsequent mollifications of this velocity field[By extensions, we mean that we extend u to the whole ^2 and set u = (v, 0) if (x,y) ∈ (- ∞, 0) × [0, R], u = (0, - v/λ) if (x,y) ∈ [0, λ R] × (- ∞, 0) and u = (0,0) if (x,y) ∉[0, λ R] × [0,R] ∪ (- ∞, 0) × [0, R] ∪ [0, λ R] × (- ∞, 0).] we have
* there exists a constant C > 0 such that for any ε, δ> 0 satisfying the inequality
0 < 5 λδ < ε < r/2,
it holds that
any integral curve γ [0,T] →^2 of u_δ starting from a point in { - δ}× [r + 3ε , R - 3ε] with T such that γ(T) ∈ [0,λ R] ×{ - δ} satisfies
∫_0^T ∇ u_δ_L^∞(B_ε(γ(t))) dt ≤ C.
Let 0 < r < R < ∞ and v, λ∈ as in the statement of the lemma.
Let 0 < r^' < R^' and α∈ be real numbers (to be selected depending on r, R, v and λ) and define a velocity field u_α [0, R^']^2 →^2 as
u_α(x,y) =
α/√(x^2 + y^2) (y, -x) if (r^')^2 ≤ x^2 + y^2 ≤ (R^')^2;
(0,0) otherwise.
We note that for any (x,y) ∈ [0, R^']^2 such that (r^')^2 < x^2 + y^2 < (R^')^2 we have
|u_α(x, y)| = α and |∇ u_α(x,y)| = α/|(x,y)|.
We will then rescale this velocity field in a suitable way (and then select α, r^' and R^' depending on r, R, v and λ) to obtain the desired velocity field. Before doing this, let us make a few observations.
Firstly, we note that the distributional divergence of u_α is given by
u_α = α |_{ 0 }× [r^',R^'] - α |_ [r^', R^'] ×{ 0 }.
Secondly, let γ [0,T] → [0, R^']^2 be any integral curve starting from a point in { 0 }× [r^',R^'] with T such that γ(T) ∈ [r^', R^'] ×{ 0 }. Then, a computation yields:
T = π |γ(0)|/(2 α) and ∫_0^T |∇ u_α(γ(t))| dt = α∫_0^T 1/|γ(t)| dt = π/2.
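Indeed, this can be checked as follows: along such a curve |γ(t)| ≡ |γ(0)| and |γ'(t)| = |u_α(γ(t))| = α, so γ traverses a quarter of a circle of radius |γ(0)|; hence
T = (π |γ(0)|/2)/α = π |γ(0)|/(2 α) and α∫_0^T dt/|γ(t)| = α T/|γ(0)| = π/2 .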
More precisely, we can prove that
∇ u_α_L^∞(B_ε(γ(t)))≤ 2 α/|γ(t)|
under the assumption that ε < r^'/2.
Therefore, under this assumption, we obtain
∫_0^T ∇ u_α_L^∞(B_ε(γ(t))) dt ≤ 2 α∫_0^T 1/|γ(t)| dt ≤π.
We now rescale the velocity field.
Define the matrix
A =
[ λ^1/2 0; 0 λ^- 1/2; ]∈^2 × 2
and define w_α [0, λ^1/2 R^'] × [0, λ^- 1/2 R^'] →^2 as
w_α(x,y) = A u_α (A^-1 (x,y)).
We get that
∇ w_α(x,y) = A ∇ u_α (A^-1(x,y)) A^-1
and hence, taking the trace,
div w_α(x,y) = tr (∇ w_α)(x,y) = tr (∇ u_α)(A^-1(x,y)) = div u_α(A^-1(x,y)) = 0
for all (x,y) ∈ (0, λ^1/2 R^') × (0, λ^- 1/2 R^').
Now select R^' = λ^1/2 R and r^' = λ^1/2 r and fix α = v/√(λ). Then w_α is the desired velocity field, which we from now on denote by u. We already know that <ref>, <ref> and <ref> are fulfilled. We now verify <ref>.
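Before doing so, let us record why the choice of R^', r^' and α produces the traces prescribed in <ref> and <ref> (a direct check). For y ∈ [r, R] we have λ^1/2 y ∈ [r^', R^'], hence
w_α(0,y) = A u_α (0, λ^1/2 y) = A (α, 0) = (λ^1/2α, 0) = (v, 0) ,
while for x ∈ [λ r, λ R] we have λ^-1/2 x ∈ [r^', R^'], hence
w_α(x,0) = A u_α (λ^-1/2 x, 0) = A (0, -α) = (0, - λ^-1/2α) = (0, - v/λ) .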
Let η [0,T] → [0, λ R] × [0, R] be an integral curve starting from a point on { 0 }× [r,R] of u with T such that η(T) ∈ [λ r, λ R] ×{ 0 }.
Our goal is to estimate
∫_0^T ∇ u _L^∞(B_(η(t))) dt.
It is not difficult to see that γ(t) := A^-1η(t) is an integral curve of u_α and hence we can apply the observation we already made for the velocity field u_α. Indeed, we find
∫_0^T ∇ u _L^∞(B_ε(η(t))) dt = ∫_0^T ∇ w_α_L^∞(B_ε(A γ (t))) dt
≤∫_0^T ∇ u_α_L^∞(B_λ^1/2ε(γ (t))) dt ≤π
under the assumption that λ^1/2ε < r^'/2 which is equivalent to ε < r/2. Hence we have proved <ref> and the proof for the velocity field u is finished.
We now treat the case of the mollified velocity field. Let , δ > 0 be real numbers such that (<ref>) holds and extend u to ^2 so that
u(x,y) = ( v 1_[r,R](y), 0 ) if x < 0 and u(x,y) = ( 0, - v/λ1_[λ r, λ R](x) ) if y < 0.
Then, consider the mollified velocity field u_δ = u ⋆ρ_δ. Let (x_0, y_0) ∈{ - δ}× [r, R] and let γ respectively γ_δ be two integral curves starting from (x_0, y_0) with respect to u respectively u_δ.
Geometric arguments yield
|γ(t) - γ_δ(t)| ≤ 4 λδ ∀ t ≥ 0.
Then
∫_0^T∇ u_δ_L^∞(B_ε(γ_δ(t))) dt ≤∫_0^T∇ u _L^∞(B_ε + δ(γ_δ(t))) dt
≤∫_0^T∇ u _L^∞(B_ε + 5 λδ(γ(t))) dt
≤∫_0^T∇ u _L^∞(B_2ε (γ(t))) dt
≤ C
where the last inequality follows from <ref>. This proves <ref> and finishes the proof of Lemma <ref>.
In the remainder of the paper, when using the velocity fields from Lemma <ref> we will always use R = 2r.
§.§ Construction of a twice rotating and enlarging pipe
Thanks to the previous construction of a rotating pipe, it is possible to construct a pipe with a velocity field which starts out horizontal with magnitude v in a pipe of width A.
Then, it turns and becomes a vertical pipe of width A^', where the magnitude of the velocity field is v^'. To ensure the divergence-free condition we have the relation v^' = v A/A^'.
Afterwards, the pipe again turns and becomes horizontal with width A and the magnitude of the velocity field is again v (see Figure <ref>).
§.§ Building blocks: Branching-merging pipe
In this subsection, we will use the velocity field of the previous subsection in order to build a velocity field whose fundamental property is that it starts out as a unique pipe of width A and velocity v and then branches into 2n (with n large) smaller pipes of width
Ã = A/(2n) ,
and each of these thin pipes then enlarges to width A^', with a correspondingly reduced intensity of the velocity given by
v^' = v Ã/A^'≪ v .
Afterwards, all these smaller pipes shrink back into thin pipes of width Ã with corresponding velocity v and merge back into one pipe of width A as in Figure <ref>.
The quantity L corresponds to the length of the whole construction, (A+ B) corresponds to the width of the whole construction
and B^' corresponds to the distance in between smaller pipes. Note that these parameters are linked by the following relation:
L = n A^' + n B^' + 2 A.
Moreover, note that the length of the thinner pipes is given in terms of B and A as
(B - 3 A)/2.
This velocity field is our building block, needed in Subsection <ref>, and we denote it as
W = W_L, A, B, A^', B^', n, v.
We will regularly use rotated (by some multiple of π/2) versions of this velocity field. This will always be clear from the context and the properties we would like to achieve, hence we will slightly abuse the notation using always the same notation independently of whether the velocity field is rotated or not. With the twice rotating enlarging pipe given in Section <ref> we can indeed construct our building block W.
The key properties of W are stated in the following lemma.
Let n ≥ 10 be an integer and let A, B, L, A^', B^', v ∈ (0,1) be real numbers such that
A ≪ B, A^'≪ B^', and L = n A^' + n B^' + 2 A.
Let L^'∈ (0,1) be an any real number such that
L^' < B - 3 A - A/2.
Then there exists W = W_L, A, B, A^', B^', n, v : R = [0,L] × [- A+ B/2, A+ B/2] →^2 with the following properties:
* W _L^∞ = v.
* W = (W 1_R) = v ^1 |_{ 0 }×[ - A/2, A/2] - v ^1 |_{ L }×[ - A/2, A/2].
* There are 2n rectangles { R_i }_i =1^2n such that, up to a rotation of π/2 or -π / 2 and a translation,
R_i = [0, L^'] ×[ - A^' + B^'/2, A^' + B^'/2]
and in the coordinates of R_i
W (x, y) = [ v^'1_[- A^'/2, A^'/2](y); 0 ] ∀ (x, y) ∈ R_i ,
where v^' = v A/(2 n A^').
* It holds that W _L^∞ (R_i)≤ v^' and in the coordinates of R_i
(W 1_R_i) = v^'^1 |_{ 0 }×[ - A^'/2, A^'/2] - v^'^1 |_{ L^'}×[ - A^'/2, A^'/2]
for any i =1 , ... , 2n.
* The collection of rectangles { R_i }_i = 1^2n can be reordered so that
R_i = R_1 + (i-1)(A^' + B^', 0) and R_i+n = R_n+1 + (i-1)(A^' + B^', 0) ∀ i = 1, …, n.
§.§ Construction of the L^∞ velocity fields
In this subsection, we use the velocity field from the previous subsection as a building block in the construction of a sequence of velocity fields { b_q }_q ≥ 1. The sequence is constructed iteratively by adding more and more suitably rescaled copies of branching-merging pipes at each step. To do this we simultaneously define a sequence of sets of rectangles {ℛ_q }_q, where
ℛ_q = { R_1, … , R_N_q} and R_i is a rectangle for each i = 1, …, N_q. Given the velocity field b_q and the collection of mutually disjoint rectangles ℛ_q, we define b_q+1 as b_q, redefining it on R_i as a branching merging pipe keeping the divergence free property.
For the construction we identify ^2 with [0,1]^2.
Recall the parameters selected in Section <ref>, that satisfy (<ref>) and will be the parameters involved in our construction. Firstly, we define b_0 = ∇^⊥ H_0 ^2 →^2 such that for all (x,y) ∈ (A_0, 1 - A_0)^2
b_0(x,y) =
(v_0, 0) if |y - 1/2| < A_0/2;
(0,0) otherwise.
In other words, b_0 is a straight pipe of velocity v_0 and width A_0 inside the square (A_0, 1 - A_0)^2. We now define the set of rectangles at step 0 as ℛ_0 = { (A_0, 1 - A_0)^2 }.
We now define b_1 as b_0 and redefine it on each rectangle of the collection ℛ_0 to be a branching-merging pipe constructed in Subsection <ref> with the parameters L_0, A_0, B_0, A_1, B_1, n_1, v_0 from Section <ref>. In particular, on each rectangle of the collection ℛ_0 (which in this case has a single element) we replace the straight pipe given by b_0 with a consistent branching-merging pipe.
Then we note that for b_1 there exists a collection of 2 n_1 = N_1 rectangles { R_i }_i = 1^N_1 so that up to rotations and translations
R_i = [0, L_1] ×[- A_1/2 - B_1/2, A_1/2 + B_1/2]
and in these rectangles the velocity field b_1 is composed by a single straight pipe of width A_1 (see Figure <ref>).
Then we define this new collection of rectangles ℛ_1 = { R_1, … , R_N_1 }.
To construct b_q+1 from b_q for any q, the procedure is as follows. Assume that there exists a collection of rectangles ℛ_q such that #ℛ_q = N_q and for any R ∈ℛ_q it holds that up to a rotation and a translation
R = [0, L_q] ×[ - A_q + B_q/2, A_q + B_q/2]
and in the coordinates of R
b_q(x,y) =
[ v_q 1_[- A_q /2, A_q/2](y); 0 ] ∀ (x,y) ∈ R.
Now we define b_q+1 by redefining the velocity field b_q on each R ∈ℛ_q. Precisely we define b_q+1 to be a branching-merging pipe with parameters L_q, A_q, B_q, A_q+1, B_q+1, n_q+1, v_q i.e.
b_q+1 (x,y) = W_L_q, A_q, B_q, A_q+1, B_q+1, n_q+1, v_q(x,y)
in the coordinates of R ∈ℛ_q. Thanks to Lemma <ref> and (<ref>) we have that b_q+1 is still divergence free.
In a short-formula, we have
b_q+1 (x,y) =
O_q^T W_L_q, A_q, B_q, A_q+1, B_q+1, n_q+1, v_q (τ_q ∘ O_q (x,y))
∀ (x,y) ∈⋃ℛ_q
b_q (x,y) otherwise
where O_q is the identity or a rotation by a multiple of π/2 and τ_q is a translation so that b_q+1 remains divergence free. We now have to define the new collection ℛ_q+1 of rectangles.
Thanks to Lemma <ref>, for each R ∈ℛ_q, there exists a collection ℛ_q+1(R) of 2 n_q+1 mutually disjoint rectangles such that for all R∈ℛ_q+1(R) we have that R⊆ R and
R = [0, L_q+1] ×[ - A_q+1 + B_q+1/2, A_q+1 + B_q+1/2]
up to a translation and a rotation of π/2 and in these coordinates
b_q+1(x,y) =
[ v_q+11_[- A_q+1/2, A_q+1/2](y); 0 ] ∀ (x,y) ∈R.
Note that thanks to Lemma <ref> #ℛ_q+1(R) = 2 n_q+1 for all R ∈ℛ_q.
Finally, we define
ℛ_q+1 = ⋃_R ∈ℛ_qℛ_q+1(R) ,
hence #ℛ_q+1 = 2n_q+1· N_q = N_q+1.
Before proving further properties of the velocity fields, we define some additional objects useful for the proof of Theorem <ref>. First note that for any q ≥ 2 we can define the ancestor map 𝔞_q : ℛ_q→ℛ_q-1 such that 𝔞_q(R) is the unique rectangle belonging to ℛ_q-1 with R ⊆𝔞_q(R) ∈ℛ_q-1.
Let q ≥ 2 be arbitrary and let R ∈ℛ_q-1 be arbitrary.
Then we define ℰ_q(R) ⊆ℛ_q(R) by removing the two first and the two last rectangles in the branching structure (see Figure <ref>).
Hence
#ℰ_q(R) = 2n_q - 4 ∀ R ∈ℛ_q-1, ∀ q ≥ 2.
We define ℰ_1 ⊆ℛ_1 to be the sub-collection of rectangles obtained by removing the two first and the two last rectangles in the branching structure of b_1. Hence #ℰ_1 = #ℛ_1 - 4 = 2n_1 - 4 = N_1 - 4.
Then, we define ℰ_q inductively by the following relation
ℰ_q = ⋃_R ∈ℰ_q-1ℰ_q(R) ∀ q ≥ 2.
Hence
#ℰ_q = ∏_j = 1^q (2 n_j - 4) = Ñ_q ≥N_q/2.
Again, let q ≥ 2 be arbitrary and let R ∈ℛ_q-1. Define ℳ_q(R) ⊆ℛ_q(R) by removing on each side of the branching structure the ⌈n_q/4⌉ first and ⌈n_q/4⌉ last rectangles from ℛ_q(R).
Hence #ℳ_q(R) = 2 n_q - 4 ⌈n_q/4⌉≥n_q/2.
Then set
ℳ_q = ⋃_R ∈ℛ_q-1ℳ_q(R)
and note that #ℳ_q ≥N_q/2.
Finally, we will define a sub-collection of rectangles denoted by 𝒢_q ⊆ℛ_q. The meaning and usefulness of this sub-collection is not clear at this stage; all we can say is that it is a sub-collection of “good” rectangles. Precisely, we define 𝒢_q as
𝒢_q = { R ∈ℰ_q ∩ℳ_q :
𝔞_q(R) ∈ℳ_q-1, (𝔞_q-1∘𝔞_q)(R) ∈ℳ_q-2,
(𝔞_q-2∘𝔞_q-1∘𝔞_q)(R) ∈ℳ_q-3, (𝔞_q-3∘𝔞_q-2∘𝔞_q-1∘𝔞_q)(R) ∈ℳ_q-4
}.
We observe that
#𝒢_q ≥ 4^-5#ℰ_q ≥ 4^-5Ñ_q ≥ 2^-11 N_q.
For the sequence of velocity fields { b_q }_q ≥ 1 it holds that:
b_q - b_q-1_L^∞(^2) ≤ 2 v_q-1≤ C a_q-1^- + 1/2 + δ
b_q - b_q-1_BV(^2) ≤ C a_q-1^- - 2 δ
where C is a constant that depends only on a_0.
By the construction above, there exists a collection ℛ_q-1 of N_q-1 rectangles such that up to a translation and a rotation
R = [0, L_q-1] ×[ - A_q-1 + B_q-12, A_q-1 + B_q - 12] ∀ R ∈ℛ_q-1.
and the velocity field b_q-1 is a straight pipe whenever restricted to any of the rectangles in ℛ_q-1.
Equivalent to the above definition we have
b_q = b_q-1 - ∑_R ∈ℛ_q-1 b_q-11_R + ∑_R ∈ℛ_q-1 W_L_q-1, A_q-1, B_q-1, A_q, B_q, n_q, v_q-1 ,
where the first sum removes the straight pipes and the second sum inserts the new branching-merging pipes.
By construction b_q-1_L^∞ (R)≤ v_q-1 for any R ∈ℛ_q-1 and the new velocities of the branching-merging pipes are bounded by v_q thanks to Lemma <ref>. As a consequence,
b_q - b_q-1_L^∞(^2)≤ 2 v_q-1.
In order to prove (<ref>), we observe that b_q - b_q-1 is identically zero except in the N_q-1 rectangles belonging to ℛ_q-1. It therefore suffices to bound b_q _BV(R_i) and b_q-1_BV(R_i). From the construction, we directly see that
b_q-1_BV(R) ≤ 2 L_q-1 v_q-1;
b_q_BV(R) ≤ 2 n_q (2 L_q-1 + 4 L_q) v_q-1.
for all R ∈ℛ_q-1.
Hence
b_q - b_q-1_BV(^2)≤ N_q-1[ 2 L_q-1 v_q-1 + 2 n_q (2 L_q-1 + 4 L_q) v_q-1] ≤ C a_q-1^- - 2 δ
where C is a constant that depends only on a_0. This ends the proof of Lemma <ref>.
In the next lemma, we recall some of the most important properties of the velocity fields { b_q }_q ≥ 1.
The sequence of velocity fields { b_q }_q ≥ 1 constructed above satisfies the following properties for all q ≥ 2
*
For all R ∈ℛ_q-1 it holds that up to a rotation and a translation
R = [0, L_q-1] ×[ - A_q-1 + B_q-12, A_q-1 + B_q-12]
and in the coordinates of R we have
b_q-1(x,y)
=
[ v_q-11_[- A_q-1/2, A_q-1/2](y); 0 ] ∀ (x,y) ∈ R.
* Each R ∈ℛ_q-1 contains a collection of 2 n_q rectangles given by ℛ_q(R) such that for all R∈ℛ_q(R)
R = [0, L_q] ×[ - A_q + B_q2, A_q + B_q2]
up to a rotation and a translation and in these coordinates
b_q(x,y)
=
[ v_q1_[- A_q/2, A_q/2](y); 0 ] ∀ (x,y) ∈R.
* Each R ∈ℛ_q contains a collection of rectangles denoted by ℱ_q (R) of cardinality #ℱ_q (R) ≤ 6 n_q+1 such that for all R∈ℱ_q (R) up to a rotation and a translation
R = [0, L_q+1] ×[ - A_q+1, A_q+1]
and
b_q+2^(2)_L^∞(R)≤ v_q+1 ∀R∈ℱ_q (R) and b_q+2^(2)≡ 0 in R ∖⋃_R∈ℱ_q (R)R.
Moreover,
inf_R_1 ≠ R_2 ∈ℱ_q (R) dist ( R_1, R_2 ) ≥B_q+1/4.
The points <ref> and <ref> follow immediately from the construction so we only need to prove <ref>.
In order to prove <ref>, we note that each R ∈ℛ_q contains a collection of 2 n_q+1 rectangles given by ℛ_q+1(R) such that for all R∈ℛ_q+1(R)
R = [0, L_q+1] ×[ - A_q+1 + B_q+1/2, A_q+1 + B_q+1/2]
and in these coordinates
b_q+1(x,y)
=
[ v_q+11_[ - A_q+1/2, A_q+1/2](y); 0 ] ∀ (x,y) ∈R.
In order to construct b_q+2 inside R from b_q+1 we replace the straight pipes in the rectangles of ℛ_q+1(R) by branching-merging pipes. For any R ∈ℛ_q and any R∈ℛ_q+1(R) we define (in the coordinates of R)
F_R, R^1 = [0, L_q+1] ×[ - A_q+1 + B_q+1/2 , - B_q+1/2];
F_R, R^2 = [0, L_q+1] ×[ - A_q+1, A_q+1];
F_R, R^3 = [0, L_q+1] ×[ B_q+1/2 , A_q+1 + B_q+1/2].
From the construction, it is clear that
supp ( b_q+2^(2)) ∩ (R ∩R) ⊆⋃_ξ = 1^3 F_R, R^ξ.
Now note that F_R, R^2 is a rectangle of length L_q+1 and width 2 A_q+1 whereas F_R, R^1 and F_R, R^3 are rectangles of length L_q+1 and width A_q+1. Moreover, F_R, R^1 and F_R, R^3 are situated at the boundary of R. Hence, by merging sets F_R, R^1 and F_R, R^3 where R∈ℛ_q+1(R) with each other when adjacent we obtain 2 n_q+1 - 1 rectangles of length L_q+1 and width 2 A_q+1 as well as 2 rectangles of length L_q+1 and width A_q+1. By including these two rectangles in larger rectangles and adding to this collection the collection of rectangles given by { F_R, R^2 }_R∈ℛ_q+1(R) we obtain a collection denoted by ℱ_q+1(R) which is of cardinality #ℱ_q+1(R) = 4 n_q+1 + 1 ≤ 6 n_q+1. This ends the proof of <ref> and Lemma <ref>.
Before stating the last lemma of this subsection we will define the sets { D_q }_q ≥ 1. Let q ≥ 5 be arbitrary and define for each R ∈𝒢_q the set
D_q,R = [ 0 , L_q/3] ×[ - A_q/2 - B_q/50 , - A_q/2 - B_q/100] ⊆ R
in the coordinates of R.
Then, we define D_q (see Figure <ref>) as
D_q = ⋃_R ∈𝒢_q D_q,R.
We can now prove the following lemma.
Let D_q ⊂^2 constructed above and 𝒢_q be the collection defined in (<ref>). Then,
the following properties hold true
* inf_q ≥ 1^2(D_q) > 0;
* for all R ∈𝒢_q, we have b_q+2^(1)_L^∞(D_q,R)≤ v_q+2 and in the coordinates of R we have
b_q+2(x,y) = b_q+2(x + A_q+2 + B_q+2, y) ∀ (x,y) ∈ D_q,R[ B_q/150].
Point <ref> follows from the fact that
^2 (D_q) = #𝒢_q ·B_q/100·L_q/3(<ref>)≥ 2^-11 N_q ·B_q L_q/300≥ c > 0
for some constant c depending only on a_0. Point <ref> is a direct consequence of the construction.
We end the subsection by defining some sets which will be useful for the proof of Theorem <ref>. Let q ≥ 2 be arbitrary. For any R ∈ℰ_q-1, up to a rotation and a translation
R = [0, L_q-1] ×[ - A_q-1 + B_q-1/2, A_q-1 + B_q-1/2].
In these coordinates, we define the sets
C_q-1, R = { 0 }×[ - A_q-1/4, A_q-1/4] ⊆ R, Γ_q-1, R = { 0 }×[ - A_q-1/2 + A_q, A_q-1/2 - A_q ] ⊆ R.
and
S_q-1, R = [0, L_q-1] ×[- A_q-1/2 - ℓ_q-1, - A_q-1/2 + ℓ_q-1] ∪ [0, L_q-1] ×[A_q-1/2 - ℓ_q-1, A_q-1/2 + ℓ_q-1] ⊆ R
where ℓ_q-1 is a mollification parameter which will be explicitly defined at beginning of Subsection <ref>. Note that this corresponds to the set marked in orange in Figure <ref>.
Then, we define
C_q-1 = ⋃_R ∈ℰ_q-1 C_q-1, R, Γ_q-1 = ⋃_R ∈ℰ_q-1Γ_q-1, R and S_q-1 = ⋃_R ∈ℰ_q-1 S_q-1, R.
§.§ Construction of C^α velocity fields u_q
The purpose of this subsection is to construct a sequence of C^α-velocity fields { u_q }_q ≥ 1 whose limit u ∈ C^α(^2, ^2) will be the velocity field in Theorem <ref>. We start by defining a sequence of mollification parameters {ℓ_q }_q ≥ 1 as
ℓ_q = a_q^ε (v_q+1/v_q) A_q+2.
Then we define { w_q }_q ≥ 1 as the sequence of velocity fields given by
w_q =
b_q - b_q-4 if q ∈ 4ℕ;
0 otherwise.
The sequence { u_q }_q ≥ 0 is then given by
u_q = b_0 ⋆φ_ℓ_0 + ∑_j = 1^q w_j ⋆φ_ℓ_j.
Of course, one could have selected w_q to be b_q - b_q-1 for each q instead of (<ref>). This choice however would destroy useful geometric features which simplify the proof of Theorem <ref> greatly. We now prove several lemmas regarding the properties of the sequence { u_q }_q ≥ 0.
Let { u_q }_q ≥ 0⊂ C^∞(^2; ^2) be the sequence of velocity fields defined above in (<ref>), then there exist a constant C>0 and ε̃>0 independent of q ∈ℕ such that the following inequalities hold for all q ≥ 1
u_q+1 - u_q _C^α(^2; ^2)≤ C a_q^ε̃;
u_q ·∇ u_q - u_q-1·∇ u_q-1_C^α≤ C a_q^ε̃;
κ_q Δ u_q+2_C^α≤ C q a_q^ε̃.
By (<ref>), w_q _L^∞(^2; ^2)≲ v_q-4 so that
w_q ⋆φ_ℓ_q_C^0(^2; ^2) ≲ v_q-4 ,
∇ ( w_q ⋆φ_ℓ_q ) _C^0(^2; ^2) ≲ v_q-4ℓ_q^-1.
Moreover, Equations (<ref>) imply that there exists an ε̃ > 0 such that
12+δ - - α (1 + δ)^4 ( 3 + 3 δ + (1+δ)^2/2 + δ) > (1 + δ)^3
1 - (2 + δ) - (1 + α) (1 + δ)^5 ( 3 + 3 δ + (1+δ)^2/2 + δ) > (1 + δ)^5
1 + 2(1 + δ)^52 + δ - - (2 + α) (1 + δ)^4 ( 3 + 3 δ + (1+δ)^2/2 + δ) > (1 + δ)^4
From the definition of ℓ_q, we find ℓ_q ≳ a_q^3 + 3 δ + (1 + δ)^2/2 + δ
and therefore
w_q ⋆φ_ℓ_q_C^α(^2; ^2) ≲ v_q-4ℓ_q^- α≲ a_q-4^- + 1/2+ δ - α (1 + δ)^4 ( 3 + 3 δ + (1 + δ)^2/2 + δ)≲ a_q-1^ε̃ ,
Hence,
u_q+1 - u_q _C^α(^2; ^2)≤ w_q+1⋆φ_ℓ_q+1_C^α(^2; ^2)≲ a_q^ε̃.
This proves (<ref>).
To prove (<ref>) we observe that (<ref>) implies
u_q ·∇ u_q = ∑_i, j =0^q w_i ⋆φ_ℓ_i·∇ (w_j ⋆φ_ℓ_j ) = ∑_i =0^q w_i ⋆φ_ℓ_i·∇ (w_i ⋆φ_ℓ_i ) + ∑_i =0^q-1w_i ⋆φ_ℓ_i·∇ (w_i+1⋆φ_ℓ_i+1 )
+ ∑_i =0^q-1w_i+1⋆φ_ℓ_i+1·∇ (w_i⋆φ_ℓ_i )
where we use the shorthand notation w_0 := b_0 and for the last equality we used that
supp (w_i ⋆φ_ℓ_i ) ∩ supp ( w_j ⋆φ_ℓ_j ) = ∅ for any |i-j| > 1
which follows from the construction.
Therefore,
(u_q ·∇) u_q - (u_q-1·∇) u_q-1 = ((w_q⋆φ_ℓ_q) ·∇) (w_q⋆φ_ℓ_q) + ((w_q-1⋆φ_ℓ_q-1) ·∇) (w_q⋆φ_ℓ_q)
+ ((w_q⋆φ_ℓ_q) ·∇) (w_q-1⋆φ_ℓ_q-1)
so that
(u_q ·∇) u_q - (u_q-1·∇) u_q-1_C^k ≤ w_q ⋆φ_ℓ_q_C^k+1 w_q ⋆φ_ℓ_q_C^0 + w_q ⋆φ_ℓ_q_C^k+1 w_q-1⋆φ_ℓ_q-1_C^0
+ w_q-1⋆φ_ℓ_q-1_C^k+1 w_q⋆φ_ℓ_q_C^0
≲ v_q-4^2 ℓ_q^- k-1 + v_q-4 v_q-5ℓ_q^-k-1 + v_q-4 v_q-5ℓ_q-1^-k-1
≲ v_q-4 v_q-5ℓ_q^- k -1 ,
where the last inequality follows from the choice of the parameters (<ref>) and (<ref>). Thanks to interpolation inequalities, we have
u_q ·∇ u_q - u_q-1·∇ u_q-1_C^α≲ v_q-4 v_q-5ℓ_q^-1 - α≲ a_q-5^1 - (2 + δ) - (1 + α)(1 + δ)^5 ( 3 + 3 δ + (1 + δ)^2/2 + δ)≲ a_q^.
This proves (<ref>).
Finally, we prove (<ref>).
For all q ≥ 1 and all k ≥ 1, we have
u_q _C^k≤∑_j = 0^q w_j ⋆φ_ℓ_j_C^k≲∑_j = 0^qℓ_j^-k v_j-4≲ q ℓ_q^-k v_q-4
since q ↦ℓ_q^-k v_q-4 is increasing.
Hence by interpolation
κ_q Δ u_q _C^α≲κ_q q ℓ_q^-2 - α v_q-4≲ q a_q-4^- (2 + α)(1 + δ)^4 ( 3 + 3 δ + (1+δ)^2/2 + δ) - + 1 + 2 (1 + δ)^5/2 + δ≲ q a_q^
where we used (<ref>) and (<ref>).
Thanks to this lemma, it is clear that { u_q }_q ≥ 0 is a Cauchy sequence in C^α(^2; ^2) and hence we may define
u = lim_q →∞ u_q ∈ C^α(^2; ^2).
In particular, a direct implication of this definition is that for any q ∈, we have
u_q+4 - u_q _L^∞≲ v_q .
From Lemma <ref> and definition of u_q as in (<ref>) the following lemma follows.
The sequence of velocity fields { u_q }_q ≥ 0 constructed above satisfies the following properties for all q ∈{ 2 + 4 k : k }:
*
For each R ∈ℛ_q-1 and each R∈ℛ_q(R) it holds that
u_q+2(x,y) =
[ v_q; 0; ] ∀ (x,y) ∈[0, L_q/2] ×[- A_q/5, A_q/5] ⊆R
in the coordinates of R.
*
For each R ∈ℛ_q, the collection of rectangles ℱ_q(R) given by Item <ref> of Lemma <ref> has the property that
( u_q+2^(2)) ∩ R ⊆⋃_R∈ℱ_q(R)R
and
u_q+2^(2)_L^∞(R)≤ v_q+1 ∀R∈ℱ_q(R).
§.§ Stability results
In this subsection, we will establish stability results.
Precisely, we will conclude that stochastic backward trajectories starting from C_q-1 remain inside the pipe structure and eventually reach Γ_1. The proof is decomposed into two lemmas. Firstly, we show that any stochastic backward trajectory starting from a point in Γ_p reaches Γ_p-1 for all p ≤ q - 4.
Secondly, we prove that any backward stochastic trajectory starting from C_q-1 reaches Γ_q-4. The main ingredient in these proofs is Lemma <ref>. An extra technical point comes from how the velocity fields { u_q }_q ≥ 1 were constructed. Indeed, recall that b_q+1 is constructed from b_q by replacing straight pipes with branching-merging pipes. When constructing the velocity fields { u_q }_q ≥ 1 we mollify b_j - b_j-4 with different mollification parameters ℓ_j. Since the mollification parameters ℓ_q go to zero as q →∞, a rectangle R ∈ℛ_j-1 with (j-1) ∈ 4 does not simply contain a mollified branching-merging pipe, but also shear flows in the thin neighbourhoods given by S_j-1, R (see Figure <ref>, where this set is marked in orange). Hence, we have to take into account that these shear flows, whose velocity is bounded by v_j-1, intersect the pipes of width A_j perpendicularly. We refer to this as the intersection problem; it must be addressed in the stability results below.
Before stating the two lemmas, we introduce a subset Ω_T ⊆Ω defined by
Ω_T = {ω∈Ω : sup_t ∈ [0, T] | W_t - W_T| ≤ K }
where T and K are numbers still to be determined (they will be chosen at the end of Section <ref>).
Let q ∈ 2 + 4 and X_t,s^κ_q be the backward stochastic flow with respect to u_q+2 and noise parameter √(2 κ_q) starting from (x,y) ∈Γ_p for some p ≤ q-4. Let T_0 Ω→ [0,T] and consider the reverse stopping time τΩ→ [0, T_0] defined by
τ(ω) = sup{ s ∈ (0, T_0] : X_T_0,s^κ_q(x, y, ω) ∈Γ_p-1}∨ 0.
Then there exist three constants Q, C and C^' depending only on a_0 such that if q ≥ Q then
C a_p-1^3 δ≤ T_0(ω) - τ(ω) ≤ C^' a_p-1^ ∀ω∈Ω_T.
We fix an arbitrary ω∈Ω_T and write T_0 = T_0(ω).
Let t_min be the smallest time such that
X_T_0, t_min(x,y) ∈Γ_p - 1( A_p+1A_p/3 A_p).
and t_max be the largest time with the same property.
It is clear from the construction that
a_p-1^3 δ≲B_pv_p-1 - A_p+1A_p/3 A_p v_p-1≤ T_0 - t_max≤ T_0 - t_min≤L_p + A_p-1v_p + L_p-1v_p-1 + A_p+1A_p/3 A_p v_p-1≲ a_p-1^.
We have to consider two distinct cases: p ∈ 1 + 4 or p ∉1 + 4.
Case 1: (p ∉1 + 4).
By definition of the parameters, (see (<ref>), (<ref>))
A_p/A_pℓ_p ≤A_p+1A_p/3 A_p
Therefore, Lemma <ref> gives
∫_t_min^T_0∇ u_q+2_L^∞( B_A_p+1A_p/3 A_p(X_T_0,t(x,y)) ) dt ≤ C
where C is a universal constant given by Lemma <ref>.
Define the reverse stopping time τΩ→ [t_min, T_0] by
τ(ω) = sup{ t ∈ [t_min, T_0] : |X_T_0,t^κ_q(x,y,ω) - X_T_0,t(x,y)| > A_p+1A_p/3 A_p}∨ t_min.
Then, thanks to Lemma <ref>, for all ω∈Ω_T and all t ∈ [τ(ω), T_0]
|X_T_0, t^κ_q(x,y,ω) - X_T_0, t(x,y)| ≤ 2 B_q K exp( ∫_τ(ω)^T_0∇ u_q+2_L^∞( B_A_p+1A_p/3 A_p(X_T_0, s(x,y)) ) ds )
≤ 2 B_q K exp( ∫_t_min^T_0∇ u_q+2_L^∞( B_A_p+1A_p/3 A_p(X_T_0, s(x,y)) ) ds ) (<ref>)≤ 2 ^C B_q K.
If 2 ^C B_q K ≤A_p+1A_p/3 A_p then it immediately follows from the inequalities above that τ(ω) = t_min for all ω∈Ω_T. Hence
|X_T_0, t^κ_q(x,y,ω) - X_T_0, t(x,y)| ≤ 2 ^C B_q K ∀ω∈Ω_T, ∀ t ∈ [t_min, T_0].
From this, it immediately follows that (<ref>) holds for the stopping time τ. To finish the proof in this case, all we need to show is 2 ^C B_q K ≤A_p+1A_p/3 A_p, which thanks to (<ref>) and (<ref>) reduces to a_q^1+δ/2 + δ≪ a_p^δ a_p-1^ + δ + 1 + δ/2 + δ to reabsorb constants depending on a_0, C and K. This holds since p ≤ q-4 and (<ref>).
Case 2: (p ∈ 1 + 4).
In this case, we have to consider the so-called intersection problem.
We note that p ≤ q-5.
We start by studying the backward Lagrangian flow.
We will work in the coordinates of an arbitrary rectangle R ∈ℰ_p-1, see Figure <ref>.
Let t̄∈ [0, T_0] be the largest time such that X_T_0, t(x,y) ∉S_p-1, R(2 ℓ_p-1) for all t ∈ [0, t̄].
We refer to (<ref>) for the definition of the set S_p-1, R as well as to Figure <ref>.
Define the reverse stopping time τ̃Ω→ [0,T] as
τ̃(ω) = sup{ t ∈ [t̄,T_0] : |X_T_0,t^κ_q(x,y,ω) - X_T_0,t(x,y)| > A_p+1/10}.
It is clear that u_q+2^(2)(X_T_0,t^κ_q(x,y,ω)) = u_q+2^(2)(X_T_0,t(x,y)) for all t ∈ [τ̃(ω), T_0] and all ω∈Ω_T. From this we conclude that
|X_T_0, t^κ_q (2)(x,y,ω) - X_T_0, t^(2)(x,y)| ≤ B_q K
for all t ∈ [τ̃(ω), T_0] and all ω∈Ω_T.
For the first component, we observe that for all t ∈ [τ̃(ω), T_0] and all ω∈Ω_T
|X_T_0, t^κ_q (1)(x,y,ω) - X_T_0, t^(1)(x,y)| ≤∫_τ̃(ω)^T_0 |u_q+2^(1)(X_T_0, s^κ_q(x,y,ω) )| ds + ∫_τ̃(ω)^T_0 |u_q+2^(1)(X_T_0, s(x,y) )| ds + B_q K
≤6 ℓ_p-1/v_p v_p-1 + 2 ℓ_p-1/v_p v_p-1 + B_q K ≤A_p+1/20
where we used (<ref>) and took a_0 sufficiently small so that a_0^≤ K. Hence τ̃(ω) = t̄.
At this point we can finish the proof in the same way as in the first case.
Let q ∈ 2 + 4 and X_t,s^κ_q be the backward stochastic flow with respect to u_q+2 and noise parameter √(2 κ_q) starting from (x,y) ∈ C_q-1. Let T_0 Ω→ [0,T] and consider the stopping time τΩ→ [0, T_0] defined by
τ(ω) = sup{ s ∈ (0, T_0] : X_T_0,s^κ_q(x,y,ω) ∈Γ_q-4}
Then there exist three constants Q, C and C^' independent of q such that for all q ≥ Q
C a_q-4^3 δ≤ T_0(ω) - τ(ω) ≤ C^' a_q-4^ ∀ω∈Ω_T.
We fix an arbitrary ω∈Ω_T and write T_0 = T_0(ω).
The first challenge to overcome is the so-called intersection problem. By arguments similar to Case 2 in the proof of the previous lemma, we overcome the intersection problem because
B_q K v_q-2/v_q-1≤A_q-1/100
for q large enough.
The remainder is based on Lemma <ref> and Lemma <ref> in the same way as the proof of Proposition <ref>.
Let t_min be the smallest time so that
X_T_0, t_min(x,y) ∈Γ_q-4( A_q-1/100)
and let t_max be the largest time such that the same property holds.
In a similar way as in the previous proof, we have
a_q-3^3 δ≲ T_0 - t_max≤ T_0 - t_min≲ a_q-3^.
Since A_q-1/A_q-1ℓ_q-1≤A_q-1/100 for q large enough, Lemma <ref> applies and gives
∫_t_min^T_0∇ u_q+2_L^∞( B_A_q-1/100(X_T_0,t(x,y)) ) dt ≤ 3C.
where C is the universal constant coming from Lemma <ref>.
Define the stopping time τ̃Ω→ [t_min, T_0] by
τ̃(ω) = sup{ t ∈ [0, T_0] : |X_T_0,t^κ_q(x,y,ω) - X_T_0,t(x,y)| > A_q-1/100}∨ t_min.
Then, thanks to Lemma <ref>, for all ω∈Ω_T and all t ∈ [τ̃(ω), T_0]
|X_T_0,t^κ_q(x,y,ω) - X_T_0, t(x,y)| ≤ 2 B_q K exp( ∫_τ̃(ω)^T_0∇ u_q+2_L^∞( B_A_q-1/100(X_T_0, s(x,y)) ) ds )
≤ 2 B_q K exp( ∫_t_min^T_0∇ u_q+2_L^∞( B_A_q-1/100(X_T_0, s(x,y)) ) ds ) ≤ 2 ^3 C B_q K.
If 2 ^3 C B_q K ≤A_q-1/100 then it immediately follows from the inequalities above that τ̃(ω) = t_min for all ω∈Ω_T. Hence
|X_T_0, t^κ_q(x,y,ω) - X_T_0, t(x,y)| ≤ 2 ^3 C B_q K ∀ω∈Ω_T, ∀ t ∈ [t_min, T_0].
From this, it immediately follows that (<ref>) holds for the stopping time defined in the statement of the proposition. To finish the proof in this case, all we need to show is that 2 ^3 C B_q K ≤A_q-1/100 for q large enough which holds thanks to (<ref>).
The following corollary is an immediate consequence of the two previous lemmas.
Let q ∈ 2 + 4 with q ≥ 10, let j < q and X_t,s^κ_q be the backward stochastic flow with respect to u_q+2 and noise parameter √(2 κ_q) starting from (x,y) ∈ C_q-1. Let T_0 Ω→ [0,T] and consider the stopping time τΩ→ [0,T_0] defined by
τ(ω) = sup{ s ∈ [0,T_0] : X_T_0,s^κ_q(x,y,ω) ∈Γ_j}
Then there exist three positive constants Q, C and C^' independent of q such that for all q ≥ Q
C a_j^3 δ≤ T_0(ω) - τ(ω) ≤ C^' a_j^ ∀ω∈Ω_T.
In particular, for all ω∈Ω_T, dist(X_T_0,0^κ_q(x,y,ω), (x,y) ) ≥ A_0.
§ PROOF OF THEOREM <REF>
In this section we prove that all the assumptions of Proposition <ref> are satisfied for the sequence of velocity fields { u_q+2}_q ∈ 2+4 and the sequence of diffusivity parameters {κ_q }_q ∈ 2+4.
Recall the definition of the dissipative sets D_q in Item <ref> in Lemma <ref>. We stress again that
inf_q ≥ 1^2(D_q) >0.
We consider X_t,s the backward flow of u_q+2 with noise parameter √(2 κ_q) defined in Section <ref>. Firstly, we notice that
u - u_q+2_L^2^2/κ_q ≲v_q+2^2/κ_q≲ a_q+1^- 2 (1 + δ) + 2 δ/2 + δ≲ a_q+1^→ 0 ,
thanks to Equation (<ref>), Equations (<ref>) and (<ref>) in Lemma <ref> and the fact that 0< ≪δ < 1.
Before stating the main claim, we take T=1, but we could have considered any fixed positive time.
Our goal is now to prove the following:
Main Claim:
There exist constants c_1, c_2, Q > 0 depending only on a_0 such that for any q ≥ Q and any x ∈ D_q, there exist two sets Ω_q,1,x, Ω_q,2,x⊆Ω for which:
* min{ℙ(Ω_q,1,x), ℙ(Ω_q,2,x) }≥ c_1 for all q ≥ Q;
* for all q ≥ Q, we have
X_T,0(x, ω) ∈ B_c_2(x) ∀ω∈Ω_q,1,x;
and
X_T,0(x, ω) ∉B_2 c_2(x) ∀ω∈Ω_q,2,x.
Assuming that this last claim holds, the proof follows from Proposition <ref>. Indeed, the main claim implies that
max( inf_ω∈Ω_q,1,x |X_T,0(x, ω) - 𝔼[X_T,0(x, ·)]| , inf_ω∈Ω_q,2,x |X_T,0(x, ω) - 𝔼[X_T,0(x, ·)]| ) ≥c_2/2 ∀ x ∈ D_q
which in turn leads to
inf_x ∈ D_q𝔼[ |X_T,0(x, ω) - 𝔼[X_T,0(x, ·)]|^2 ] ≥c_1 c_2^2/4.
Since this last inequality holds for all q ≥ Q, all the assumptions of Proposition <ref> are satisfied and the proof of Theorem <ref> follows. It only remains to prove the main claim above.
Proof of Main Claim: The proof is divided into two parts. In the first part, we prove that there exists a set Ω_q,1,x⊂Ω such that (<ref>) holds. In the second part, we show that for any integer q ≥ 1, there exists a set Ω_q,2,x⊂Ω such that (<ref>) holds.
Part 1: (Proof of (<ref>))
We now fix x ∈ D_q, then x ∈ D_q,R for some R ∈𝒢_q (see (<ref>)). We define the stopping time (recalling that we denote as A(·) the enlarged set of A⊂^2)
τ (ω) = sup{ s ∈ [0,T]: X_T,s (x, ω) ∉ D_q,R (B_q/100 ) } .
We claim that there exists a set Ω_q,1,x⊂Ω with ℙ (Ω_q,1,x) ≥ c >0, with the constant c>0 independent on q, such that τ (ω ) =0 for any ω∈Ω_q,1,x.
Let
Ω_1 = {ω∈Ω : sup_t ∈ [0,T] |W_t - W_T| ≤ 1/200}.
It is clear that (Ω_1) ≥ c >0. We work in the coordinates of R and we denote u_q+2 = (u_q+2^(1) , u_q+2^(2)).
It is clear from property <ref> in Lemma <ref> that u_q+2 is (A_q+1 + B_q+1)-periodic in the dissipative set D_q,R(B_q/150).
Recall the collection of rectangles ℱ_q(R) coming from Item <ref> in Lemma <ref> and note that it follows from Item <ref> of Lemma <ref> that
∫_Ω_1∫_τ(ω)^T | u_q+2^(2) ( X_T,s (x , ω)) | ds dℙ (ω) ≤∫_0^T v_q+1ℙ[ Ω_1 ∩{ω∈Ω : X_T,s (x , ω) ∈⋃_R∈ℱ_q(R)R}] ds.
An Itô–Tanaka trick applied to the first component of the velocity field (which has zero average), together with (<ref>), yields
sup_t ∈ [0,T]∫_Ω | ∫_t^Tu_q+2^(1)(X_1,s) ds | ≲ a_q+1^- εℓ_q+2^- εB_q+1/B_q v_q+2≲ a_q+1^δ/2 + δ - 4 (1 + δ) A_q+1.
Indeed, using the Itô formula with f such that
Δ f = u_q+2^(1), ∫ f =0 ,
we deduce
| ∫_t^Tu_q+2^(1) (X_1,s) ds | = 1/κ_q | f(X_1,t) - f(x) + ∫_t^T∇ f (X_1,s) ·u_q+2 (X_1,s) ds + √(2 κ_q)∫_t^T ∇ f (X_1,s) · d W_s |.
Thanks to Property <ref> in Lemma <ref> and (<ref>) we have
∇^k f _L^∞≲ B_q+1^2-k u_q+2^(1)_C^ε (D_q,R (B_q /100) )≲ℓ_q+2^- ε B_q+1^2-k v_q+2
for k=0,1. Therefore, using that κ_q = B_q^2 and the bounds on the parameters (<ref>), in particular v_q+2≤ B_q, we have
| ∫_t^Tu_q+2^(1) (X_1,s) ds | ≤ f _L^∞/κ_q + u_q+2_L^∞∇ f _L^∞/κ_q + ∇ f _L^∞/√(κ_q)≲ a_q+1^- εB_q+1/B_q v_q+2 ,
where the norms are taken in the set D_q,R (B_q /100).
This proves (<ref>).
Therefore, by (<ref>) and Markov's inequality we obtain
ℙ ( ω∈Ω : sup_t ∈ [0,T] | ∫_t^Tu_q+2^(1) (X_T,s (x, ω)) ds | ≥ A_q+1 ) ≤ C a_q+1^δ/2 + δ - 4 (1 + δ) ,
and we denote this set Ω_u_q+2.
Using the previous property we conclude from the formula of the integral curves of X_T,s that
{ω∈Ω: X_T,s (x, ω ) ∈⋃_R∈ℱ_q(R)R}⊆{ω∈ (Ω_u_q+2)^c : √(κ_q) W_s^(1)∈⋃_R∈ℱ_q(R)π_1 ( R (A_q+1) ) }∪Ω_u_q+2 .
From this, it follows that
∫_0^T v_q+1ℙ( Ω_1 ∩ X_T,s (x , ω) ∈⋃_R∈ℱ_q(R)R) ds
≤∫_0^T v_q+1ℙ( Ω_1 ∩√(κ_q) W_s^(1)∈⋃_R∈ℱ_q(R)π_1( R (A_q+1) ) ) ds
+ C v_q+1 a_q+1^δ/2 + δ - 4 (1+ δ) ,
and we can bound C v_q+1 a_q+1^δ/2 + δ - 4 (1+ δ)≤ a_0^2 B_q for any q sufficiently large, thanks to (<ref>) and the fact that δ≫ε, i.e. (<ref>).
Using Item <ref> of Lemma <ref> about the structure of the collection of rectangles ℱ_q(R), we can estimate the first term on the right-hand side above as follows:
∫_0^T v_q+1ℙ( Ω_1 ∩√(κ_q) W_s^(1)∈⋃_R∈ℱ_q(R)π_1( R (A_q+1) ) ) ds
≤∫_0^T∑_j=0^n_q+1∫_ j B_q+1 +[ - 2 A_q+1 , 2 A_q+1] ∩ |y | ≤√(κ_q)1/√(4 π s κ_q)exp ( - |y|^2/4 κ_q s ) v_q+1 dy ds
≤∫_0^T∑_j=0^2 B_q /B_q+1∫_j B_q+1 +[ - 2 A_q+1 , 2 A_q+1] 1/√(4 π s κ_q)exp ( - |y|^2/4 κ_q s ) v_q+1 dy ds
≤ B_q B_q+1A_q+1 v_q+1/√(κ_q)≤ a_0 B_q ,
where in the last inequality we used the properties of the parameters (<ref>). Then, by Markov's inequality, for any x ∈ D_q we bound
ℙ ( ω∈Ω_1 : ∫_0^T | u_q+2^(2) (X_T,s (x, ω)) | ds > a_0^1/2 B_q ) ≤∫_Ω_1∫_0^T | u_q+2^(2) (X_T,s (x, ω)) | ds /a_0^1/2 B_q≤ a_0^1/2 ,
where in the last inequality we used the previous computation (<ref>). Therefore,
there exists Ω_q,1,x,1⊂Ω_1 with ℙ (Ω_q,1,x,1) ≥ℙ(Ω_1) - a_0^1/2 such that
∫_0^T | u_q+2^(2) (X_T,s (x, ω)) | ds ≤ a_0^1/2 B_q
for any ω∈Ω_q,1,x,1.
Finally, choosing Ω_q,1,x = Ω_q,1,x,1∩ (Ω_u_q+2)^c, which satisfies ℙ (Ω_q,1,x) ≥ℙ (Ω_1) - 2 a_0 ≥ c>0 thanks to the fact that a_0 can be taken sufficiently small compared to the universal constant ℙ (Ω_1), we get
|X_T, τ(x, ω) - x| ≤∫_0^T |u_q+2(X_T, s (x, ω))| ds + √(2 κ_q) |W_1 - W_0| ≤ 2 a_0 B_q + B_q/200≤B_q/100 ∀ω∈Ω_q,1,x ,
and thanks to the fact that a_0 ≤1/200 we get τ (ω) ≡ 0 for any ω∈Ω_q,1,x. Therefore, we conclude the proof of (<ref>).
Part 2: (Proof of (<ref>))
Fix some x ∈ D_q. Let R ∈𝒢_q be such that x ∈ D_q,R.
Recall that the set D_q,R (in the coordinates of R) is given by (see Item <ref> in Lemma <ref>)
D_q,R = [ 0, L_q/3] ×[ A_q/2 + B_q/100, A_q/2 + B_q/50] ⊆ R.
From now on, we will always work in the coordinates of R. Let τ^(1), τ^(2), τ^(out): Ω→ [0, T] be the reverse stopping times defined by
τ^(out)(ω) = sup{ t ∈ [0, T] : X_T, t(x, ω) ∉ [0, L_q/3] × [ - A_q/2 - B_q/2, A_q/2 + B_q/2] };
τ^(1)(ω) = sup{ t ∈ [0, T] : X_T, t(x, ω) ∈[ 0, L_q/3] ×{ 0 }};
τ^(2)(ω) = sup{ t ∈ [0, T] : X_T, t(x, ω) ∈{ 0 }×[ - A_q/2 - B_q/2, A_q/2 + B_q/2] }.
Our first goal is to prove
ℙ[ τ^(2)≥ T/2] ≥ c > 0
for some constant depending only on a_0.
We will compute
ℙ[ τ^(1)≥ 3T/4, τ^(out)≥ T/2, τ^(2)≥ T/2]
= ℙ[ τ^(2)≥ T/2 |τ^(1)≥ 3T/4, τ^(out)≥ T/2] ℙ[ τ^(1)≥ 3T/4, τ^(out)≥ T/2].
Thanks to the fact that κ_q = B_q^2 and (<ref>), it holds that ℙ[ τ^(1)≥ 3T/4, τ^(out)≥ T/2] ≥ c > 0.
Then we notice that
ℙ[ τ^(2)≥ T/2 |τ^(1)≥ 3T/4, τ^(out)≥ T/2]
≥ℙ[ ∫_T/2^3T/4 v_q 1_ [ X_T, t(x, ω) ∈ [0, L_q/3] × [- A_q/2, A_q/2 ]] dt ≥ L_q |τ^(1)≥ 3T/4, τ^(out)≥ T/2]
≥ℙ[ ∫_T/2^3T/4 v_q 1_ [ √(2 κ_q) W_t ∈ [- A_q/2, A_q/2 ] ] dt ≥ L_q ]
≥ℙ[ ∫_0^T/4 1_[ √(2 κ_q/A_q^2) W_t ∈ [- 1/2, 1/2 ] ] dt ≥ L_q/v_q]
= ℙ[ (A_q^2/2 κ_q)∫_0^(2 κ_q/4 A_q^2) T 1_[ √(2 κ_q/A_q^2) W_(A_q^2/2 κ_q) s∈ [- 1/2, 1/2 ] ] ds ≥ L_q/v_q]
= ℙ[ ∫_0^(2 κ_q/4 A_q^2) T 1_ [ W_s∈ [- 1/2, 1/2 ] ] ds ≥ 2 κ_q L_q/(A_q^2 v_q)] ≥ c_0
for some universal c_0 > 0.
Indeed, 2 κ_q/4 A_q^2≈ a_q^-2 and 2 κ_q L_q/(A_q^2 v_q)≈ a_q^- up to constants depending only on a_0, and therefore the last inequality holds for q sufficiently large thanks to Theorem <ref>.
Hence, (<ref>) holds.
Define a new reverse stopping time, but now in the coordinates of R_i. Precisely, we define τ^(3)Ω→ [0,T] by
τ^(3)(ω) = sup{ t ∈ [0, T] : X_T, t(x, ω) ∈{ u_q+2 = v_q-1}(B_q) }.
Note that
ℙ[ τ^(3)≥ T/4] ≥ℙ[ τ^(3)≥ T/4 , τ^(2)≥ T/2 ]
= ℙ[ τ^(3)≥ T/4 |τ^(2)≥ T/2 ] ℙ[ τ^(2)≥ T/2 ].
By construction of the velocity fields in Section <ref>, we see that
ℙ[ τ^(3)≥ T/4 |τ^(2)≥ T/2 ] ≥ c_1 ,
for some constant c_1 >0.
From this and (<ref>) we conclude that
ℙ[ τ^(3)≥ T/4] ≥ c_1 c >0 .
Now select K sufficiently large so that ℙ(Ω_T) > 1 - c_1 c/10. Thus, ℙ[ Ω_T ∩{τ^(3)≥ T/4} ] > c_1 c/2. We then conclude thanks to Corollary <ref>.
§ PROOF OF THEOREM <REF> ABOUT THE 3D FORCED NAVIER–STOKES EQUATIONS
We use the trick of the (2+1/2)-dimensional Navier–Stokes equations, which has already been exploited in other papers; see for instance <cit.>. More precisely, we suppose that all the functions in the system are independent of the last variable x_3, where x = (x_1, x_2, x_3) ∈^3. In this case the first two equations decouple from the last one, i.e. given the initial data u_, κ : ^2 →^2 and θ_, κ : ^2 → and the force f_ν : ^2 →^2, if (u_ν, p_ν , θ_ν) is a solution of
∂_t u_ν + u_ν·∇ u_ν + ∇ p_ν = νΔ u_ν + f_ν ,
(u_ν) =0 ,
u (0, · ) = u_, ν (·) ,
∂_t θ_ν + u_ν·∇θ_ν = νΔθ_ν ,
θ_ν (0, ·) = θ_, ν (·) ,
where u_ν : [0,1 ] ×^2 →^2 and p_ν : ^2 → and θ_ν : [0,1] ×^2 →,
then
v_ν (t, x_1, x_2, x_3) = (u_ν (t, x_1, x_2), θ_ν (t, x_1, x_2)) and P_ν (t, x_1, x_2, x_3) = p_ν (t, x_1, x_2) is a solution of (<ref>) with initial datum v_, ν (x_1, x_2, x_3)= (u_, ν (x_1, x_2) , θ_, ν (x_1, x_2)) and force F_ν (x_1, x_2, x_3) = f_ν (x_1, x_2).
We aim at finding a solution of (<ref>) which is independent of x_3.
We choose ν_q = κ_q, u_, ν_q (x_1, x_2)= u_q+2, where u_q+2 is the autonomous velocity field defined in (<ref>),
F_ν_q = ℒ ( u_q+2·∇ u_q+2 ) - ν_q Δ u_q+2, where ℒ is the Leray projector into divergence-free velocity fields and θ_, ν = θ_ defined in Proposition <ref>. Let θ_κ_q : [0,1] ×^2 → be the solution to
∂_t θ_κ_q + u_q+2·∇θ_κ_q = κ_q Δθ_κ_q
θ_κ_q (0, ·) = θ_ (·) .
and p_q+2 : ^2 → be the zero average solution to
Δ p_q+2 = - div div (u_q+2⊗ u_q+2) .
It is straightforward to check that (u_ν, p_ν, θ_ν)= (u_q+2, p_q+2, θ_κ_q)
is the unique solution to (<ref>) with the given initial data and force defined above (recall that u_q+2 is the autonomous velocity field defined in (<ref>)).
Finally, for v_ν_q = (u_ν_q, θ_ν_q) we have
lim sup_ν_q → 0ν_q ∫_0^1 ∫_^3 | ∇ v_ν_q (t, x)|^2 dx dt ≥lim sup_κ_q → 0κ_q ∫_0^1 ∫_^3 | ∇θ_κ_q (t, x)|^2 dx dt >0 ,
where for the last inequality we used Proposition <ref>, Proposition <ref> and Lemma <ref>, together with u - u_q+2_L^2/√(κ_q)→ 0 as κ_q → 0, thanks to (<ref>).
Thanks to Lemma <ref> it is straightforward to check that
F_ν_q - F_0 _C^α + v_, ν_q - v__C^α→ 0 as ν_q → 0 for some force F_0 ∈ C^α and initial datum v_∈ C^α.
We now prove the last remaining property. Thanks to the fact that u_q+2→ u in C^α and
sup_ν_q θ_ν_q_L^∞≤θ__L^∞≤ C ,
there exists a converging subsequence of { v_ν_q = (u_q+2, θ_ν_q) }_q in the weak*-L^∞ topology, with limit v_0 = (u, θ_0), where θ_ν_q*-L^∞⇀θ_0 up to subsequences.
Thanks to the fact that all the functions are independent of x_3, it is straightforward to check that v_0 is a solution to the 3D forced Euler equations with initial datum v_∈ C^α and force F_0 ∈ C^α. The Lipschitz property of the energy of v_0 follows from Proposition <ref> since u is independent of time. This concludes the proof.
The growth of Tate–Shafarevich groups of p-supersingular elliptic curves over anticyclotomic ℤ_p-extensions at inert primes
Erman Isik, Antonio Lei
===========================================================================================================================
§ ABSTRACT
Let E be an elliptic curve defined over , and let K be an imaginary quadratic field. Consider an odd prime p at which E has good supersingular reduction with a_p(E)=0 and which is inert in K. Under the assumption that the signed Selmer groups are cotorsion modules over the corresponding Iwasawa algebra, we prove that the Mordell–Weil ranks of E are bounded over any subextensions of the anticyclotomic _p-extension of K. Additionally, we provide an asymptotic formula for the growth of the p-parts of the Tate–Shafarevich groups of E over these extensions.
§ INTRODUCTION
Let E be an elliptic curve defined over ℚ and p be an odd prime of good reduction. Let ℚ_cyc denote the cyclotomic ℤ_p-extension of ℚ, and ℚ_(n) be the unique subextension of ℚ_cyc of degree p^n for n≥ 0. Suppose that the p-primary component of the Tate–Shafarevich group Ш(E/ℚ_(n)) is finite for all n, and let p^e_n =| Ш(E/ℚ_(n) )[p^∞]|. The variation of e_n in n depends on whether the elliptic curve E has ordinary or supersingular reduction at p.
When E has ordinary reduction at p, it follows from <cit.> that the p-primary Selmer group of E over ℚ_cyc is cotorsion over the associated Iwasawa algebra. It then follows from Mazur's control theorem <cit.> that for n≫0,
e_n-e_n-1 = λ + (p^n-p^n-1) μ - r_∞,
where r_∞ is the rank of E over ℚ_cyc (which is finite by <cit.>), and λ and μ are the Iwasawa invariants of the Pontryagin dual of the p-primary Selmer group of E over ℚ_cyc.
When E has supersingular reduction at p, the case where a_p(E)=0 has been studied by Kurihara <cit.>, Kobayashi <cit.> and Pollack <cit.>. For n≫ 0, we have the following formula:
e_n - e_n-1 = q_n^∓ + λ_± + μ_±(p^n-p^n-1) - r_∞,
where q_n^∓ is the degree of a product of certain cyclotomic polynomials, r_∞ is the rank of E over ℚ_cyc, λ_± and μ_± are the Iwasawa invariants of the cotorsion signed Selmer groups of E over ℚ_cyc, and the sign ± depends on the parity of n.
In <cit.>, Sprung obtained an analogous formula for the p-supersingular elliptic curves with a_p(E)≠ 0 and for abelian varieties of GL(2)-type. See also <cit.> for related work on upper bounds on the growth of the Bloch–Kato–Shafarevich–Tate groups for higher-weight modular forms. Furthermore, the growth of Mordell–Weil ranks and Tate–Shafarevich groups of an elliptic curve over the cyclotomic ℤ_p-extension of certain number fields has been studied in <cit.>:
e_n - e_n-1 = q_n^∙ + λ_∙ + μ_∙(p^n-p^n-1) - r_∞, for n≫ 0,
where ∙∈{♯, ♭}, q_n^∙ is again a sum of powers of p, λ_∙ and μ_∙ are the Iwasawa invariants of certain chromatic Selmer groups defined in <cit.>, and the choice of ∙ is decided by the Modesty Algorithm <cit.>.
The focus of this article is on analogous results for the anticyclotomic ℤ_p-extension of an imaginary quadratic field K. Suppose first that p splits in K and that the two primes above p split completely in the anticyclotomic ℤ_p-extension of K.
Under certain hypotheses on the vanishing of the Mordell–Weil rank and the p-part of the Tate–Shafarevich group of E over K, Iovita and Pollack <cit.> derived a formula of the growth of the Tate–Shafarevich groups over anticyclotomic tower that mimics the cyclotomic case.
If we assume in addition that p ≥ 5 and that (E,K) satisfies the Heegner hypothesis, Çiperiani proved in <cit.> that the Tate–Shafarevich group of E has trivial corank over the Iwasawa algebra associated with the anticyclotomic _p-extension of K. In <cit.>, a more precise formula of the growth of the Tate–Shafarevich groups inside this tower is given. More recently, Burungale–Kobayashi–Ota <cit.> studied this question when E has complex multiplication by an order in K.
§.§ Statement of the main theorem
Let K be an imaginary quadratic field such that p is inert in K. Assume that p does not divide the class number h_K of K and a_p(E)=0. Let K_∞ denote the anticyclotomic _p-extension of K. Note that the unique prime of K above p is totally ramified in K_∞. For an integer n ≥ 0, we write K_n for the unique subextension of K_∞ such that [K_n : K] = p^n.
The Galois group of K_∞ over K is denoted by Γ. In addition, write Γ_n = Gal(K_∞/K_n) and G_n = Gal(K_n/K). We fix once and for all a topological generator γ of Γ. Let Λ denote the Iwasawa algebra ℤ_p[[Γ]]= lim←_n ℤ_p[G_n], which we shall identify with the power series ring ℤ_p[[X]] by sending γ-1 to X. Let (-)^∨ := Hom(-,ℚ_p/ℤ_p) denote the Pontryagin duality functor.
In this article, we prove that the Mordell–Weil rank of E over K_∞ is bounded assuming that the signed Selmer groups ^±(E/K_∞) are Λ-cotorsion. Furthermore, we derive an asymptotic formula for the growth of the p-parts of the Tate–Shafarevich groups (E/K_n ) of E in terms of the Iwasawa invariants of ^±(E/K_∞)^∨. These Selmer groups have recently been studied in <cit.>, and are similar to those defined by Rubin in <cit.> for CM elliptic curves (see Definition <ref> for the precise definition and Remark <ref> for a discussion on how the plus and minus subgroups used to define these groups compare with the counterparts used to define Kobayashi's plus and minus Selmer groups in <cit.>). The main result of this article is the following theorem.
Let E be an elliptic curve defined over and p an odd prime where E has good supersingular reduction with a_p(E)=0.
Let K be an imaginary quadratic field such that p is inert in K. Assume that p does not divide the class number h_K. Let K_∞ denote the anticyclotomic _p-extension of K.
Assume that ^±(E/K_∞)^∨ are both Λ-torsion. Then, we have:
* rank_ℤ E(K_n) is bounded independently of n;
* Assume that Ш(E/K_n)[p^∞] is finite for all n≥ 0. Let p^e_n =| Ш(E/K_n )[p^∞]|. Define r_∞=sup_n≥0{ rank_ℤ E(K_n)}. Let λ_± and μ_± be the Iwasawa invariants of ^±(E/K_∞)^∨. Then, for n≫0,
e_n-e_n-1=
2s_n-1 + λ_- + ϕ(p^n)μ_- -r_∞ if n is odd,
2s_n-1+λ_+ + ϕ(p^n)μ_+ -r_∞ if n is even,
where ϕ(p^n)=p^n-p^n-1, and s_n=∑_k=1^n(-1)^n-kp^k for n≥ 0.
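To illustrate the size of the leading term (these numerical values are included only as an illustration of the theorem and play no role in its proof), note that s_1 = p, s_2 = p^2 - p, s_3 = p^3 - p^2 + p and, in general, s_n-1 = (p^n + (-1)^n p)/(p+1). For instance, for p = 3 and n = 3 (assuming n = 3 already lies in the range where the theorem applies), the formula reads e_3 - e_2 = 2· 6 + λ_- + 18 μ_- - r_∞.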
Under certain hypotheses, it is known that ^±(E/K_∞)^∨ are Λ-torsion; see <cit.> and Remark <ref> for a more detailed discussion. Our result is complementary to the recent work of <cit.>, where E is assumed to have complex multiplication by an order in K. In their setting, there exists exactly one element ∙∈{+,-} (the choice of which depends on the root number of E) such that ^∙(E/K_∞)^∨ is Λ-torsion. Furthermore, rank_ℤ E(K_n) is unbounded as n→∞.
§.§ Organization
In Section <ref>, we review several plus and minus objects. We first review a system of local points introduced in <cit.>, which is used to construct signed (or plus and minus) Coleman maps following <cit.>. These maps are then utilized to define the signed Selmer groups that appear in the statement of Theorem <ref>. After analyzing the images of these Coleman maps, we review results on certain global cohomology groups under the assumption that the signed Selmer groups are cotorsion. We review the definition of Kobayashi ranks for projective systems of ℤ_p-modules in Section <ref>. We derive a formula for the Kobayashi ranks of modules arising from specific 2 × 2 matrices defined over Λ, which is central to our proof of Theorem <ref> and is the principal novelty of this article. In Section <ref>, we establish a connection between Coleman maps and Kobayashi ranks, and show how this relationship enables us to study the growth of certain local modules. We conclude by combining these results to prove Theorem <ref>.
§.§ Outlook
Our proof of Theorem <ref> follows closely the ideas of <cit.>. In loc. cit., the author presented a detailed study of the growth of modules of the form (Λ/⟨ f⟩)_Γ_n, where f is a non-zero element of Λ, as n→∞. In the context of the current article, several Iwasawa modules have rank 2, and we are led to study the growth of modules of the form (Λ^⊕ 2/⟨ u_1,u_2⟩)_Γ_n, where u_1,u_2 are linearly independent elements of Λ^⊕ 2. It should be possible to extend our results to modules arising from Λ^⊕ d, d≥2. This would potentially allow us to remove some of the hypotheses imposed in <cit.> to study elliptic curves defined over more general number fields. In a different vein, it would be interesting to study similar results in the context of the present article without assuming a_p(E)=0. This would require an appropriate extension of the results of <cit.> on local points. We plan to study these questions in the future.
§.§ Acknowledgements
We thank Ashay Burungale and Fırtına Küçük for interesting discussions during the preparation of this article. The authors' research is supported by the NSERC Discovery Grants Program RGPIN-2020-04259 and RGPAS-2020-00096. EI is also partially supported by a postdoctoral fellowship from the Fields Institute.
§ PLUS AND MINUS OBJECTS
§.§ Local points of Burungale–Kobayashi–Ota
For 0 ≤ n ≤∞, let k_n denote the localization of K_n at the unique prime above p (the uniqueness of this prime is a consequence of p∤ h_K). Let _n denote the maximal ideal of the integer ring of k_n. For simplicity, set k:=k_0, and let denote the integer ring of k. When n <∞, we identify the Galois group (k_n/k) with G_n, and similarly (k_∞/k) with Γ.
Let be the formal group of the minimal model of E over . Let λ:=log_ denote the logarithm of which is normalized by λ'(0)=1.
Let Ξ denote the set of finite characters of Γ. For χ∈Ξ, we say that χ is of order p^n if it factors through G_n, but not G_n-1. Define
Ξ^+ ={χ∈Ξ|the order of χ is a positive even power of p};
Ξ^- ={χ∈Ξ|the order of χ is an odd power of p}∪{1}.
Here, 1 is the trivial character.
For n≥ 0, let Ξ_n^± denote the set of χ∈Ξ^± factoring through (k_n/k). For χ∈Ξ^±_n, let
λ_χ(x):=1/p^n∑_σ∈(k_n/k)χ^-1(σ)λ(x)^σ.
Define
^±(_n)= {x∈(_n) |λ_χ(x)=0 for all χ∈Ξ_n^∓}.
For n≥ 0, there exist c_n^+,c_n^-∈(_n) such that:
* If (-1)^n+1=±1 and n≥1, then
_n+1/nc_n+1^±= c_n-1^± , c_n^±=c_n-1^±,
where _n+1/n: (_n+1)⟶(_n) is the trace map.
* We have c_n^±∈^±(_n).
This is <cit.> when p>3. The case where p=3 follows from the same proof after replacing <cit.> by <cit.>.
Define the Iwasawa algebra Λ_:=[[Γ]] and set Λ_,n:= Λ_/(ω_n)=[G_n], where ω_n=(1+X)^p^n-1 for n≥ 0.
Let n≥ 0 be an integer.
* As Λ_,n-modules, we have (_n)=^+(_n)⊕^-(_n).
* ^±(_n) is generated by c_n^± as a Λ_,n-module.
This is <cit.> when p>3. Once again, we replace the input of <cit.> by <cit.> in the case where p=3.
Let q_n=∑_k=0^n(-1)^n-kp^k, for n≥ 0 with q_-1=0. Then
_^+(_n)=
q_n-1 if n even,
q_n if n odd; and _^-(_n)=
q_n if n even,
q_n-1 if n odd.
Since the -rank of (_n) is p^n, the corollary follows from Theorem <ref> and Lemma <ref> by induction.
Note that c_0^- is an -basis of (_0)=^-(_0), whereas ^+(_0)=^+(_1)={0}.
Let n≥ 0 be an integer. The subgroup ^-(_n) can be described in terms of the trace map on the formal group , in the same manner as the cyclotomic counterpart given in <cit.>. However, this does not apply to ^+(_n). In fact, both plus and minus subgroups defined in loc. cit. contain (p), whereas the group ^+(_n) defined here does not contain any non-zero elements of (_0). The plus and minus subgroups studied here correspond to those considered in <cit.>.
The construction of the local points relies on Rubin's conjecture <cit.>, which has recently been proven in <cit.> for p≥ 5 and <cit.> for p=3. For p=3, Lemma <ref> and Theorem <ref> can be proven in a similar manner.
§.§ Local Tate pairing
Let T be the p-adic Tate module of E. We regard (_n) ⊆ H^1(k_n , T ) via the Kummer
map. Let
⟨-,-⟩_n: (_n)× H^1(k_n,T) H^2(k_n,(1))
denote cup-product pairing.
For c∈(_n), we define the [G_n]-morphism
P_n,c: H^1(k_n,T) ⟶[G_n]
z ↦∑_σ∈ G_n⟨ c^σ,z⟩_n·σ.
Furthermore, as in <cit.>, the following lemma holds.
The following diagram commutes:
(Q1) at (0,2) H^1(k_n,T);
(Q2) at (4,2) Λ_,n;
(Q3) at (0,0) H^1(k_n-1,T);
(Q4) at (4,0) Λ_,n-1;
[->] (Q1)–node [above] P_n,c(Q2);
[->] (Q1)–node [left] _k_n/k_n-1(Q3);
[->] (Q3)–node [above] P_n-1,_n/n-1(c)(Q4);
[->] (Q2)–node [right] (Q4);
where the vertical maps are the corestriction and the projection.
We have E(K)[p^∞]=0 and E(K_∞)[p^∞]=0.
This follows from the same proof as <cit.>.
The Kummer map induces an injection ^±(_n)⊗_p/_p ↪ H^1(k_n,E[p^∞]).
See <cit.>.
For ∙∈{+,-}, we define H^1_∙(k_n,T) as the orthogonal complement of ^±(_n)⊗_p/_p⊆ H^1(k_n,E[p^∞]) with respect to the local Tate pairing
H^1(k_n,T)× H^1(k_n,E[p^∞]) ⟶ k/.
Define P_n^±:=P_n,c_n^±. Then the kernel of P_n^± is H^1_±(k_n,T).
The proof is the same as that of <cit.>.
For n≥ 0, recall that ω_n=(1+X)^p^n-1=∏_0≤ m≤ nΦ_m, where Φ_m is the p^m-th cyclotomic polynomial in 1+X. We put
ω^+_n := X ∏_1≤ m≤ n, m even Φ_m , ω_n^- := X ∏_1≤ m≤ n, m odd Φ_m ,
and ω̃_n^±:=ω_n^±/X, with the conventions ω^+_0 := X and ω̃^-_0:=1.
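For example, for n = 2 one has ω_2 = XΦ_1Φ_2, ω_2^+ = XΦ_2, ω_2^- = XΦ_1, ω̃_2^+ = Φ_2 and ω̃_2^- = Φ_1, so that ω̃_2^-ω_2^+ = ω̃_2^+ω_2^- = ω_2. We record this small sanity check only to illustrate the identity ω_n = ω̃_n^∓ω_n^± used below.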
The image of P_n^+ is contained in ω_n^-Λ_,n, whereas the image of P_n^- is contained in _n^+Λ_,n.
The proof is similar to <cit.>. We illustrate this for P_n^+. Let χ be a character of G_n that belongs to Ξ_n^-. Then, χ sends γ=X+1 to 0 or a p^m-th primitive root of unity for some odd non-negative integer m≤ n. Since
c_n^+∈^+(_n),
P_n^+(z)(χ)=∑_σ∈ G_n⟨χ(σ)(c_n^+)^σ,z⟩_n=0.
Therefore, as an element of Λ_,n, P_n^+(z) is divisible by Φ_m for all non-negative odd integers m≤ n. Therefore,
P_n^+(z)∈ X∏_1≤ m≤ n,
m: oddΦ_mΛ_,n=ω_n^-Λ_,n.
Similarly, we have
P_n^-(z)∈∏_1≤ m≤ n,
m: evenΦ_mΛ_,n=_n^+Λ_,n.
Note that ω_n^-Λ_,n⊂_n^-Λ_,n. In particular, Proposition <ref> tells us that the image of P_n^+ is contained in _n^-Λ_,n. We follow the strategy of <cit.> to use this weaker congruence to define the corresponding Coleman map in the following section.
§.§ Signed Coleman maps
It is clear from the definitions that ω_n=ω̃_n^∓ω_n^±. Thus, multiplication by ω̃_n^∓ induces an isomorphism
Λ_,n^±:=Λ_/(ω_n^±) ≅ω̃_n^∓Λ_,n.
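Concretely, for n = 1 and the sign +, this isomorphism reads
Λ_/(ω_1^+) = Λ_/(X) ∋ x ⟼ω̃_1^- x = Φ_1 x ∈Φ_1Λ_,1⊆Λ_,1 = Λ_/(XΦ_1),
which is well defined and injective precisely because ω̃_1^-ω_1^+ = ω_1 = XΦ_1; we record this special case only as a sanity check.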
There exist unique morphisms _n^± such that the following diagram commutes:
(Q1) at (0,2) H^1(k_n,T);
(Q2) at (4,2) Λ_,n^±;
(Q3) at (0,0) H^1(k_n,T)/H^1_±(k_n,T);
(Q4) at (4,0) Λ_,n;
[->] (Q1)–node [above] _n^±(Q2);
[->] (Q1)–(Q3);
[->] (Q3)–node [above] P_n^±(Q4);
[right hook->] (Q2)–node [right] ×ω̃_n^∓(Q4);
This follows from Propositions <ref> and <ref>, together with (<ref>).
The following diagram commutes:
(Q1) at (0,2) H^1(k_n+1,T);
(Q2) at (4,2) Λ_,n+1^±;
(Q3) at (0,0) H^1(k_n,T);
(Q4) at (4,0) Λ_,n^±;
[->] (Q1)–node [above] _n+1^±(Q2);
[->] (Q1)–(Q3);
[->] (Q3)–node [above] _n^±(Q4);
[->] (Q2)–(Q4);
where the vertical maps are the corestriction and the projection.
The proof is the same as <cit.>.
Observe that
lim←_n Λ_,n^± = lim←_n Λ_/(ω_n^±) = Λ_.
This allows us to give the following definition.
We define the plus and minus Coleman maps
^± : H^1_(k_∞,T)⟶Λ_
as the inverse limits of _n^±: H^1(k_n,T)⟶Λ_,n^±.
§.§ Image of the signed Coleman maps
The corestriction map H^1(k_m,T)⟶ H^1(k_n,T) is surjective for all m≥ n.
The proof is similar to <cit.>.
Let I_,n be the augmentation ideal of Λ_,n=[G_n]:
I_,n=( [G_n]⟶).
We denote I_:= lim←_n I_,n=XΛ_ and I_,n^±=I_,n∩Λ^±_,n=XΛ_,n^±.
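For instance, for n = 1 the identification Λ_,1 ≃[X]/((1+X)^p-1) sends the image σ of γ in G_1 to 1+X, so the augmentation ideal I_,1, generated by the elements σ^i - 1, is identified with XΛ_,1, since σ^i - 1 corresponds to (1+X)^i - 1 ∈ XΛ_,1. This is recorded only to make the identification I_,n = XΛ_,n explicit in the simplest case.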
The image of ^- (resp. ^-_n) is equal to Λ_ (resp. Λ_,n^-), whereas the image of ^+ (resp. _n^+) is given by I_ (resp. I_,n^+).
By Lemma <ref> and Nakayama's lemma, it is enough to show that Im(^-_0)=Λ_,0^-= and Im(^+_2)=I_,2^+. Note that (_0) is generated by c_0^- as an -module, so the proof of <cit.> can be extended to show that Im(^-_0)=.
We have the following -isomorphism
I_,2^+=XΛ_,2^+=XΛ_/(XΦ_2)≃[ζ_p^2],
where ζ_p^2 is a primitive p^2-th root of unity, and Xf(X) is sent to f(ζ_p^2-1) under this isomorphism.
Since {ζ_p^2^i:0≤ i≤ϕ(p^2)-1} is an -basis of [ζ_p^2], we see that
{X(1+X)^i:0≤ i≤ϕ(p^2)-1} is an -basis of I_,2^+.
Let σ be the image of our chosen topological generator γ=1+X of Γ in G_2. In particular, it is a generator of the cyclic group G_2, and (k_2/k_1)=⟨σ^p⟩. Since c_2^+∈^+(_2), for all integer j, we have
∑_i=0^p-1(c_2^+)^σ^j+ip=_2/1(c_2^+)^σ^j=0.
Furthermore, Theorem <ref> tells us that
(_2)=^+(_2)⊕(_1),
and ^+(_2)=[G_2]· c_2^+.
Therefore, {(c_2^+)^σ^i:0≤ i≤ϕ(p^2)-1} is an -basis of ^+(_2). This gives the following [G_2]-isomorphism
^+(_2)≃ I_,2^+,
where c_2^+ is sent to X.
Consequently, the morphism _2^+ can be described as
H^1(k_2,T)⟶((_2),[G_2]) ⟶(^+(_2),[G_2]) ≃ I_,2^+,
where the first map is induced by the pairing (<ref>) and the second map is the projection by the decomposition (<ref>), and the last isomorphism is induced from (<ref>). The first arrow is surjective since its dual is the injection (_2)⊗_p/_p ⟶ H^1(k_2,E[p^∞]). It is clear from the definition that the second arrow is surjective. Therefore, the image of _2^+ is I_,2^+, as desired.
§.§ Signed Selmer groups
Recall that K_∞ is the anticyclotomic _p-extension of K, and that K_n ⊂ K_∞ denotes the unique subextension such that [K_n : K] = p^n, for an integer n ≥ 0.
For a rational prime ℓ, let
K_n,ℓ:=K_n⊗__ℓ, H^1(K_n,ℓ, E[p^∞]):=⊕_λ|ℓ H^1(K_n,λ, E[p^∞]),
where the direct sum runs over all primes of K_n above ℓ. We have the natural restriction map
_ℓ: H^1(K_n, E[p^∞]) ⟶ H^1(K_n,ℓ, E[p^∞]).
Let H^1_(K_n,ℓ, E[p^∞])⊂ H^1(K_n,ℓ, E[p^∞]) for the Bloch–Kato subgroup. The singular quotient is given by
H^1_/(K_n,ℓ, X):=H^1(K_n,ℓ, E[p^∞])H^1_(K_n,ℓ, E[p^∞]).
For ∙∈{+,-}, we define the signed Selmer group of E over K_n by
^∙(E/K_n):=(H^1(K_n,E[p^∞])⟶∏_ℓ∤ p H^1_/(K_m,ℓ, E[p^∞])×H^1(k_n, E[p^∞])H^1_∙(k_n, E[p^∞])).
Further, define ^∙(E/K_∞)=_n ^∙(E/K_n).
If _p^∞(E/K_∞) denotes the classical p^∞-Selmer group, then ^∙(E/K_∞)⊂_p^∞(E/K_∞). It follows from <cit.> that _p^∞(E/K_∞) is cofinitely generated over Λ. Hence, so is ^∙(E/K_∞).
For ∙∈{+,-}, the signed Selmer groups ^∙(E/K_∞) is cotorsion over Λ.
Let N denote the conductor of E, and p≥ 5. Assume that (D_K,pN)=1, where D_K is the discriminant of K. Write N=N^+N^-, where N^+ (resp. N^-) is divisible only by primes that are split (resp. inert) in K. In <cit.>, it has been proved that Conjecture <ref> is valid under the following hypotheses:
* N^- is a square-free product of odd number of primes.
* If p=5, the residual representation ρ_E( G_ (μ_p^∞)) contains a conjugate of _2(_p). If p>5, the G_-representation ρ_E is irreducible.
* ρ_E is ramified at the primes ℓ that satisfy one of the following conditions:
* ℓ| N^- with ℓ^2 ≡ 1 p,
* ℓ| N^+.
For the remainder of the article, we assume that Conjecture <ref> holds true.
For ∙∈{+,-}, we write μ_∙ and λ_∙ for the μ- and λ-invariants of the torsion Λ-module ^∙(E/K_∞)^∨.
§.§ Structure of global cohomologies
In this section, we record several consequences of Conjecture <ref> following <cit.>.
Let Σ denote a fixed finite set of primes of K containing p, the ramified primes of K/, the archimedean place, and all the bad reduction primes of E. Write K_Σ for the maximal algebraic extension of K which is unramified outside Σ. For any (possibly infinite) extension K ⊆ L ⊆ K_Σ, write G_Σ(L) = (K_Σ/L).
The signed Selmer group of E over K_∞ can be equivalently defined as follows:
^∙(E/K_∞):=(H^1(G_Σ(K_∞),E[p^∞])⟶∏_ℓ∤ p H^1_/(K_∞,ℓ, E[p^∞])×H^1(k_∞, E[p^∞])H^1_∙(k_∞, E[p^∞])).
For i∈{1,2}, we define H^i_,Σ(K_∞,T)=_n H^i(G_Σ(K_n),T), where the transition maps are given by the corestriction maps. Note that H^i_,Σ(K_∞,T) is independent of the choice of Σ (see <cit.>, <cit.> and <cit.>). Since the set Σ is fixed, we will drop the subscript Σ from the notation for simplicity and simply write H^i_(K_∞,T).
The group H^1_(K_∞,T) is a torsion-free Λ-module. Further, H^1(G_Σ(K),T) is a torsion-free _p-module.
See <cit.>.
By considering the low degree terms of the spectral sequence of Jannsen (<cit.>)
_Λ^i(H^j(G_Σ(K_∞),E[p^∞])^∨,Λ) H^i+j_(K_∞,T)
we obtain the following exact sequence
0⟶_Λ^1((E(K_∞)[p^∞])^∨,Λ) ⟶ H^1_(K_∞,T) ⟶_Λ^0(H^1(G_Σ(K_∞),E[p^∞])^∨,Λ).
By Lemma <ref>, we have E(K_∞)[p^∞]=0, so the leftmost term vanishes. This in turn implies that H^1_(K_∞,T) injects into an ^0-term. Since the latter is a reflexive Λ-module by <cit.>, H^1_(K_∞,T) must be torsion-free.
For the second assertion, we consider the low degree terms of the spectral sequence
__p^i(H^j(G_Σ(K),E[p^∞])^∨,_p) H^i+j(G_Σ(K),T),
which yields the following exact sequence:
0⟶__p^1((E(K_∞)[p^∞])^∨,_p) ⟶ H^1(G_Σ(K),T)
⟶__p^0(H^1(G_Σ(K_∞),E[p^∞])^∨,_p).
By Lemma <ref>, we have E(K)[p^∞]=0, so the leftmost term vanishes. Hence, similar to the previous case, we deduce that H^1(G_Σ(K),T) is a torsion-free _p-module.
One can also show that H^1(G_Σ(K_n), T ) is a torsion-free _p-module for every n.
For ∙∈{+,-}, the Λ-module ^∙(E/K_∞) is cotorsion if and only if we have H^2(G_Σ(K_∞),E[p^∞])=0 and that the following sequence:
0⟶^∙(E/K_∞)⟶ H^1(G_Σ(K_∞),E[p^∞])
⟶∏_ℓ∤ p H^1_/(K_∞,ℓ, E[p^∞])×H^1(k_∞, E[p^∞])H^1_∙(k_∞, E[p^∞])⟶ 0,
is exact.
See <cit.>.
To simplify the notation, we write J_v(E/K_∞) for each of the local summands. It follows from <cit.> that we have an exact sequence
0⟶^∙(E/K_∞)⟶ H^1(G_Σ(K_∞),E[p^∞])⟶∏_v∈ΣJ_v(E/K_∞)
⟶^∙(E/K_∞)^∨⟶ H^2(G_Σ(K_∞),E[p^∞]) ⟶ 0,
where ^∙(E/K_∞) is a Λ-submodule of H^1_(K_∞,T). It follows from the corank calculation <cit.> that we have
_Λ(H^1(G_Σ(K_∞),E[p^∞])) - _Λ(H^2(G_Σ(K_∞),E[p^∞])) = [K:]=2
and
_Λ(∏_v∈ΣJ_v(E/K_∞)) = [K:]=2.
Therefore, we see that ^∙(E/K_∞) is a cotorsion Λ-module if and only if ^∙(E/K_∞) is a torsion Λ-module. Since ^∙(E/K_∞) is a Λ-submodule of H^1_(K_∞,T), which is torsion-free by Lemma <ref>, the latter statement holds if and only if ^∙(E/K_∞)=0. This is equivalent to having H^2_(K_∞,T)=0 as well as the short exact sequence in the statement of the proposition.
We conclude this section with the following statement on the structure of H^i_(K_∞,T).
If Conjecture <ref> holds, the following statements are valid:
* H^1_(K_∞,T) is a free Λ-module of rank 2.
* H^2_(K_∞,T) is a torsion Λ-module.
See <cit.>.
§ KOBAYASHI RANKS
§.§ Definition and basic properties
Following <cit.>, we define the Kobayashi ranks as follows.
Let (M_n)_n≥ 1 be a projective system of finitely generated _p-modules. Given an integer n≥1, if π_n:M_n⟶ M_n-1 has finite kernel and cokernel, we define
∇ M_n:= length__p(ker π_n)- length__p(coker π_n)+ dim__p(M_n-1⊗_p).
Let (M'_n)_n≥1, (M_n)_n≥1 and (M”_n)_n≥1 be projective systems of finitely generated -modules.
* Suppose that for all n≥1, there is an exact sequence
0⟶ (M'_n)⟶ (M_n)⟶ (M”_n)⟶ 0.
If two of ∇ M_n, ∇ M'_n,∇ M”_n are defined, then the other is also defined, in which case
∇ M_n=∇ M'_n+∇ M”_n.
* Suppose that M_n are constant. If M_n is finite or the transition map M_n⟶ M_n-1 is given by the multiplication map by p, then ∇ M_n=0.
See <cit.>.
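As a quick sanity check of the second item (included only for the reader's convenience): if M_n = ℤ_p for every n and each transition map is multiplication by p, then ker π_n = 0, coker π_n ≃ℤ_p/pℤ_p has ℤ_p-length 1 and M_n-1⊗ℚ_p has dimension 1, so ∇ M_n = 0 - 1 + 1 = 0, in agreement with the lemma.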
Let f∈Λ be a non-zero element of Λ. Let (N_n)_n≥1 be a projective system given by N_n= Λ/ (f,ω_n), where the connecting maps are natural projections.
* Suppose that Φ_n∤ f. Then ∇ N_n is defined and is equal to ord_ϵ_n f(ϵ_n), where ϵ_n=ζ_p^n-1.
* Let M be a finitely generated torsion Λ-module with characteristic polynomial f, and let M_n=M/ω_nM. Consider the natural projective system (M_n)_n≥ 1. Then, for n≫ 0, ∇ M_n is defined and
∇ M_n= ord_ϵ_n f(ϵ_n)=λ(M)+ϕ(p^n)μ(M),
where λ(M) and μ(M) are the Iwasawa invariants of M and ϕ is the Euler totient function.
See <cit.>.
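Two simple illustrations of this lemma (not needed in the sequel): for M = Λ/(p) one has M_n = Λ/(p, ω_n) of ℤ_p-length p^n, so ∇ M_n = p^n - p^n-1 = ϕ(p^n), matching λ(M) = 0 and μ(M) = 1; for M = Λ/(g) with g a distinguished polynomial of degree λ coprime to every Φ_m, one has ∇ M_n = ord_ϵ_n g(ϵ_n) = λ for all n ≫ 0.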
Let M be a finitely generated Λ-module. Then M^δ:=∪_nM^Γ_n is finitely generated as a _p-module and there exists an integer n_0 such that M^δ=M^Γ_n_0.
In particular, ∇ M^Γ_n=0 for n≫ 0, where the transition maps M^Γ_n+1⟶ M^Γ_n are given by multiplication by 1+γ_n+…+γ_n^p-1, where γ_n is a topological generator of Γ_n chosen so that γ_n^p=γ_n+1.
The proof is the same as <cit.>.
§.§ Kobayashi ranks of modules arising from 2× 2 matrices
We derive a formula for the Kobayashi ranks of modules arising from certain types of 2× 2 matrices defined over Λ. We begin with the following definition.
Let A=[ a c; b d ] be a 2× 2 matrix with entries in Λ such that det(A)≠ 0.
* We write ⟨ A⟩⊆Λ^⊕ 2 for the Λ-module generated by the columns of A. Similarly, given two such matrices A and B, we write ⟨ A,B⟩⊆Λ^⊕ 2 for the Λ-module generated by the columns of A and B.
* For n≥0, we write ⟨ A⟩_n for the Λ_n-module generated by the columns of A modulo ω_n inside Λ_n^⊕ 2. Furthermore, we write A_(n)=Λ_n^⊕ 2/⟨ A⟩_n=Λ^⊕ 2/⟨ω_n I_2,A⟩, where I_2 denotes the 2× 2 identity matrix.
* We say that A is special relative to an integer n if it satisfies the following property:
Φ_m| det(A), 0≤ m≤ n ⇒ Φ_m | a,b or Φ_m | c,d.
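To illustrate the definition (these two matrices are only examples and play no role later): the diagonal matrix [ Φ_1 0; 0 Φ_2 ] is special relative to every n, since each Φ_m dividing its determinant Φ_1Φ_2 divides an entire column; by contrast, [ Φ_1 Φ_1; 1 1+Φ_1 ] has determinant Φ_1^2, yet Φ_1 divides neither of its columns, so it is not special relative to any n ≥ 1.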
We prove the following generalization of <cit.>.
Let n≥0 be an integer. If A is a 2× 2 matrix over Λ which is special relative to n and Φ_n∤ det(A), then ∇ A_(n) is defined and is equal to ord_ϵ_n( det A(ϵ_n)). Here, the connecting map π_n: A_(n)→ A_(n-1) is the natural projection.
For each integer m such that 0≤ m≤ n-1, we write i_m∈{0,1,2} for the number of columns of A that are divisible by Φ_m. Then, A can be written as
A=BD,
where B,D∈ such that D is a diagonal matrix whose diagonal entries are square-free products of elements of a subset {Φ_m:0≤ m≤ n-1}, and the number of times Φ_m appears in D is precisely i_m.
By the Chinese remainder theorem,
A_(n-1)⊗ℚ_p ≃⊕_0≤ m≤ n-1ℚ_p(ζ_p^m)^⊕ 2/⟨ A(ϵ_m)⟩,
where ⟨ A(ϵ_m)⟩ denotes the image of the ℚ_p(ζ_p^m)-linear endomorphism on ℚ_p(ζ_p^m)^⊕ 2 defined by the matrix A(ϵ_m). The rank of the matrix A(ϵ_m) is equal to 2-i_m. Therefore, the rank-nullity theorem tells us that
dim_ℚ_p(ζ_p^m)ℚ_p(ζ_p^m)^⊕ 2/⟨ A(ϵ_m)⟩=i_m.
Hence,
dim_ℚ_p A_(n-1)⊗ℚ_p=∑_m=0^n-1ϕ(p^m)i_m= ord_ϵ_n( det D(ϵ_n)).
As the connecting map π_n is surjective, it remains to show that the length of ker π_n is given by ord_ϵ_n( det B(ϵ_n)). The isomorphism theorem tells us that
ker π_n≃⟨ω_n-1I_2,A⟩/⟨ω_nI_2,A⟩≃⟨ω_n-1I_2⟩/⟨ω_nI_2,A⟩⋂⟨ω_n-1I_2⟩.
Claim: ⟨ω_nI_2,A⟩⋂⟨ω_n-1I_2⟩=⟨ω_n I_2,ω_n-1B⟩.
It is clear that ⟨ω_n I_2,ω_n-1B⟩⊆⟨ω_n-1 I_2⟩.
Since the diagonal entries of the diagonal matrix D are square-free products of elements of a subset of {Φ_m:0≤ m≤ n-1}, we may write ω_n-1I_2=DD' for some diagonal matrix D'.
Thus,
ω_n-1B=BDD'=AD',
which implies that ⟨ω_n-1B⟩⊆⟨ A⟩. Therefore, we deduce that
⟨ω_n I_2,ω_n-1B⟩⊆⟨ω_nI_2,A⟩⋂⟨ω_n-1I_2⟩.
We now prove the opposite inclusion. Let v∈⟨ω_nI_2,A⟩⋂⟨ω_n-1I_2⟩. We can regard v as a column vector with entries in Λ and write
v=A[ x; y ]+ω_nu,
where x,y∈Λ and u∈. Since v∈⟨ω_n-1I_2⟩, we have
A[ x; y ]=BD[ x; y ]≡ 0Φ_m, 0≤ m≤ n-1.
We consider three cases.
Case 1: If i_m=0, A(ϵ_m) is an invertible matrix by definition. In this case, (<ref>) implies that x,y≡ 0Φ_m.
Case 2: If i_m=1, there exists a non-zero element [ a; b ]∈ such that
A≡[ a 0; b 0 ]Φ_m or A≡[ 0 a; 0 b ]Φ_m
Let's suppose the first congruence relation holds. Then (<ref>) implies that Φ_m| x. Furthermore, D≡[ d_m 0; 0 0 ]Φ_m for some non-zero d_m∈Λ. Therefore, D[ x; y ]≡0Φ_m. If the second congruence relation in (<ref>) holds, the same argument shows that D[ x; y ]≡0Φ_m.
Case 3: If i_m=2, then D≡0Φ_m. In particular, D[ x; y ]≡0Φ_m.
In all three cases, we have D[ x; y ]≡ 0Φ_m. Therefore, we deduce that
D[ x; y ]=ω_n-1[ x'; y' ]
for some x',y'∈Λ. Combined with (<ref>), we have
v=ω_n-1B[ x'; y' ]+ω_nu∈⟨ω_nI_2,ω_n-1B⟩.
This shows that ⟨ω_nI_2,A⟩⋂⟨ω_n-1I_2⟩⊆⟨ω_n I_2,ω_n-1B⟩, and our claim follows.
We can now rewrite (<ref>) as
ker π_n≃⟨ω_n-1I_2⟩/⟨ω_n I_2,ω_n-1B⟩≃Λ^⊕ 2/⟨Φ_n I_2,B⟩,
where the last isomorphism is given by
Λ^⊕ 2/⟨Φ_n I_2,B⟩ →⟨ω_n-1I_2⟩/⟨ω_n I_2,ω_n-1B⟩
x ↦ω_n-1x.
As Φ_n∤ det A by assumption, it follows from <cit.> that ker π_n is finite, with length equal to ord_ϵ_n( det B(ϵ_n)), as desired.
§ PROOF OF THEOREM <REF>
The proof of Theorem <ref> is divided into a number of steps. Following <cit.>, we define for each integer n≥0
(E/K_n) :=( H^1(G_Σ(K_n),T) ⟶H^1(k_n,T)/E(k_n)⊗_p).
The key ingredient to studying the growth of _ E(K_n) and (E/K_n )[p^∞] is understanding ∇(E/K_n). Similar to <cit.>, we do so using fine Selmer groups and several auxiliary modules.
§.§ Fine Selmer groups
We recall the definition of fine Selmer groups:
For 0≤ n ≤∞, we define the fine Selmer group
^0(E/K_n):=( _p^∞(E/K_n)⟶ H^1(k_n, E[p^∞]) ).
Equivalently, we have
^0(E/K_n):=( H^1(G_Σ(K_n),E/K_n)⟶∏__n∈Σ(K_n) H^1(K_n,_n, E[p^∞]) ),
where Σ(K_n) denotes the set of primes of K_n lying above Σ. The Pontryagin duals of _p^∞(E/K_n), ^±(E/K_n) and ^0(E/K_n) are denoted by (E/K_n), ^±(E/K_n) and ^0(E/K_n), respectively.
The natural restriction map
^0(E/K_n) ⟶^0(E/K_∞)^Γ_n
has finite kernel and cokernel. Furthermore, their cardinalities stabilize as n→∞.
This is a special case of <cit.> (see also <cit.>).
Assume that Conjecture <ref> holds. Let n≥ 0 be an integer.
* We have the following short exact sequence:
0 ⟶(E/K_n) ⟶(E/K_n) ⟶^0(E/K_n) ⟶ 0.
* For n≫ 0, ∇^0(E/K_n) is defined and satisfies the equality
∇^0(E/K_n)= ∇^0(E/K_∞)_Γ_n.
Assertion (i) follows from the Poitou–Tate exact sequence and the definition of (E/K_n).
As ^0(E/K_∞) is a quotient of ^±(E/K_∞), which are assumed to be Λ-torsion, it follows that ^0(E/K_∞) is also Λ-torsion. By Lemma <ref>, we see that the kernel and cokernel of the natural map
^0(E/K_∞)_Γ_n⟶^0(E/K_n)
are finite and independent of n. Combining this with Lemma <ref>(ii), the second assertion follows.
§.§ Calculating certain Kobayashi ranks via special matrices
We introduce auxiliary modules (denoted by M_n) in preparation for the calculation of ∇(E/K_n).
The composition
H^1_(K,T) H^1(k_∞,T) Λ_≃Λ^⊕ 2
is a Λ-homomorphism between two free Λ-modules of rank 2 (see Proposition <ref>). We write the composition of this map with projection to the two coordinates as
^±_i:H^1_(K,T)→Λ, i=1,2.
Let =(u_1,u_2)∈ H^1_(K,T)^⊕ 2 and n≥0 be an integer. We define
F_n()=[ _n^+^-_1(u_1)+_n^-_1^+(u_1) _n^+^-_1(u_2)+_n^-_1^+(u_2); _n^+^-_2(u_1)+_n^-_2^+(u_1) _n^+^-_2(u_2)+_n^-_2^+(u_2) ]∈.
Furthermore, we define for ∙∈{+,-}
^∙()=[ _1^∙(u_1) _1^∙(u_2); _2^∙(u_1) _2^∙(u_2) ]∈.
Let =(u_1,u_2)∈ H^1_(K,T)^⊕ 2 and n≥0 be an integer. For 0≤ m≤ n, there exists a non-zero scalar c_m,n∈(ζ_p^m)^× such that
F_n()(ϵ_m) is equal to c_m,n^+()(ϵ_m) or c_m,n^-()(ϵ_m), depending on the parity of m.
Recall that _n^+_n^-=∏_1≤ m≤ nΦ_m. Therefore, for each 1≤ m≤ n, exactly one of the two elements of {_n^+(ϵ_m),_n^-(ϵ_m)} vanishes, from which the assertion for m≥1 follows.
If m=0, we have ϵ_m=0. Proposition <ref> tells us that ^+_i(u_j)(0)=0. Therefore,
F_n()(0)=_n^+(0)^-(),
as desired.
Let ∙∈{+,-}. Suppose that ^∙(E/K_∞) is Λ-torsion.
If =(u_1,u_2)∈ H^1_(K,T)^⊕ 2 such that the quotient of Λ-modules
H^1_(K,T)/⟨ u_1,u_2⟩ is Λ-torsion, then (^∙()) 0.
By <cit.>, we have the following exact sequence
H^1_(K,T) ⟶H^1(k_∞,T)/^∙⟶^∙(E/K_∞) ⟶^0(E/K_∞) ⟶ 0.
Hence, the result follows from the assumption that ^∙(E/K_∞) is Λ-torsion.
Suppose that Conjecture <ref> holds.
Then, there exists =(u_1,u_2)∈ H^1_(K,T)^⊕ 2 such that
H^1_(K,T)/⟨ u_1,u_2⟩ is Λ-torsion and that F_n() is a special matrix relative to n (in the sense of Definition <ref>) for all n≥0.
Let =(z_1,z_2)∈ H^1_(K,T)^⊕ 2 such that {z_1,z_2} be a Λ-basis of H^1_(K,T). We write
_n={m: _(ζ_p^m) F_n()(ϵ_m)=1,0≤ m≤ n}.
It follows from Lemma <ref> that
_n⊆_n+1
Furthermore, as ^±() 0 by Lemma <ref>, the number of cyclotomic polynomials dividing either ^+() or ^-() is finite. Therefore, the cardinality of _n is bounded above as n→∞. Let _∞=⋃_n≥0_n.
Let m∈_∞ with 1≤ m≤ n. Let ∙ be the unique element of {+,-} such that
F_n()(ϵ_m)= c_m,n^∙()(ϵ_m)
as given by Lemma <ref>. By linear algebra, there exists a 2× 2 invertible matrix B_m over (ζ_p^m) such that
^∙()(ϵ_m)B_m=[ 0 *; 0 * ],
where the second column is non-zero.
Identifying (ζ_p^m) with [X]/(Φ_m), the Chinese remainder theorem implies that there exists a 2× 2 matrix B defined over [X] (after multiplying by a power of p, if necessary) such that B(ϵ_m) is a non-zero scalar multiple of B_m for all m∈_∞. Consequently,
F_n()B≡[ 0 *; 0 * ]Φ_m
for all m∈_n.
We claim that we can choose B so that Φ_m|(B) for all m≥0. Let M=max_∞. Then, we may choose BΦ_m to be an invertible matrix for all m≤ M by the Chinese remainder theorem. Once such a B is chosen, any other B' such that B≡ B'ω_M satisfies the same congruence relation modulo Φ_m as B for all m∈_∞. We may choose recursively a sequence of matrices {B_m}_m≥ M such that B_M is our initial choice of B, and B_m+1≡ B_mω_m and Φ_m+1∤ B_m+1. Indeed, suppose that we have chosen
B_m=[ a b; c d ]
for some m≥ M. If Φ_m+1∤ B_m, then we can simply choose B_m+1=B_m. If Φ_m+1| B_m, the vectors
[ a(ϵ_m+1); b(ϵ_m+1) ], [ c(ϵ_m+1); d(ϵ_m+1) ]
are linearly dependent. There exist α,β,γ,δ∈ such that
[ a(ϵ_m+1)+αω_m(ϵ_m+1); b(ϵ_m+1+βω_m(ϵ_m+1)) ], [ c(ϵ_m+1)+γω_m(ϵ_m+1); d(ϵ_m+1+δω_m(ϵ_m+1)) ]
are linearly independent. Therefore, we can choose
B_m+1=B_m+ω_m[ α γ; β δ ].
In particular, our claim follows from taking the limit of this sequence.
For this choice of matrix B, we have
F_n()B≡[ 0 0; 0 0 ]Φ_m
whenever F_n()Φ_m is the zero matrix, and
F_n()B(ϵ_m) is invertible whenever Φ_m∤( F_n()). Together with (<ref>), we deduce that F_n()B is special relative to n for all n≥0. The assertion of the proposition now follows from taking
u_1=az_1+bz_2, u_2=cz_1+dz_2,
where B=[ a c; b d ].
From now on, we fix =(u_1,u_2) satisfying the properties given by Proposition <ref>. For an integer n≥1, we define the following modules
M_n=_n/⟨ F_n()⟩_n, M_n-1=_n-1/⟨ F_n()⟩_n-1,
where ⟨ F_n()⟩_m denotes the Λ_m-submodule of Λ_m^⊕ 2 generated by the columns of the matrix F_n().
Suppose that Conjecture <ref> holds. For n≫0, we have
∇ M_n =
2 _n^+ + _ϵ_n(^-()(ϵ_n)) if n is odd,
2 _n^- +_ϵ_n(^+()(ϵ_n)) if n is even.
This follows from Theorem <ref> and Proposition <ref>.
§.§ Final steps of the proof
We study the variation of (E/K_n) defined at the beginning of the section using the following -modules:
'(E/K_n) :=( H^1_(K_∞,T)_Γ_n⟶H^1(k_n,T)/E(k_n)⊗_p),
”(E/K_n) :=( H^1_(K_∞,T)_Γ_n⟶H^1(k_n,T)/H^1_+(k_n,T)⊕H^1(k_n,T)/H^1_-(k_n,T)).
For all n≥ 0, we have '(E/K_n)=”(E/K_n).
It follows from Theorem <ref>(i) that there is an isomorphism of Λ-modules
(_n)⊗/≃((_n)^+⊗/)⊕((_n)^-⊗/).
On taking Pontryagin duals, we deduce:
H^1(k_n,T)/E(k_n)⊗_p≃H^1(k_n,T)/H^1_+(k_n,T)⊕H^1(k_n,T)/H^1_-(k_n,T),
which implies the assertion.
Suppose that Conjecture <ref> holds. Let {z_1,z_2} be a Λ-basis of H^1_(K,T) and write =(z_1,z_2). For n≫0, we have
∇(E/K_n) =
2 _n^+ + _ϵ_n(^-()(ϵ_n)) if n is odd,
2 _n^- +_ϵ_n(^+()(ϵ_n)) if n is even.
Let Λ_n^-=Λ/(ω_n^-) and I_n^+=XΛ_n^+=XΛ/(ω_n^+).
Consider the following commutative diagram with exact rows
(Q1) at (-3.5,2) 0;
(Q2) at (-1,2) ( I_n^+⊕Λ_n^-)^⊕ 2;
(Q3) at (3,2) Λ_n^⊕ 2;
(Q4) at (7,2) (Λ_n/ (_n^+,ω_n^- ))^⊕ 2;
(Q5) at (10,2) 0;
(Q6) at (-3.5,0) 0;
(Q7) at (-1,0) ( I_n-1^+⊕Λ_n-1^-)^⊕ 2;
(Q8) at (3,0) Λ_n-1^⊕ 2;
(Q9) at (7,0) (Λ_n-1/ (_n^+,ω_n^- ))^⊕ 2;
(Q10) at (10,0) 0,;
[->] (Q1)–(Q2);
[->] (Q2)–node [above] _n^⊕ 2(Q3);
[->] (Q2)–node [right] pr(Q7);
[->] (Q3)–node [right] pr(Q8);
[->] (Q4)–node [right] pr(Q9);
[->] (Q3)–(Q4);
[->] (Q4)–(Q5);
[->] (Q6)–(Q7);
[->] (Q7)–node [above] _n^⊕ 2(Q8);
[->] (Q8)–(Q9);
[->] (Q9)–(Q10);
where _n: I_n^+⊕Λ_n^-⟶Λ_n is given by (f,g)↦_n^-f +_n^+gω_n, and _n: I_n-1^+⊕Λ_n-1^-⟶Λ_n-1 is the map (f,g)↦_n^-f+_n^+gω_n-1. This is well-defined since _n^±ω_n^∓=ω_n.
It follows from <cit.> that the third vertical map is an isomorphism. Furthermore, Proposition <ref> tells us that the Coleman maps ^±_n give the following identifications
H^1(k_n,T)/H^1_+(k_n,T)≃ I_,n^+≃ I_n^+,⊕ 2 and H^1(k_n,T)/H^1_-(k_n,T)≃Λ
_,n^-≃Λ_n^-,⊕ 2.
Therefore, we may rewrite the commutative diagram above as
(Q1) at (-3.5,2) 0;
(Q2) at (-1,2) H^1(k_n,T)/H^1_+(k_n,T)⊕H^1(k_n,T)/H^1_-(k_n,T);
(Q3) at (3,2) Λ_n^⊕ 2;
(Q4) at (7,2) (Λ_n/ (_n^+,ω_n^- ))^⊕ 2;
(Q5) at (10,2) 0;
(Q6) at (-3.5,0) 0;
(Q7) at (-1,0) H^1(k_n-1,T)/H^1_+(k_n-1,T)⊕H^1(k_n-1,T)/H^1_-(k_n-1,T);
(Q8) at (3,0) Λ_n-1^⊕ 2;
(Q9) at (7,0) (Λ_n-1/ (_n^+,ω_n^- ))^⊕ 2;
(Q10) at (10,0) 0,;
[->] (Q1)–(Q2);
[->] (Q2)–node [above] (Q3);
[->] (Q2)–node [right] pr(Q7);
[->] (Q3)–node [right] pr(Q8);
[->] (Q4)–node [right] ≃(Q9);
[->] (Q3)–(Q4);
[->] (Q4)–(Q5);
[->] (Q6)–(Q7);
[->] (Q7)–node [above] (Q8);
[->] (Q8)–(Q9);
[->] (Q9)–(Q10);
where the first two horizontal maps are given by the compositions of _n^⊕2 with (^+_n,^-_n), and _n^⊕2 with (^+_n-1,^-_n-1), respectively.
Consequently, if we write
”'(E/K_n)=(⟨ u_1,u_2⟩⟶H^1(k_n,T)/H^1_+(k_n,T)⊕H^1(k_n,T)/H^1_-(k_n,T)),
we have the following commutative diagram with exact rows:
0 ⟶ ”'(E/K_n) ⟶ M_n ⟶ (Λ_n/ (_n^+,ω_n^- ))^⊕ 2 ⟶ 0
0 ⟶ ”'(E/K_n-1) ⟶ M_n-1 ⟶ (Λ_n-1/ (_n^+,ω_n^- ))^⊕ 2 ⟶ 0
in which the middle vertical map M_n⟶ M_n-1 is the natural projection, the right vertical map is an isomorphism, and the left vertical map is the induced one.
Therefore, ∇”'(E/K_n)=∇ M_n.
By definition, there is a short exact sequence
0⟶H^1_(K,T)_Γ_n/⟨ u_1,u_2⟩_Γ_n⟶”'(E/K_n)⟶”(E/K_n)⟶ 0.
Furthermore, the first term of this exact sequence is isomorphic to (H^1_(K,T)/⟨ u_1,u_2⟩)_Γ_n. Let A∈ be the change of basis matrix so that
[ u_1 u_2 ]=[ z_1 z_2 ]A.
Then (A) generates the Λ-characteristic ideal of H^1_(K,T)/⟨ u_1,u_2⟩. Therefore, for n≫0,
∇H^1_(K,T)_Γ_n/⟨ u_1,u_2⟩_Γ_n=_ϵ_n((A)(ϵ_n))
by Lemma <ref>(ii). Thus, for n≫0, ∇”(E/K_n) is defined and equal to ∇ M_n-_ϵ_n((A)(ϵ_n)).
Furthermore, ^±()=^±()(A) by definition. Hence, after combining Lemmas <ref> and <ref>, we deduce that
∇'(E/K_n) =
2 _n^+ + _ϵ_n(^-()(ϵ_n)) if n is odd,
2 _n^- +_ϵ_n(^+()(ϵ_n)) if n is even.
It follows from an adaptation of <cit.> and <cit.> to the current setting that for n≫ 0,
∇(E/K_n)=∇'(E/K_n),
which concludes the proof.
Suppose that Conjecture <ref> holds. For n≫ 0, we have
∇(E/K_n)=
2 s_n-1 + λ_- + (p^n-p^n-1)μ_- if n is odd,
2s_n-1+λ_+ + (p^n-p^n-1)μ_+ if n is even,
where s_n=∑_k=1^n(-1)^n-kp^k for n≥ 0.
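For orientation (this closed form is not part of the statement, but follows by summing the finite geometric series), one has s_n = p^n - p^n-1 + ⋯ + (-1)^n-1p = (p^n+1-(-1)^n p)/(p+1); in particular s_n grows like p^n+1/(p+1) as n→∞.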
Let ∙∈{+,-}. Let f_∙∈Λ (resp. f_0) denote a characteristic element of the torsion Λ-module ^∙(E/K_∞) (resp. ^0(E/K_∞)). We have the following Poitou–Tate exact sequence of Λ-torsion modules:
0⟶⊷(^∙)/⟨^∙()⟩⟶^∙(E/K_∞)⟶^0(E/K_∞)⟶0.
As ⊷(^-)= and ⊷(^+)=(XΛ)^⊕2, this gives the equality of Λ-ideals
(X^n^∙^∙()f_0)=(f^∙),
where n^-=0 and n^+=-2.
Lemma <ref> tells us that
∇(E/K_n)=∇(E/K_n)+∇^0(E/K_n).
Hence, the corollary follows from Lemma <ref> and Proposition <ref>.
Let {z_1,z_2} be a Λ-basis of H^1_(K,T). Let us write ^±∘_p (z_i)=(z_i1^±,z_i2^±) for i=1,2. Set f_±:=(z_ij^±)∈Λ.
We consider the following commutative diagram with exact rows
0 ⟶ ( Λ_n^-⊕ I_n^+)^⊕ 2 ⟶ Λ_n^⊕ 2 ⟶ (Λ_n/ (_n^+,ω_n^- ))^⊕ 2 ⟶ 0
0 ⟶ ( Λ_n-1^-⊕ I_n-1^+)^⊕ 2 ⟶ Λ_n-1^⊕ 2 ⟶ (Λ_n-1/ (_n^+,ω_n^- ))^⊕ 2 ⟶ 0
in which the first horizontal maps of the top and bottom rows are F_n^⊕ 2 and F̂_n^⊕ 2 respectively, and the vertical maps from the top row to the bottom row are the natural projections pr,
where F_n: Λ_n^-⊕ I_n^+⟶Λ_n is given by (f,g)↦_n^+f+ω_n^-gω_n, and F̂_n: Λ_n-1^-⊕ I_n-1^+⟶Λ_n-1 is the map (f,g)↦_n^+f+ω_n^-gω_n-1. This is well-defined since Λ_n^-≃Λ/ω_n^-, I_n^+≃Λ/ω_n^+ and ω_n^+ω_n^-=ω_n^-ω_n^+=ω_n.
The projection Λ_n/ (_n^+,ω_n^- )⟶Λ_n-1/ (_n^+,ω_n^- ) is bijective.
This is <cit.>.
Define
M_n:=Λ_n^⊕ 2/⟨ F_n^⊕ 2(z_1),F_n^⊕ 2(z_2) ⟩ and M_n-1:=Λ_n-1^⊕ 2/⟨F̂_n^⊕ 2(z_1),F̂_n^⊕ 2(z_2) ⟩,
where F_n^⊕ 2(z_i), i=1,2 denotes the image of z_i under the composition
H^1_(K,T) H^1(k_∞,T)⟶ H^1(k_n,T)Λ_,n^-⊕ I_,n^+≃(Λ_n^-⊕ I_n^+)^⊕ 2Λ_n^⊕ 2,
and F̂_n^⊕ 2(z_i) are defined similarly. Our goal is to study ∇ M_n, via special matrices introduced in Definition <ref>.
For n≫0,
∇ M_n =
2 (_n^+(X)) + _ϵ_n(f_-(ϵ_n)) if n is odd,
2 (ω_n^-(X)) +_ϵ_n(f_+(ϵ_n)) if n is even,
where ϵ_n=ζ_p^n-1.
Since the projection map π_n: M_n ⟶ M_n-1 is surjective and M_n-1 is finite, we have
∇ M_n=__p(π_n).
One can see that π_n≃(Λ/⟨Φ_n⟩)^2/⟨ F_n^⊕ 2(c_1^-,+),F_n^⊕ 2(c_2^-,+) ⟩. Therefore, it follows from <cit.> that
__p(π_n)=
2 (_n^+(X)) + _ϵ_n(f_-(ϵ_n)) if n is odd,
2 (ω_n^-(X)) +_ϵ_n(f_+(ϵ_n)) if n is even.
For n≫ 0, we have
∇(E/K_n)=
2( q_n-1-1) + λ_- + (p^n-p^n-1)μ_- if n is odd,
2( q_n-1+1)+λ_+ + (p^n-p^n-1)μ_++1 if n is even.
It follows from Lemma <ref> that we have
∇ M_n =
2 (_n^+(X)) + _ϵ_n(f_-(ϵ_n)) if n is odd,
2 (ω_n^-(X)) +_ϵ_n(f_+(ϵ_n)) if n is even,
where ϵ_n=ζ_p^n-1. By Lemma <ref> and the commutative diagram above, we have
0 ⟶ ”(E/K_n) ⟶ M_n ⟶ (Λ_n/ (_n^+,ω_n^- ))^⊕ 2 ⟶ 0
0 ⟶ ”(E/K_n-1) ⟶ M_n-1 ⟶ (Λ_n-1/ (_n^+,ω_n^- ))^⊕ 2 ⟶ 0
in which the middle vertical map M_n⟶ M_n-1 is the natural projection and the right vertical map is an equality.
It then follows from Lemma <ref> that ∇'(E/K_n)=∇ M_n, for n≫ 0. By Lemma <ref> (i), Proposition <ref> and Lemma <ref> we have
∇(E/K_n)=∇ M_n +∇^0(E/K_n).
On the other hand, we have ( ^+(E/K_∞))=( f_0(X)f_+(X)/X ) and ( ^-(E/K_∞))=( f_0(X)f_-(X)). Hence, the result follows.
We are now ready to conclude the proof of Theorem <ref>.
Suppose that Conjecture <ref> holds. Under our running hypotheses, we have:
* _ E(K_n) is bounded independently of n;
* Assume that (E/K_n )[p^∞] is finite for all n≥ 0. Define r_∞=sup_n≥0{_ E(K_n)}. Then, for n≫ 0, we have
∇(E/K_n )[p^∞]=
2s_n-1 + λ_- + ϕ(p^n)μ_- -r_∞ if n is odd,
2s_n-1 +λ_+ + ϕ(p^n)μ_+ -r_∞ if n is even,
where ϕ(p^n)=p^n-p^n-1, and s_n=∑_k=1^n(-1)^n-kp^k, for n≥ 0.
Since ∇(E/K_n) is defined when n is sufficiently large by Corollary <ref>, the kernel and cokernel of the natural map (E/K_n+1)→(E/K_n) are finite for such n. In particular, ∇(E/K_n) is bounded independently of n. Hence, the first assertion follows from the well-known exact sequence
0 ⟶ E(K_n)⊗_p/_p ⟶_p^∞(E/K_n) ⟶(E/K_n )[p^∞] ⟶ 0.
It follows from the exact sequence (<ref>) and Lemma <ref> that
∇(E/K_n )[p^∞] = ∇(E/K_n)-∇ E(K_n)⊗_p.
Part (i) tells us that ∇ E(K_n)⊗_p=r_∞ for n≫0. Thus, the assertion (ii) follows from Corollary <ref>.
Similarly to <cit.>, it would be possible to give an alternative proof of part (i) of Theorem <ref> via analogues of the control theorems in Section 9 of op. cit.
|
http://arxiv.org/abs/2409.02373v1 | 20240904014250 | Multifractaility, topology and anomalous Hall conductivity on a 30 degrees twisted bilayer honeycomb lattice | [
"Grigory Bednik"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.dis-nn",
"cond-mat.other",
"cond-mat.stat-mech"
] |
Department of Physics, University of Nebraska Omaha
§ ABSTRACT
We consider a 30^∘ twisted bilayer formed by two copies of the Haldane model and explore the evolution of its properties with varying interlayer coupling strength. Specifically, we compute the system's energy spectrum, its fractal dimensions, its topological entanglement entropy, its local Chern markers, and its anomalous Hall conductivity. We find that at weak interlayer coupling the system remains gapped and retains the topological properties of the isolated layers, whereas at strong interlayer coupling it forms a gapless multifractal state. We also establish that the anomalous Hall conductivity can be used to characterize the system's topological properties in the same way as a local Chern marker.
Multifractaility, topology and anomalous Hall conductivity on a 30 degrees twisted bilayer honeycomb lattice
Grigory Bednik
September 4, 2024
============================================================================================================
§ INTRODUCTION
Quasicrystals are a novel class of materials discovered in the 1980s, whose lattice is non-periodic and yet is ordered according to certain rules.
One possible way to obtain a quasicrystal is to consider a periodic lattice in a higher dimension and project it onto a plane with a normal vector incommensurate with the original lattice <cit.>. Other specific models of quasicrystals include the Fibonacci lattice <cit.>, Penrose tiling, Rauzy tiling <cit.>, the Aubry-André model <cit.>, etc. It was established that quasicrystals generally host multifractal electronic states, which are neither extended nor localized, but are instead characterized by a non-integer fractal dimension <cit.>.
It is widely known that periodic crystals can host multiple topological phases.
On the other hand, topological properties of quasicrystals are still not well-understood. In a recent work <cit.>, it was suggested that topological quasicrystals can be built from crystals in higher dimensions, and their topology can be deduced from crystal topology. In a few other works, empirical models of topological quasicrystals were proposed <cit.>,
which host gapless edge states and whose topology was characterized directly using empirical 'local Chern markers' <cit.>. However, there is still no systematic understanding of how non-trivial topology in quasicrystals may arise.
Motivated by this, we are interested in whether it is possible to obtain a topological quasicrystal by smoothly deforming a crystal with known non-trivial topology. Specifically, we choose to consider the 30 degree twisted bilayer honeycomb lattice - a model which is currently actively studied in the context of twisted bilayer graphene <cit.>. However, we consider a
twisted bilayer in which each monolayer is described by the gapped Haldane model <cit.>. We explore our model's evolution as the interlayer coupling is increased. We find that at weak coupling the model retains all properties of the Haldane model: bulk states remain extended and edge states still exist.
But as the interlayer coupling increases, the model undergoes a transition into a strongly coupled state, which is gapless, and whose eigenstates are multifractal.
Characterizing topological properties of our model turns out to be challenging because most well-established topological concepts (e.g. Chern numbers) were developed in momentum space, i.e. only for crystalline systems. Nevertheless, we are still able to find out if the system is topological by computing its topological entanglement entropy <cit.>. Furthermore, we study its topological properties by computing 'local Chern marker' <cit.>. Finally we numerically compute its anomalous Hall conductivity by using Kubo formula and find that it can be used to characterize topological properties of the quasicrystal in the same way as local Chern marker.
This paper is organized as follows. In Sec. <ref> we introduce our model, and in Sec. <ref> we describe its phase diagram, which consists of three phases: topological weakly-coupled, non-topological weakly coupled and strongly coupled multifractal. In Sec. <ref> we describe its topological properties. Specifically, in Sec. <ref> we present our results for its topological entanglement entropy and in Sec. <ref> we describe its topological properties using the 'local Chern markers'. In Sec. <ref> we discuss anomalous Hall conductivity. We summarize our findings in Sec. <ref> .
§ MODEL
We consider a tight-binding model of electrons on a bilayer of two honeycomb lattices twisted relative to each other by a 30^∘ angle (see Fig. <ref>). Each of the lattices is described by the Haldane Hamiltonian, and in addition, electrons from the different layers interact via an exponentially decaying potential. The total Hamiltonian has the form
H = ∑_α H_α + V_1-2,
where α = 1, 2 is the layer index, H_α is Haldane Hamiltonian of each layer, and V_1-2 describes interaction between the layers. Specifically,
H_α = t_intra∑_<ij> c_i, α^† c_j, α
+ t_2 ∑_≪ i, j≫ e^-i ν_ijϕ c_i, α^† c_j, α
+ 3 √(3)m ∑_i ϵ_i c_i, α^† c_i, α,
V_1-2 = t_inter∑_i, j v_i,j c_i, 1^† c_j, 2 + h.c.
Here i, j numerate the lattice sites within each layer, < i, j > is a sum over nearest neighbors, and ≪ i, j ≫ is a sum over next-nearest neighbors. t_intra is the nearest-neighbor hopping (between A and B sites), t_2 is the next-nearest-neighbor hopping (between A-A or B-B sites), ν = ± 1 depending on the relative direction between i, j, ϕ = π/2, and ϵ = ± 1 for A(B) sites (see e.g. ref. <cit.> for more details). The interlayer potential has the form
v_i, j = e^- r_ij/r_0 for r_ij < r_max, and v_i, j = 0 otherwise.
Here r_ij is the distance between the sites i, j, and to simplify the model we neglect the interlayer spacing. We also introduced a long-distance cutoff r_max to decrease the complexity of our numerical calculations. We fix the numerical values of our parameters as t_intra=1, t_2=1, ϕ=π/2, r_0 =1, r_max = 2 and explore the model's evolution over m, t_inter.
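To make the construction of the coupling term concrete, the following minimal Python sketch assembles the interlayer block v_ij from the in-plane site coordinates of the two layers; the function and argument names (interlayer_block, pos1, pos2) are our own illustrative choices, and the full single-particle Hamiltonian is then the Hermitian block matrix built from H_1, H_2 and this block.

import numpy as np

def interlayer_block(pos1, pos2, t_inter, r0=1.0, r_max=2.0):
    # pos1, pos2: (N1, 2) and (N2, 2) arrays of in-plane site coordinates of layers 1 and 2
    # (the interlayer spacing is neglected, as in the text)
    d = np.linalg.norm(pos1[:, None, :] - pos2[None, :, :], axis=-1)  # distances r_ij
    v = t_inter * np.exp(-d / r0)        # v_ij = exp(-r_ij / r_0), scaled by t_inter
    v[d >= r_max] = 0.0                  # long-distance cutoff r_max
    return v

# The full Hamiltonian is the block matrix H = [[H1, V], [V.conj().T, H2]],
# which can be diagonalized with np.linalg.eigh.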
§.§ Phase diagram
To study the properties of our model, we compute its energy spectrum and wavefunctions numerically using exact diagonalization. Further, we characterize their spatial properties by computing fractal dimensions D_q = log(P_q)/(q-1) log(N), where P_q is a participation ratio defined as P_q = ∑_i (ψ^†ψ )^q, and i=1… N runs over all lattice sites.
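As an illustration of this diagnostic, here is a minimal Python sketch (the function name is ours) computing D_q from a single normalized eigenvector; we write the exponent as log P_q /((1-q) log N), i.e. with the sign convention for which a fully extended state has D_q = 1 and a single-site state has D_q = 0, consistent with the limits discussed below.

import numpy as np

def fractal_dimension(psi, q):
    # participation ratio P_q = sum_i (|psi_i|^2)^q
    w = np.abs(psi) ** 2
    w = w / w.sum()                      # enforce normalization of the eigenstate
    N = w.size
    P_q = np.sum(w ** q)
    return np.log(P_q) / ((1 - q) * np.log(N))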
First, let us recall our model's properties in the known case when the interlayer coupling t_inter is set to zero. This model is a Chern insulator at |m| < 1 (we assume that both layers have the same signs of Chern numbers). Its bulk states form two bands separated by an energy gap, which in turn, is filled by edge states. At m=1, the bulk gap closes, and at m>1, it reopens again, and the system becomes a trivial insulator.
Now let us look at the properties of our model once the interlayer coupling is turned on. We find that the phase diagram of the system in m ,t_inter space has three phases, which we label as 'topological weakly coupled', 'non-topological weakly coupled' and 'strongly coupled' respectively (see Figs. <ref>). To be more specific, if we start from the topological phase of the Haldane model, we observe (see Figs. <ref>) that at small coupling, the bulk states are still separated by an energy gap and the edge states persist, but as t_inter is increasing, the bulk energy gap shrinks. Once t_inter reaches a critical value, the bulk gap closes, and the edge states disappear. As we increase t_inter further, the system remains gapless. Similarly, if we start from the non-topological phase (see Fig. <ref>), at small t_inter, the system remains gapped, and as t_inter is increasing, the gap becomes smaller and eventually disappears.
To explore multifractal properties of our model, we plot average fractal dimension and its standard deviation for all states against the system size and a few values of q (Fig. <ref>). At zero, as well as at small t_inter, we observe that as the system size increases, the average fractal dimension of all states in the system approaches 1 and its standard deviation approaches 0. In other words, we confirm that in both topological and non-topological weakly coupled phases, the bulk eigenstates are fully extended in the same way as in a fully periodic system.
Characterizing multifractal properties at large t_inter turns out to be challenging because finite size corrections to fractal dimensions scale as 1/log(L), and because of that it is impossible to obtain reliable results for any realistic system size, which can be handled numerically. Nevertheless,
when we plot average fractal dimensions and their deviations as a function of the system size in the 'strongly coupled' phase, we observe that the former converge to values smaller than 1, and the latter converge to values greater than 0. Furthermore, we can see that as the system size grows, average fractal dimensions converge to different values for various q. Thus we confirm that eigenstates in the strongly coupled phase are multifractal - they are neither extended, nor localized.
To confirm that the system indeed undergoes a phase transition between the weakly and strongly coupled phases, we have to plot its derivatives of free energy and confirm that it diverges at the transition point. To be more specific, we consider a many body system at half filling
and assume zero temperature. Simply speaking, we consider a many particle system, in which all states with negative energies are filled, but all states with positive energies are empty. Its free energy is just equal to the total energy, i.e. sum of energies of all filled states. First, we confirm that the topological phase transition is indeed a phase transition by plotting its second derivative as a function of m (see Fig. <ref>) and observing that it
is discontinuous.
Next, we plot the second derivative of free energy as a function of t_inter (see Fig. <ref>) and confirm that it has a peak at the transition point between the weakly and strongly coupled phases, whose magnitude grows with the system size.
§.§ Topological properties
§.§.§ Topological entanglement entropy
To study entanglement properties of a many body system, one has to partition it into two subsystems in coordinate space. For a subsystem with density matrix ρ_A, the entanglement entropy is defined as S = - trρ_A logρ_A. If a 2D system has smooth boundary, its entanglement entropy scales linearly with system size L as
S = α L - γ,
where the constant γ is known as the topological entanglement entropy - a term present only in topological systems. We note that in this definition it is important that the system's boundary is smooth, since otherwise its entanglement entropy might get additional contributions, e.g. due to corners (<cit.>).
To extract the topological contribution γ, we compute the difference between entanglement entropies of several systems in such a way that their leading term, which is linear over L, as well as subleading contributions due to corners exactly cancel out. To do this, we choose a configuration of four systems shown on the Fig. <ref>. We compute entanglement entropies of each subsystem using the method suggested in Refs. <cit.>, which we briefly describe in Sec. <ref>.
We plot the results for γ as functions of m, t_inter on the Figs. <ref> (we assume E_F = 0). We can see that in the 'weakly coupled topological' phase, γ is a non-zero constant, which in turn confirms that it is a topological quasicrystal. In a similar way, in the 'weakly coupled non-topological' phase, γ is equal to zero, which means that the system is non-topological. In the strongly coupled phase, the concept of topological entanglement entropy becomes ill-defined because the system is gapless.
§.§.§ Local Chern marker
It was argued in Ref. <cit.> that the Chern number is not well-defined for a system of finite size because,
if one defines it in the same way as the corresponding infinite-size expression, the answer becomes exactly zero.
Instead, it was suggested that the topological properties of a system can be characterized empirically using the 'local Chern marker', defined in the following way. First, one can define a projector onto the subspace of filled
states as P(r_i, r_j) = ∑_E_λ < E_Fψ_λ(r_i) ψ^†_λ(r_j).
Next, one may project
the coordinate operators as
X̃(r_i , r_j) = ∑_r_k P(r_i, r_k) x_k P(r_k, r_j)
and Ỹ(r_j , r_i) = ∑_r_k' P(r_j, r_k') y_k' P(r_k', r_i). The local Chern marker is defined as
C(r_i) = 2 π i/S∑_r_j[ X̃ (r_i, r_j) , Ỹ (r_j, r_i) ],
where S is the average area per lattice site. One may check that for the conventional Haldane model on a finite lattice, this local Chern marker inside the bulk is equal to the value of the Chern invariant in the corresponding continuum model, whereas near the edges its value is large and of opposite sign, in such a way that its sum over all lattice sites is zero: ∑_i C(r_i) = 0.
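The definition above translates directly into a few lines of linear algebra. The sketch below is one possible implementation (the helper name is ours); it assumes the filled-state eigenvectors are stored as columns of a matrix and that x, y are arrays of site coordinates.

import numpy as np

def local_chern_marker(evecs_filled, x, y, area_per_site):
    # projector onto the filled states, P(r_i, r_j) = sum_{E < E_F} psi(r_i) psi^dagger(r_j)
    P = evecs_filled @ evecs_filled.conj().T
    Xt = P @ np.diag(x) @ P              # projected coordinate operators
    Yt = P @ np.diag(y) @ P
    comm = Xt @ Yt - Yt @ Xt
    # C(r_i) = (2 pi i / S) [Xt, Yt]_{ii}; the diagonal of the commutator is purely
    # imaginary, so the result is real up to numerical error
    return ((2j * np.pi / area_per_site) * np.diag(comm)).real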
We plot the local Chern marker of 30^∘ twisted bilayer Haldane model for various values of t_inter on Fig. <ref>. One may see that in the 'weakly coupled' phases, it behaves in the same way as in the Haldane model on a monolayer. Specifically, in the 'weakly coupled topological phase', its bulk value remains the same and equal to the value in the case of zero interlayer coupling. Similarly, in the 'weakly coupled non-topological' phase, its bulk value is equal to zero. On the other hand, at larger values of t_inter, the local Chern marker starts behaving differently from the case of periodic lattice.
Specifically, the local Chern marker no longer has the form of a constant value in the bulk and a large value of opposite sign at the edge. Instead, regions with C(r_i) values of opposite signs start emerging within the bulk of the system, and sufficiently far away from the transition point these regions become indistinguishable from the edges of the system. In other words, at large t_inter the local Chern marker C(r_i) strongly fluctuates within the bulk, so that there is no distinction between the bulk and the edges as in a crystalline topological insulator.
Finally, we point out that the local Chern marker's behavior starts deviating from the limit of non-interacting layers at smaller value of t_inter than the gap closing phase transition. We interpret this result in a way that the local Chern marker can be used as a tool to discover a novel 'hidden' phase transition which does not manifest itself through the behavior of the eigenenergies.
§.§.§ Anomalous Hall conductivity
We start our analysis by introducing the electric current of our model. Specifically, if a tight-binding Hamiltonian for a given system has hoppings H_ij between sites i, j, then in the presence of an electric field they change according to the Peierls substitution and thus become H_ij e^ie/ħ∫_r_i^r_jA⃗d⃗r⃗, where A⃗ is the vector potential. If we expand these hoppings to first order in A⃗, we obtain that the electric current operator has the following expression
J⃗_ij = ie/ħ H_ij (r⃗_j - r⃗_i).
Here J⃗_ij is a vector directed along the bond between the sites i, j. In fact, the last expression could be obtained immediately by writing the Heisenberg equation and assuming that the current is proportional to the velocity operator.
Further, one can compute the response to an applied electric field by using the expression for the electric current (<ref>) and applying the Kubo formula. The anomalous Hall conductivity is its antisymmetric part, and is thus
given by Eq. (<ref>).
After lengthy but straightforward calculations (specifically, writing the electron Green's functions and performing the Matsubara summation), the expression for the anomalous Hall conductivity can be brought to the following form
σ^(xy)_ij = - e^2/(2ħ^2) ∑_f,e∑_k,l 1/(E_f - E_e)^2
×{ψ^†_e, i H_ij (x_j - x_i) ψ_f, j·ψ^†_f, k H_kl (y_l - y_k) ψ_e, l
- ψ^†_f, i H_ij (x_j - x_i) ψ_e, j·ψ^†_e, k H_kl (y_l - y_k) ψ_f, l
- ψ^†_e, i H_ij (y_j - y_i) ψ_f, j·ψ^†_f, k H_kl (x_l - x_k) ψ_e, l
+ ψ^†_f, i H_ij (y_j - y_i) ψ_e, j·ψ^†_e, k H_kl (x_l - x_k) ψ_f, l}.
In this equation, i, j are lattice sites between which the electric current is computed, k, l are other pairs of sites over which summation is performed, E, ψ are eigenenergies and eigenfunctions of the model, indices f, e numerate filled/empty states respectively.
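For completeness, here is a brute-force numerical sketch of the site-resolved quantity σ_i = ∑_j σ^(xy)_ij evaluated directly from the formula above, in units of e^2/ħ^2 (the function name and argument conventions are ours; H is the real-space hopping matrix, x, y the site coordinate arrays, and evals, evecs the eigenpairs with eigenvectors as columns). It scales poorly with system size and is meant only to illustrate the structure of the expression.

import numpy as np

def site_hall_conductivity(H, evals, evecs, x, y, E_F=0.0):
    dx = x[None, :] - x[:, None]            # (x_j - x_i)
    dy = y[None, :] - y[:, None]
    A = H * dx                              # H_ij (x_j - x_i)
    B = H * dy                              # H_ij (y_j - y_i)
    filled = np.where(evals < E_F)[0]
    empty = np.where(evals >= E_F)[0]
    sigma = np.zeros(len(x))
    for f in filled:
        pf = evecs[:, f]
        for e in empty:
            pe = evecs[:, e]
            denom = (evals[f] - evals[e]) ** 2
            afe = pf.conj() @ A @ pe        # sum_{k,l} psi_f^dag H_kl (x_l - x_k) psi_e
            aef = pe.conj() @ A @ pf
            bfe = pf.conj() @ B @ pe
            bef = pe.conj() @ B @ pf
            term = (pe.conj()[:, None] * A * pf[None, :]) * bfe \
                 - (pf.conj()[:, None] * A * pe[None, :]) * bef \
                 - (pe.conj()[:, None] * B * pf[None, :]) * afe \
                 + (pf.conj()[:, None] * B * pe[None, :]) * aef
            sigma += term.sum(axis=1).real / denom   # sum over j for each bond origin i
    return -0.5 * sigma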
Let us emphasize once again, that in the Eq. (<ref>), σ_ij describes electric current through a bond between the lattice sites i, j (we assumed that the electric field is uniform). However, one can sum electric currents through all bonds next to a given lattice site i and consider σ_i = ∑_j σ_ij. For a crystalline system, one can take a step further and split the lattice site index i into two indices i', α numerating unit cells and sublattice degrees of freedom respectively. The well-known integer value of the anomalous Hall conductivity is obtained after an additional summation over α. A simple explanation of the latter is as follows: for an infinite crystalline system one may transform any dependency over unit cell index i' into momentum representation, and the well-known integer AHE occurs at zero momentum. However, its expression in terms of Berry curvature still contains summation over sublattice degrees of freedom. Nevertheless, on a honeycomb lattice, the two sublattice degrees of freedom are equivalent to each other, and therefore the AHE for each of them would be the same, i.e. half-integer.
More interestingly, properties of AHE obtained from the Eq. (<ref>) on a finite lattice
are fundamentally different from the case of an infinite crystal. Namely, the AHE on a finite lattice behaves in a similar way to the local Chern marker: its sum over all lattice sites i, j is equal to zero, its bulk value is constant for a crystalline lattice and equal to the value it would have in the corresponding infinite lattice, and its value at the edges has opposite sign and large magnitude, so that its sum over all sites near the edges exactly cancels the sum over all sites in the bulk. At first sight this may seem paradoxical, but there is a simple argument for it. Indeed, it is well known that on an infinite lattice the AHE is proportional to an integer topological invariant, and phases with different topological invariants cannot be smoothly transformed into each other, but are separated by phase transitions. However, it is also well known that phase transitions do not exist on a finite lattice, but appear only in the thermodynamic limit. This means that topologically distinct phases cannot exist on a finite lattice! In other words, any phase on a finite lattice is topologically equivalent to the trivial one. For this reason, if one sums the local Chern marker (<ref>) over all lattice sites r_i, or sums the AHE (<ref>) over all sites i, j, one obtains exactly zero.
We present our numerical calculations of σ_i for our model of twisted bilayer on the Fig. <ref>. One can see that in the case of zero interlayer coupling, i.e. when the model is just a superposition of two periodic layers, σ_i behaves just like a local Chern marker. In the 'weakly coupled topological phase', it has a constant value inside the bulk equal to the value it would have in the limit of infinite size, whereas at the boundary its value has an opposite sign and large magnitude so that the total sum ∑_i σ_i is zero. Once the interlayer coupling t_inter is turned on, the behavior remains the same while t_inter is small. However, once t_inter is increased, σ_i is no longer uniform inside the bulk. The behavior of σ_i shows a phase transition at the same value of t_inter as the local Chern marker, and as t_inter increases further, the distinction between the bulk and the edge behavior smoothly disappears. This fact supports our argument that for a quasicrystal there is no distinction between bulk and edge as in a crystal, but instead there are 'bulk-like' and 'edge-like' regions.
Finally let us explain the meaning of zero total ∑_i σ_i from the experimental perspective. Indeed, introducing the vector potential A⃗ into our Hamiltonian with open boundary conditions is equivalent to placing our sample under external electric field. However, if one places a finite and isolated sample of a material with non-trivial AHE under electric field, there would be no electric currents flowing into or out of the sample because it does not have any contacts, through which the current might flow. In other words, our calculations
physically mean that if a finite and isolated sample is placed under electric field, there will appear electric currents inside the sample, but no currents into or out of it. In order to obtain a physically measurable anomalous Hall conductivity, one has either to consider explicitly the flow of electric current through contacts, or just to assume that the sample is infinite in the direction of electric current, as it was done in multiple past works (e.g. <cit.>). Since within our model, the case of an infinite sample is fundamentally different from a finite one, our results can be applied only to the latter. Nevertheless, we anticipate that if our model is realized experimentally, then at zero as well as small interlayer coupling, while the system is gapped, the total Hall conuctivity would be just an integer, i.e. the sum of two Hall conductivities of the isolated samples. On the other hand, in the gapless phase, the Hall conductivity will become non-universal, and possibly it may not even have a well-defined limit at infinite system size. The latter fact can be interpreted in a way that 'strong quasicrystallinity' is qualitatively similar to strong disorder: in its presence, all observable properties become sample-dependent.
§ DISCUSSION
In this work, we have studied topological properties of quasicrystalline 30 degrees twisted bilayer honeycomb lattice. We started from the case of two uncoupled layers each of which forms Haldane model, and tracked its evolution once the interlayer coupling is increased. We found that at small interlayer coupling, two layers of crystalline topological insulators become a quasicrystalline topological insulator. The latter fact is not obvious because topological properties of Haldane model are characterized by Chern invariant defined in momentum space, which is in turn well-defined only in the presence of crystalline translational symmetry. In this regard, one could be concerned whether translational symmetry breaking might break topological protection, but we established that
at small, but non-zero interlayer coupling, this is not the case: our bilayer has exactly the same topological properties as a superposition of two monolayers. Specifically, it still has edge states, has exactly the same value of topological entanglement entropy and the same behavior of local Chern marker and anomalous Hall conductivity.
As the interlayer coupling reaches a critical value, the system undergoes a 'hidden' phase transition which manifests itself only through the behavior of local Chern markers and anomalous Hall conductivity. As the interlayer coupling increases further, our system undergoes a phase transition into a gapless phase, whose eigenstates are multifractal.
Rigorous characterization of such a gapless phase's topological properties is challenging, but can just qualitatively say that its topology starts 'deteriorating'. Specifically, in the weakly coupled phases, the local Chern marker and anomalous Hall conductivity is constant in the bulk, but have a value of opposite sign in the boundary, whereas in the strongly coupled phase, these quantities significantly fluctuate within the bulk, but do not have any special features at the boundary. In fact, one may consider an analogy between our model and a disordered topological insulator. At weak disorder, the system has exactly the same topological properties as the topological insulator without disorder. However, once the disorder becomes strong, fluctuations of the disorder potential effectively behave like edges themselves, and this fact qualitatively explains that the bulk becomes indistinguishable from the physical edges of the system.
It would be of interest to realize our model in experiments, but it is challenging because typically (e.g. in twisted bilayer graphene), at large twist angles, the interlayer coupling is very weak. However, one may still try to make it larger by applying high pressure between the layers. More plausibly, one might try to realize our model using ultracold atoms (see e.g. <cit.>), which have proved to be a powerful tool for realizing various physical models.
In summary, we proposed a way to realize topological quasicrystals by starting from topological crystalline materials and breaking their translational symmetries. Moreover we proposed a model of a quasicrystal, which explicitly hosts a topological phase transition. We demonstrated that non-trivial topological properties of quasicrystals can be characterized not only by local Chern markers, but also by topological entanglement entropy and anomalous Hall conductivity. Most surprisingly, we established that the anomalous Hall conductivity in an open system, in our case in a quasicrystal behaves qualitatively in the same way as a local Chern marker. We hope that in the future, our results may be used to obtain rigorous topological classification of quasicrystals. We are also interested in studying their unusual transport properties and possibilities for future applications.
§ ACKNOWLEDGEMENTS
The author would like to thank Institute of Basic Science (Daejeon, S. Korea), in which this project started and Profs. Moon Jip Park, Kyong Min Kim, Sergej Flach, Sergey Syzranov, Predrag Nikolic, Renat Sabirianov, Luis Santos, Ivan Khaymovich for helpful discussions.
Financial support by the National Science Foundation through EPSCoR RII Track-1: Emergent Quantum Materials and Technologies (EQUATE), Award OIA-2044049 is acknowledged.
§ CALCULATION OF THE ENTANGLEMENT ENTROPY
Here we briefly describe the method we use to compute entanglement entropy, which was proposed in Refs. <cit.> and used in a number of works later on, e.g. in <cit.>. We consider the model of twisted bilayer at half filling given by the Hamiltonian (<ref>) and are interested in entanglement entropy of any subsystem from the Fig. <ref>. First, we compute eigenvectors ψ of the Hamiltonian (<ref>) and select only the ones corresponding to its filled states. Next, we compute 'correlation matrix', i.e. sum of their outer products
C_ij = ∑_filledψ (r⃗_i) ψ^† (r⃗_j).
After that we select the components whose coordinates r⃗_i, j are inside the subsystem whose entanglement entropy we are interested in and thus obtain 'reduced correlation matrix' C_A, ij. The entanglement entropy can be expressed in terms of its eigenvalues ζ as
S = - ∑_m( ζ_m logζ_m + (1 - ζ_m) log (1 - ζ_m) ).
Finally, to subtract the leading 'area law' terms and extract the topological contribution, we compute the sum of the entanglement entropies over several figures as shown in Fig. <ref>.
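The procedure just described fits in a few lines; the following Python sketch (the function name is ours) computes S for one region, assuming the filled-state eigenvectors are columns of a matrix and region_sites is a list of site indices of the subsystem. The topological term γ is then obtained from the signed combination of such entropies over the regions of Fig. <ref>.

import numpy as np

def entanglement_entropy(evecs_filled, region_sites):
    # correlation matrix C_ij = sum over filled states of psi(r_i) psi^dagger(r_j)
    C = evecs_filled @ evecs_filled.conj().T
    C_A = C[np.ix_(region_sites, region_sites)]      # reduced correlation matrix
    zeta = np.linalg.eigvalsh(C_A)
    zeta = np.clip(zeta, 1e-12, 1.0 - 1e-12)          # regularize eigenvalues at 0 and 1
    return float(-np.sum(zeta * np.log(zeta) + (1.0 - zeta) * np.log(1.0 - zeta)))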
§ KUBO FORMULA FOR ANOMALOUS HALL EFFECT
In this section, we derive an expression for the anomalous Hall conductivity on a finite lattice using Kubo formula. Importantly, we do not use momentum representation, which in turn makes our derivation applicable to the case of non-periodic lattice.
We start from partition function of the system, which we write as
𝒵 = ∫ D ψ D ψ̅
e^∑_w, i i w c^†_i, w c_i, w - ∑_w, i, j c^†_i, w H_ij c_j, w .
In this expression, we have transformed imaginary time into Matsubara frequencies representation, but left the coordinate representation. H_ij is the lattice Hamiltonian, and i, j numerate its sites. Electric field is introduced via Peierls substitution, which in turn may be approximated as
H_ij→ H_ij e^i ∫_r_i^r_jA⃗d⃗r⃗≈ H_ij + i H_ijA⃗ (r⃗_j - r⃗_i).
From the last equation we can see that the current components have the form
J⃗_ij, Ω = i c^†_i, w+Ω H_ij (r⃗_j - r⃗_i) c_j, w
Note, that the current is defined here as a vector for each bond connecting the lattice sites i, j.
Now we apply the Kubo formula and write the expectation value of the current as
J_x, ij, Ω = ∑_k, l⟨ J_x, ij, Ω J_y, kl, -Ω⟩ A_y, kl, Ω.
Now we assume that the electric field is constant in space, and the vector potential is written in the gauge A⃗ = E⃗/(iΩ). The anomalous Hall conductivity is given by the antisymmetric part of the current-current correlator, namely
σ^(Hall)_ij = 1/2iΩ∑_kl(⟨ J_x, ij, Ω J_y, kl, -Ω⟩ - ⟨ J_y, ij, Ω J_x, kl, -Ω⟩).
Note that in our notations, the conductivity has just two indices i, j, which physically describe electric current at the bond connecting lattice sites i, j as a response to constant electric field (if the electric field was not constant, we would have to include 4 components).
We evaluate the current-current correlator in a conventional way by substituting fermionic Green's functions and summing over Matsubara frequencies. The Green's function can be written in terms of eigenstates ψ_n, i and energies E_n (here n numerates the states) as
G_ij, w = ∑_n ψ_n, iψ^†_n, j/(i w - E_n).
After we sum over Matsubara frequencies, assume zero temperature and take the limit of zero external frequency, we can obtain an answer for σ^(Hall)_ij, which is given by the Eq. (<ref>).
Finally, let us look at the sum ∑_ijσ^(Hall)_ij over all lattice sites i, j. We can simplify the Eq. (<ref>) by using explicitly the fact that ψ are eigenvectors, namely by applying an identity
∑_k, lψ^†_f, k H_k, l(x_l - x_k) ψ_e, l
= (E_f - E_e) ∑_kψ^†_f, k x_k ψ_e, k.
In this way we can obtain that the total sum has the form
∑_i, j σ^(Hall)_ij = 1/2∑_f,e∑_i, k
×{ψ^†_e, i x_i ψ_f, i·ψ^†_f, k y_k ψ_e, k - ψ^†_f, i x_i ψ_e, i·ψ^†_e, k y_k ψ_f, k
- ψ^†_e, i y_i ψ_f, i·ψ^†_f, k x_k ψ_e, k + ψ^†_f, i y_i ψ_e, i·ψ^†_e, k x_k ψ_f, k}.
After applying the completeness relation
∑_e ψ_e, i·ψ^†_e, k
= 1 - ∑_f ψ_f, i·ψ^†_f, k
one can derive that the sum σ_ij is equal to
∑_i, j σ^(Hall)_ij
= 1/2∑_f∑_i{ψ^†_f, i( y_i x_i - x_i y_i) ψ_f, i},
i.e. it is indeed zero for any finite lattice. From the last expression, one can also see how this argument may break down for an infinite lattice: we found that anomalous Hall conductivity is proportional to a trace of a commutator [X, Y]. The latter is always zero in a finite-dimensional space, but does not have to be zero in an infinite-dimensional space.
§ ANOMALOUS HALL CONDUCTIVITY ON AN INFINITE CRYSTALLINE LATTICE
The goal of this section is to show that the expression for the Hall conductivity (<ref>) from the main text is, in the case of an infinite crystalline lattice, indeed equivalent to the well-known expression in terms of the Berry curvature. Here, for clarity, we separate the lattice indices i, j, … into pairs (i, α), (j, β), …, which refer to unit cells and sublattice degrees of freedom respectively.
The key feature of a crystalline lattice is that its eigenvectors are plane waves
ψ_iα = 1/√(N) e^i k⃗x⃗_i ϕ_α, k.
Here the wavefunctions ϕ_α, k are eigenfunctions of the Schrodinger equation in momentum representation
E ϕ_α, k = ∑_β H_αβ (k) ϕ_β, k
and the corresponding momentum-space Hamiltonian is just a Fourier transformation of the original Hamiltonian
H_αβ(k) = ∑_j e^i k⃗(x⃗_j - x⃗_i) H_i,j α, β.
Due to translational symmetry, we can reverse the above expression and thus write it as
H_i,j α, β
= 1/N∑_p H_αβ(p) e^-i p⃗ (x⃗_j - x⃗_i) .
After substituting the above expression, as well as expressions for the eigenstates in the form of plane waves (<ref>) into the main expression for anomalous Hall conductivity (<ref>), one can obtain the lattice version of TKNN formula
∑_jσ^(Hall)_i,j α, β = 1/2∑_f,e∑_γ, δ1/N∑_k⃗1/(E_f - E_e)^2
×{ϕ^†_e, α∂ H_αβ/∂ k_xϕ_f, β·ϕ^†_f, γ∂ H_γδ (k)/∂ k_yϕ_e, δ - ϕ^†_f, α∂ H_αβ/∂ k_xϕ_e, β·ϕ^†_e, γ∂ H_γδ (k)/∂ k_yϕ_f, δ
- ( k_x ↔ k_y ) }.
Notice that in the left side of this equation we have sum ∑_j σ_ij of conductivities through all bonds adjacent to a site i - the quantity we discuss in the main text. In addition, to obtain the physical answer we have to sum over all sublattice degrees of freedom, i.e. over the indices α, β. We also note that in the Eq. (<ref>) we still assume that the lattice is discreet, but the momentum is continuum because the lattice is infinite. Hence it is straightforward to replace the momentum summation with an integration
1/N∑_k → S ∫d^2 k/(2π)^2,
where S is an area of the unit cell.
Finally, by making use of identities
∑_αβϕ^†_e, α∂ H_αβ/∂ k_xϕ_f, β
=
∑_α (E_f - E_e ) ϕ^†_e, α∂ϕ_f, α/∂ k_x
combined with the completeness relation (<ref>) we obtain a familiar expression for anomalous Hall conductivity in terms of Berry curvature
∑_j αβσ_i, j α, β = S ∫d^2 k/(2π)^2∑_α{∂ϕ^†_f, α/∂ k_y∂ϕ_f, α/∂ k_x - ∂ϕ^†_f, α/∂ k_x∂ϕ_f, α/∂ k_y}.
|
http://arxiv.org/abs/2409.02824v1 | 20240904154300 | Locally Trivial Deformations of Toric Varieties | [
"Nathan Ilten",
"Sharon Robins"
] | math.AG | [
"math.AG",
"14D15, 14M25"
] |
§ ABSTRACT
We study locally trivial deformations of toric varieties from a combinatorial point of view. For any fan Σ, we construct a deformation functor _Σ by considering Čech zero-cochains on certain simplicial complexes. We show that under appropriate hypotheses, _Σ is isomorphic to '_X_Σ, the functor of locally trivial deformations for the toric variety X_Σ associated to Σ. In particular, for any complete toric variety X that is smooth in codimension 2 and -factorial in codimension 3, there exists a fan Σ such that _Σ is isomorphic to _X, the functor of deformations of X. We apply these results to give a new criterion for a smooth complete toric variety to have unobstructed deformations, and to compute formulas for higher order obstructions, generalizing a formula of Ilten and Turo for the cup product. We use the functor _Σ to explicitly compute the deformation spaces for a number of toric varieties, and provide examples exhibiting previously unobserved phenomena. In particular, we classify exactly which toric threefolds arising as iterated ^1-bundles have unobstructed deformation space.
Locally Trivial Deformations of Toric Varieties
Nathan Ilten and Sharon Robins
===============================================
§ INTRODUCTION
§.§ Background and motivation
Let be an algebraically closed field of characteristic zero and X a variety over . The functor _X of isomorphism classes of (infinitesimal) deformations of X provides useful information on how X might fit into a moduli space.
In the setting where X is smooth, _X coincides with the functor _X' of isomorphism classes of locally trivial deformations of X.
In general, locally trivial first order deformations are described by H^1(X,_X) and obstructions to lifting locally trivial deformations live in H^2(X,_X). In fact, all locally trivial deformations of X are controlled by the Čech complex of the tangent sheaf _X, see <ref> below.
Despite this seemingly concrete description of _X', it is still quite challenging to explicitly understand _X' for specific examples.
In this paper, we will give a purely combinatorial description of the deformation functor _X' when X is a -factorial complete toric variety.
First order deformations of smooth complete toric varieties were described combinatorially by N. Ilten in <cit.>. In <cit.> Ilten and R. Vollmert showed that homogeneous first order deformations can be extended to one-parameter families over ^1, providing a sort of skeleton of the versal deformation (see also work by A. Mavlyutov <cit.> and A. Petracci <cit.>). However, it turns out that in general there are obstructions to combining these one-parameter families. This was observed by Ilten and C. Turo in <cit.>, which contains a combinatorial description of the cup product.
These results place the deformation theory of smooth complete toric varieties in a strange liminal space: although they can be obstructed, unlike say Calabi-Yau varieties <cit.>, they do not satisfy “Murphy's law” <cit.>, that predicts deformation spaces will have arbitrarily bad singularities. It thus remains an interesting challenge to determine exactly what spaces can occur as the deformation space of a smooth complete toric variety.
§.§ Main results
We now summarize the main results of the paper. Let X=X_Σ be a -factorial toric variety with corresponding fan Σ. See <ref> for notation and details on toric varieties.
Our starting point is the combinatorial description of H^1(X,_X) and H^2(X,_X) from <cit.>.
Indeed, when X is complete, for every k≥ 1 there are isomorphisms
H^k(X,_X)≅⊕_ρ, H^k-1(V_ρ,,),
see prop:cohom2.
Here, ρ ranges over rays of Σ, ranges over characters of the torus, and V_ρ, is a particular simplicial complex contained in Σ, see <ref>.
By the above, locally trivial first order deformations of X and obstructions to lifting locally trivial deformations can be described via cohomology of the simplicial complexes V_ρ,.
Hence, it is natural to try to completely understand the functor _X' in terms of Čech complexes for the V_ρ,.
The fan Σ induces a closed cover _ρ, of each V_ρ, and we will consider the Čech complex Č^∙(_ρ,,) with respect to this cover.
For any local Artinian -algebra A with residue field and maximal ideal _A, we will define a natural map
: ⊕_ρ,Č^0(_ρ,,_A)→⊕_ρ,Č^1(_ρ,,_A).
We then set
_Σ(A) = {α∈⊕_ρ, Č^0(_ρ,,_A): (α)=0 } /∼
where ∼ is a certain equivalence relation. See Definition <ref> for details. This can be made functorial in the obvious way; we call the resulting functor the combinatorial deformation functor.
Our primary result is the following:
thm:main
Let X=X_Σ be a -factorial toric variety without any torus factors and assume that H^1(X,)=H^2(X,)=0, for example, X is a smooth complete toric variety. Then the functor '_X of locally trivial deformations of X is isomorphic to the combinatorial deformation functor _Σ.
By utilizing a comparison theorem, we obtain a similar combinatorial description of _X for complete toric varieties that are smooth in codimension two and -factorial in codimension three, see mildsing for the precise statement.
Our main motivation in introducing the combinatorial deformation functor _Σ was to be able to effectively compute the hull of _X when X=X_Σ is a toric variety with sufficiently mild singularities. Using the combinatorial deformation functor, we introduce the combinatorial deformation equation and show that one can compute a hull of _X by solving this equation to higher and higher order through a combinatorial process, see <ref>. We also show that in some cases, this process can be simplified further by removing certain maximal cones from the fan Σ (thm:samehull) and limiting those pairs of maximal cones on which we must consider obstruction terms (prop:closure and prop:codimone).
Given two first order deformations of X=X_Σ over [t_1]/t_1^2 and [t_2]/t_2^2, the cup product computes the obstruction to combining them to a deformation over [t_1,t_2]/⟨ t_1^2,t_2^2⟩. This cup product has been described in combinatorial terms by Ilten and Turo in <cit.>. Using the functor _Σ, we are able provide explicit combinatorial formulas for the obstructions to lifting deformations to arbitrary order, see thm:obsformula. In particular, we easily recover the cup product formula of <cit.>. In contrast to the situation of the cup product, our higher order obstruction formulas include not only first order combinatorial deformation data, but also higher order data.
The combinatorial deformation functor _Σ exhibits a large amount of structure. Utilizing this structure, we provide non-trivial conditions guaranteeing that X_Σ has unobstructed deformations:
(See thm:unobstructed)
Let X_Σ be a complete toric variety that is smooth in codimension 2 and -factorial in codimension 3. Let consist of all pairs (ρ,) of rays and characters for which H^0(V_ρ,,) does not vanish.
Set
:= { (ρ,+)∈Σ(1)× M |
(ρ,)∈𝒜; ∈∑_(ρ',')∈𝒜_≥0·'
}.
If H^1(V_ρ,,)=0 for all pairs (ρ,)∈, then X_Σ is unobstructed.
In Example <ref>, we give an example of a smooth toric threefold whose unobstructedness follows from our conditions but cannot be deduced by degree reasons alone.
Finally, we put our machinery to use to calculate the hull of _X for numerous examples.
It makes sense to start with examples of low Picard rank.
By analyzing H^1(X,_X) and H^2(X,_X) for X a smooth complete toric variety of Picard rank two we show:
Let X be a smooth complete toric variety of Picard rank one or two. Then X has unobstructed deformations. Furthermore, X is rigid unless it is the projectivization of a direct sum of line bundles on ^1 such that the largest and smallest degrees differ by at least two.
The first interesting case of examples to consider is thus toric threefolds X_Σ of Picard rank three (toric surfaces are always unobstructed by <cit.>). We focus on the case where Σ is a splitting fan, that is, X_Σ is an iterated ^1-bundle.
These ^1-bundles have the form
(__e⊕__e(aF+bH))
for e,a,b∈, e,b≥ 0
where _e=(_^1⊕_^1(e)) is the eth Hirzebruch surface, and F and H respectively represent the classes in (_e) of the fiber and _𝔽_e(1) in the ^1-bundle fibration of _e over ^1.
We discover that such threefolds may have obstruction equations whose lowest terms are quadratic or cubic, or they may be unobstructed:
thm:obstructed
Let
X= (__e⊕__e(aF+bH))
with e,b≥ 0.
Then X is obstructed in exactly the following cases:
* The case e=1, a≤ -2, and b≥ 3-a. In this case, the minimal degree of obstructions is three.
* The case e≥ 2, a≤ -e, and b ≥ 1+2-ae. If a≡ 1 e, then the minimal degree of obstructions is three. In all other cases, the minimal degree of obstructions is two.
By computing the hull of _X for a number of such examples, we find examples whose hulls exhibit the following behaviour:
* The hull has a generically non-reduced component (Example <ref>).
* The hull is irreducible but has a singularity at the origin (Example <ref>).
* The hull has a pair of irreducible components whose difference in dimension is arbitrarily large (Example <ref>).
None of these phenomena had previously been observed for deformation spaces of smooth toric varieties.
In <cit.>, it was asked if the deformation space of a smooth toric variety X is cut out by quadrics. By thm:obstructed, we see that the answer to this question is negative. In particular, any differential graded lie algebra controlling _X cannot be formal.
§.§ Our approach
As mentioned above, for any variety X the functor _X' is controlled by the Čech complex
Č^∙ (,_X) for the tangent sheaf _X with respect to an affine open cover of X. Indeed, isomorphism classes of deformations of X over a local Artinian -algebra A with residue field are given by
_X'(A)≅{x∈Č^1(,_X)⊗_A: x_ij x_jk -x_ik =0}/∼
where _A is the maximal ideal of A, is the Baker-Campbell-Hausdorff (BCH) product, and ∼ is an equivalence relation induced by an action of Čech zero-cochains, see <ref>, <ref>, and <ref> for details.
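For concreteness (a standard specialization, spelled out here for orientation): over A=[t]/t^2 every bracket lies in _A^2=0, so the BCH products reduce to sums, the displayed condition becomes the Čech cocycle condition x_ij+x_jk-x_ik=0, and the equivalence relation becomes addition of coboundaries; one recovers the identification of locally trivial first order deformations with H^1(X,_X) mentioned above.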
More generally, any sheaf of Lie algebras on a topological space X naturally gives rise to a deformation functor _, see liefun1 and <ref>.
The first step in proving thm:main is to replace the tangent sheaf _X by a simpler sheaf of Lie algebras. For -factorial toric X, the generalized Euler sequence gives a surjection
:=⊕_ρ(D_ρ)→_X
where the D_ρ are the toric boundary divisors, see <ref>. Moreover, the sheaf has a natural bracket and the above map is a map of sheaves of Lie algebras (thm:liebracket). This induces a map of functors _→__X≅_X' that is an isomorphism under appropriate cohomological vanishing conditions.
The Lie algebra structure on may be seen as coming from the Cox torsor of X, see Remark <ref>. In fact, this is a manifestation of the fact that, under the hypotheses of thm:main, invariant deformations of the Cox torsor are equivalent to locally trivial deformations of X, see rem:cox2 for an even more general statement. This is very much inspired by the approach of J. Christophersen and J. Kleppe <cit.>.
The second step in proving thm:main involves a homotopy fiber analogue in our setting. The sheaf is a subsheaf of a constant sheaf
which also comes with a natural Lie bracket. Given such an inclusion ↪ of sheaves of Lie algebras on any space X, we construct a new deformation functor _↪ (functornew2) that can be understood as a certain quotient of the deformation functor controlled by the homotopy fiber of the inclusion ↪ (even though the homotopy fiber doesn't exist in the category of sheaves of Lie algebras).
Under appropriate hypotheses, _ and _↪ are isomorphic (compa2 and prop:isosecfunctor).
We believe that this construction will be of independent interest for studying other deformation problems only involving locally trivial deformations.
In our specific setting of locally trivial deformations of a toric variety X_Σ, it turns out that _↪ is exactly the combinatorial deformation functor _Σ from above, and our primary result follows.
§.§ Other related literature and motivation
K. Altmann began a systematic study of the deformation theory of affine toric singularities in the 1990's, giving combinatorial descriptions of first order deformations <cit.>, obstructions <cit.>, and homogeneous deformations <cit.>. For the special case of a Gorenstein toric threefold X with an isolated singularity, he gave a combinatorial description of the hull of _X and its irreducible components <cit.>. This has recently been revisited by Altmann, A. Constantinescu, and M. Filip in <cit.> with similar results in a slightly more general setting using different methods. Filip has also given a combinatorial description of the cup product in the affine case, see <cit.>.
There are numerous motivations for studying deformations of toric varieties, both in the affine and in the global situations; we now mention several. Deformations of toric varieties are useful in mirror symmetry for studying deformations of embedded Calabi-Yau hypersurfaces <cit.> and for classifying deformation families of Fano varieties <cit.>. Deformation theory of toric varieties has been used to study singularities on the boundary of the K-moduli spaces of Fano varieties <cit.>, extremal metrics <cit.>, the boundary of Gieseker moduli spaces <cit.>, and to show that deformations of log Calabi-Yau pairs can be obstructed <cit.>.
We hope that our results here will find similar applications.
§.§ Organization
We now describe the organization of the remainder of this paper. Section <ref> concerns itself with deformation functors governed by a sheaf of Lie algebras. In <ref> and <ref> we recall preliminaries on deformation functors and Baker-Campbell-Hausdorff products, respectively. In <ref> we define the deformation functor controlled by a sheaf of Lie algebras and study various properties of it. This is similar to the contents of <cit.>, but we work with alternating instead of singular or ordered Čech cochains. Subsection <ref> contains the first significant innovation of the paper: our definition and study of the homotopy fiber analogue deformation functor _↪.
In Section <ref> we describe a procedure to algorithmically construct the hull of our homotopy fiber analogue _↪ by iteratively solving a deformation equation. The idea of constructing a hull via iterated lifting goes back at least to <cit.> and is made more algorithmic in certain settings in <cit.>, but our setting is distinct enough that we provide a thorough treatment. We set up the situation in <ref>, introduce the deformation equation in <ref>, and show in <ref> that iteratively solving it produces a hull of our homotopy fiber analogue.
In Section <ref> we turn our attention to toric varieties. We recall preliminaries and set notation in <ref>. We then discuss cohomology of the structure sheaf in <ref>, introduce the simplicial complexes V_ρ, and discuss their relation to boundary divisors in <ref>, and introduce the Euler sequence in <ref>.
In Section <ref> we study deformations of toric varieties. We define the combinatorial deformation functor _Σ in <ref> and prove our main result thm:main. We also discuss connections to Cox torsors. In <ref> we specialize the discussion of <ref> to the toric setting and discuss how to compute the hull of _Σ by solving the combinatorial deformation equation. We discuss formulas for higher order obstructions in <ref>. In <ref> we discuss how to further simplify computations by removing certain cones from the fan Σ.
We prove our criterion for unobstructedness in <ref>.
In <ref> we turn our attention to examples. In <ref> we introduce primitive collections and prove a sufficient criterion for rigidity (lemma:rigid). We show that smooth complete toric varieties of Picard rank less than three are unobstructed in <ref>. In <ref> we study toric threefolds X that are iterated ^1-bundles, obtaining very explicit descriptions of H^1(X,_X) and H^2(X,_X). We continue this study in <ref>, providing several examples whose deformation spaces exhibit interesting behaviour and proving thm:obstructed.
We conclude with three appendices. In Appendix <ref>, we provide a proof of compa which gives a criterion for deformation functors controlled by two different sheaves of Lie algebras to be isomorphic. In Appendix <ref> we state a folklore theorem (CMiso) comparing deformations of a scheme X with deformations of an open subscheme U; for lack of a suitable reference we provide a proof. The theorem implies that in particular, for X a Cohen-Macaulay variety that is smooth in codimension two, deformations of X may be identified with deformations of the non-singular locus.
Finally, in Appendix <ref> we show that the deformation equation of <ref> can in fact be iteratively solved.
For the reader who is interested in understanding the precise definition of the combinatorial deformation functor _Σ and the statement of Theorem <ref> as quickly as possible, we recommend reading <ref>, <ref>, <ref>, <ref>, <ref>, and <ref>.
§.§ Acknowledgements
We thank A. Petracci and F. Meazzini for productive discussions. Both authors were partially supported by NSERC.
§ DEFORMATION FUNCTORS
§.§ Preliminaries
In this section, we will provide a brief review of the definitions and notation concerning functors of Artin rings. For detailed information regarding the functor of Artin rings, we refer to <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>.
Let be an algebraically closed field of characteristic zero. Let be the category of complete local noetherian -algebras with the residue field . For every R∈ we denote by _R the maximal ideal of R. We also consider the subcategory , which consists of local Artinian -algebras with the residue field . We denote by the category of sets.
A functor of Artin rings is a covariant functor : → such that F() is a singleton. For any morphisms A'→ A and A”→ A in , we consider the natural map induced from the fiber product:
(A' ×_A A”) →(A')×_(A)(A”).
A functor of Artin rings is called a deformation functor if:
* (<ref>) is surjective whenever A'→ A is surjective; and
* (<ref>) is bijective whenever A=.
We say is homogeneous if (<ref>) is bijective whenever A'→ A is surjective.
The set
^1 = ([t]/t^2),
known as the tangent space of , exhibits a natural -vector space structure if is a deformation functor.
A small extension in is an exact sequence
0→ I → A' A → 0,
where π is a morphism in and I is an ideal of A' such that _A'· I=0. The A'-module structure on I induces a -vector space structure on I.
Definition <ref> is standard, but there is some inconsistency in the literature about the terminology. We opt to maintain consistency with the terminology used in <cit.>,<cit.>.
An important tool for studying deformation functors is obstruction theory, defined as follows:
A complete obstruction theory (W,ϕ) for a functor of Artin rings consists of a -vector space W, called the obstruction space, and a function ϕ that assigns to every small extension
0→ I → A' A → 0
and any ζ∈(A) an element ϕ(ζ,A')∈ W⊗ I such that:
* (completeness) ϕ(ζ,A')=0 if and only if ζ
lifts to (A'); and
* (functoriality) for every morphism of small extensions
0 [r] I_1 [r] [d, "π_I"] A'_1 [r] [d,"π_A'"] A_1 [r] [d, "π_A"] 0
0 [r] I_2 [r] A'_2 [r] A_2 [r] 0
and ζ∈(A_1), we have
ϕ(π_A(ζ),A_2')= ⊗π_I (ϕ(ζ,A_1')).
Here, is the identity map on W.
rem:vanishing
If ζ lifts to (A'), then the vanishing of ϕ(ζ,A') is an immediate consequence of the functoriality property, see <cit.>.
When comparing two deformation functors, it is beneficial to ensure that their associated obstruction theories are comparable, as precisely defined by the following:
obstrutionmapb/w
Let (,ob_, ϕ_) and (,ob_,ϕ_) be functors of Artin rings together with associated complete obstruction theories, and let f: → be a morphism of functors. We say that a linear map ob_f: ob_→ ob_ is an obstruction map for f if it is compatible with the obstruction theories of and . In other words, for every small extension
0→ I → A' A → 0
and any ζ∈(A), we have
ϕ_(f(ζ),A')= ob_f (ϕ_(ζ,A')).
Recall that a map of functors of Artin rings → is smooth if for every surjection A'→ A in , the induced map
(A')→(A)×_(A)(A')
is surjective.
We will frequently make use of the standard smoothness criterion to show that a morphism of functors is surjective:
standardsmooth
Let (,ob_, ϕ_) and (,ob_,ϕ_) be deformation functors together with associated complete obstruction theories, and let f: → be a morphism of functors that admits an obstruction map ob_f: ob_→ ob_. Assume that:
* ^1→^1 is surjective; and
* ob_f is injective.
Then f is smooth. In particular, it is surjective.
§.§ The Baker-Campbell-Hausdorff product
In this section, our main aim is to lay down the necessary groundwork for defining the deformation functors in <ref> and <ref>. We will begin by giving an overview of the definition of the Baker-Campbell-Hausdorff product for a Lie algebra. Additionally, we will present proofs of several basic properties that will be useful in the subsequent sections. For more details about Lie algebras, we refer to <cit.>, <cit.>.
Let be a Lie algebra. By possibly embedding into a universal enveloping algebra (see <cit.>), we can always assume that the Lie bracket on is given by the commutator in some associative algebra. Through this embedding into an associative algebra, we define exp(x) for any A∈ and x∈⊗_A using the formal power series of the exponential map. These maps are clearly convergent in the _A-adic topology.
The Baker-Campbell-Hausdorff (BCH) product x y of x,y ∈⊗_A is defined by the relation
exp(x)·exp(y)=exp(x y).
The non-linear terms of x y can be expressed as nested commutators of x and y with rational coefficients (see e.g. <cit.>). The expression in nested commutators is not unique, and we will call any expression of this form a BCH formula. The first few terms are easily calculated and well known:
x y= x+y+1/2[x,y]+1/12[x,[x,y]]-1/12[y,[x,y]]+ (higher order terms).
An explicit and complete description of a BCH formula is provided by <cit.>
x y=∑ _n=1^∞(-1)^n-1/n∑ _[ r_1+s_1>0; ⋮; r_n+s_n>0 ][x^r_1y^s_1x^r_2y^s_2… x^r_ny^s_n]/(∑ _j=1^n(r_j+s_j))·∏ _i=1^nr_i!s_i!.
In this expression, the summation extends to all nonnegative integer values of s_i and r_i, and the following notation is utilized:
[x^r_1y^s_1… x^r_ny^s_n]=[x,[x,… [x^r_1,[y,[y,… [y ^s_1, … [x,[x,… [x ^r_n,[y,[y,… y ^s_n]]… ]]
with the definition [x] := x.
For x,y,z ∈⊗_A, it is immediate to see that
(x y) z= x (y z); x 0=0 x=x; x (-x)=0,
and (⊗_A,) is a group (see also <cit.>, <cit.>).
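For the reader who wishes to experiment, the following Python sketch (ours, purely illustrative and not part of the paper) checks the truncation displayed above in a setting where every series terminates: for strictly upper-triangular 4×4 matrices all brackets of length at least four vanish, so log(exp(x)·exp(y)) can be computed exactly and compared with the terms written out above:

import numpy as np

def mat_exp(a, n):
    # exp(a) for a nilpotent n x n matrix a; the series terminates
    result, term = np.eye(n), np.eye(n)
    for k in range(1, n):
        term = term @ a / k
        result = result + term
    return result

def mat_log(m, n):
    # log(m) for m = identity + nilpotent; the series terminates
    a = m - np.eye(n)
    result, term = np.zeros((n, n)), np.eye(n)
    for k in range(1, n):
        term = term @ a
        result = result + ((-1) ** (k + 1)) * term / k
    return result

def bracket(x, y):
    return x @ y - y @ x

n = 4
rng = np.random.default_rng(0)
x = np.triu(rng.normal(size=(n, n)), k=1)   # strictly upper-triangular, hence nilpotent
y = np.triu(rng.normal(size=(n, n)), k=1)
bch = mat_log(mat_exp(x, n) @ mat_exp(y, n), n)
truncation = (x + y + bracket(x, y) / 2
              + bracket(x, bracket(x, y)) / 12 - bracket(y, bracket(x, y)) / 12)
# brackets of length >= 4 vanish here, so the difference is of the order of machine precision
print(np.max(np.abs(bch - truncation)))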
We will repeatedly use the following result to establish other calculations.
comm
Let be a Lie algebra. Given a small extension
0→ I → A' → A → 0
in and elements x ∈⊗_A', w ∈⊗ I we have the following:
* x w = w x = x+w;
* (-x) w x= w.
Furthermore, if x,y,z ∈⊗_A' such that x y z ∈⊗ I then
* x y z= y z x= z x y.
Since x· w=0, the BCH product reduces to exp(x)·exp(w)=exp(x+w), and the proof of claim <ref> follows. Through an immediate application of claim <ref>, we have
(-x) w x = (-x) x w=w.
This establishes claim <ref>. By applying claim <ref> of the lemma, we obtain
x y z =(-x) (x y z) x
= y z x .
By symmetry, we also obtain y z x= z x y, and claim <ref> follows.
Now, we will show that commutes with fiber products in . Let x'∈⊗_A' and A' A be a morphism in .
For convenience, we also refer to the map ⊗π': ⊗_A'→⊗_A as π' when the context is clear. Since is A'-linear, it is immediate that
π'(x' y')= π'(x') π'(y').
BCHfiberprod
Let A' A, A” A be morphisms in . Consider x',y' ∈⊗_A' and x”,y”∈⊗_A” with
π'(x') = π”(x”), π'(y') = π”(y”).
Then (x',x”), (y',y”)∈⊗_A'×_A A” and
(x',x”) (y',y”)= (x' y', x” y”)
Consider the following commutative diagram:
⊗_A'×_A A”[r,"pr_2"] [d,"pr_1"] ⊗_A”[d, "π”"]
⊗_A'[r,"π'"] ⊗_A
The proof is a diagram chase in the diagram above, using the fact that commutes with pr_1, pr_2, π', π”, which follows from (<ref>).
Using we will define maps that generalize the differential of Čech complexes. These new maps will be used to define the deformation functors in <ref>.
Let X be a topological space, and ={U_i} be an open or closed cover of X. For any sheaf of abelian groups on X, we denote by Č^∙_(,) the Čech complex of singular cochains with respect to the cover . It is often advantageous to consider the subcomplex Č^∙(,) ⊆Č^∙_(,) of alternating Čech cochains. We will present our results in terms of alternating cochains whenever possible. Let
d:Č^k(,)→Č^k+1(,)
denote the Čech differential, let Ž^k(,) denote the group of k-cocycles, and let Ȟ^k(,) denote the k-th Čech cohomology group. In this section we will only consider Čech complexes for an open cover , but in <ref> we will also consider Čech complexes for a constant sheaf on a simplicial complex with respect to a closed cover.
For the remainder of this section, we take to be a sheaf of Lie algebras on a topological space X, and ={U_i} to be an open cover of X. For any open set U⊆ X, H^0(U,) is a Lie algebra. Hence, we can apply all the discussions presented at the beginning of this section to H^0(U,).
action
Let A∈. We define a left action of the group (Č^0(,)⊗_A,) on the set Č^1_(,)⊗_A given by
Č^0(,)⊗_A ×Č^1_(,)⊗_A →Č^1_(,)⊗_A
(a,x) ↦ a⊙ x
via
(a⊙ x)_ij= a_i x_ij -a_j.
If x∈Č^1(,)⊗_A,
it can be readily verified that
(a⊙ x)_ij (a⊙ x)_ji=0
for all a∈Č^0(,)⊗_A. Therefore, according to (<ref>), we have (a⊙ x)_ji=-(a⊙ x)_ij. As a result, action can be restricted to alternating cochains.
defn:omaps
Let A∈. We use the product to define the maps
:Č^0(,)⊗_A →Č^1(,)⊗_A,
:Č^1(,)⊗_A →Č^2_(,)⊗_A
via
(a)_ij =a_i -a_j,
(x)_ijk =x_ij x_jk -x_ik.
obs:alt
For every a ∈Č^0(,)⊗_A, we have
(a)_ij(a)_ji=0.
Since (H^0(U_i∩ U_j,)⊗_A, ) is a group, we can conclude that (a)_ji=-(a)_ij and
(a) ∈Č^1(,)⊗_A. However, it is important to note that (x) may not be alternating.
rem:obs
The maps and exhibit several notable properties.
* Both and , as well as the action described in action, are compatible with the morphisms in , as can be seen from (<ref>).
* Compatibility with the fiber products in holds for both and , as implied by BCHfiberprod.
*
Consider a small extension
0→ I→ A' → A→ 0.
Let x ∈Č^1(,)⊗_A'.
For any y∈Č^1(,)⊗ I, we have
(x+y)=(x)+d(y).
This result follows by a repeated application of claims <ref> and <ref> of comm.
A similar result holds for .
*
For every a ∈Č^0(,)⊗_A, we observe that ((a)) is zero. This follows from a direct computation, utilizing the fact that (a) is alternating.
* If (x)=0, then (a⊙ x)=0 for every a∈Č^0(,)⊗_A.
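To make the action of action and the two maps of defn:omaps concrete, here is a small Python sketch (ours; the abstract cover, the choice of the Lie algebra of 3×3 strictly upper-triangular matrices, and all function names are illustrative assumptions, not data from the paper). For this Lie algebra the BCH product is exactly x+y+(1/2)[x,y]; the script builds a 1-cochain in the image of the first map of defn:omaps, checks that the second map annihilates it, and checks that this persists after acting by an arbitrary 0-cochain, in line with the last two items of rem:obs:

import itertools
import numpy as np

def heis(p, q, r):
    # a 3 x 3 strictly upper-triangular matrix
    return np.array([[0.0, p, r], [0.0, 0.0, q], [0.0, 0.0, 0.0]])

def bch(x, y):  # exact here, since all longer brackets vanish
    return x + y + 0.5 * (x @ y - y @ x)

def dbar(b, n):  # (dbar b)_{ij} = b_i * (-b_j)
    return {(i, j): bch(b[i], -b[j]) for i, j in itertools.permutations(range(n), 2)}

def obs(x, n):  # obs(x)_{ijk} = x_ij * x_jk * (-x_ik)
    return {(i, j, k): bch(bch(x[i, j], x[j, k]), -x[i, k])
            for i, j, k in itertools.permutations(range(n), 3)}

def act(a, x, n):  # (a . x)_{ij} = a_i * x_ij * (-a_j)
    return {(i, j): bch(bch(a[i], x[i, j]), -a[j])
            for i, j in itertools.permutations(range(n), 2)}

rng = np.random.default_rng(1)
n = 4                               # we pretend every intersection of the cover is nonempty
b = [heis(*rng.normal(size=3)) for _ in range(n)]
a = [heis(*rng.normal(size=3)) for _ in range(n)]
x = dbar(b, n)                      # a "coboundary", so obs(x) = 0
print(max(np.abs(v).max() for v in obs(x, n).values()))             # ~ 0
print(max(np.abs(v).max() for v in obs(act(a, x, n), n).values()))  # ~ 0: gauging preserves cocycles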
§.§ Deformation functors governed by the sheaves of Lie algebras
In this section, we employ sheaves of Lie algebras as a source for constructing deformation functors. In <cit.>, the authors consider the deformation functors controlled by semicosimplicial Lie algebras (see also <cit.>). As a special case, this includes semicosimplicial Lie algebras arising from the singular Čech complex of a sheaf of Lie algebras (see <cit.>). Since we are working primarily with the alternating Čech complex, we adapt some of the arguments of <cit.> to our setting.
liefun1
Let be a sheaf of Lie algebras on a topological space X, and let be an open cover of X.
We define the functor _,: → on objects as follows:
_,(A)= {x ∈Č^1(,)⊗_A: (x)=0 }/∼
where ∼ is the equivalence relation induced by the action of Č^0(,)⊗_A on Č^1(,)⊗_A as given by action. The functor sends a morphism
π': A' → A to the map
_,(π'): _,(A') →_,(A)
induced by the map Č^1(,)⊗_A'→Č^1(,)⊗_A. For convenience, we also refer to _,(π') as π'.
By claims <ref> and <ref> of rem:obs, it is evident that the functor is well-defined on both objects and morphisms. Given that we are working over alternating cochains instead of singular cochains (cf. <cit.>), the following lemma is needed for discussing an obstruction theory of _,.
obsL
Let A∈ and x∈Č^1(,)⊗_A such that (x)=0. Let
0→ I → A' → A → 0
be a small extension in , and let x'∈Č^1(,)⊗_A' be any lift of x. Then:
* (x') is an alternating 2-cocycle;
* for any other lifting y'∈Č^1(,)⊗_A' of x, we have
(y')=(x')+d(y'-x');
* (a'⊙ x')=(x') for every a'∈Č^0(,)⊗_A'.
First, we will prove that (x') is a 2-cocycle. To do that, we need to show the cocycle condition
-(x')_ijk+ (x')_ijℓ-(x')_ikℓ+(x')_jkℓ=0
for all U_i,U_j,U_k,U_ℓ in . It is immediate that (x')∈Č^2(,)⊗ I, since its image in Č^2(,)⊗_A is zero. Hence, by comm<ref>, it is enough to show that
-(x')_ijk(x')_ijℓ -(x')_ikℓ(x')_jkℓ=0.
Note that,
-(x')_ijk(x')_ijℓ -(x')_ikℓ(x')_jkℓ = x'_ik-x'_jk-x'_ij
x'_ij x'_jℓ -x'_iℓ
x'_iℓ-x'_kℓ-x'_ik
x'_jk x'_kℓ -x'_jℓ.
Since (x') ∈Č^2(,)⊗ I, we can apply comm<ref>. By permuting the terms, it is evident that the right-hand side of the above equation is zero. Thus, (<ref>) and (<ref>) follows.
We will now show that (x') is alternating as well. It is enough to show that
(x')_iik=0; (x')_ijk=- (x')_jik; (x')_ijk= -(x')_ikj.
Since x' is alternating, it is straightforward to verify that
(x')_iik= x'_ii x'_ik -x'_ik=0.
Similarly, using x' is alternating, we obtain that
(x')_ijk(x')_ikj=x'_ij x'_jk-x'_ik x'_ik x'_kj-x'_ij=0.
Thus, we obtain (x')_ijk=-(x')_ikj.
By applying comm<ref> (since (x')∈Č^1(,)⊗ I), and using the fact that x' is alternating, we obtain
(x')_ijk(x')_jik= x'_ij x'_jk -x'_ik x'_ji x'_ik -x'_ik=0.
Hence, (x')_ijk=-(x')_jik and
this completes the proof of claim <ref>.
To show the claim <ref>, note that, y'-x' ∈Č^1(,)⊗ I, since its image in Č^1(,)⊗_A is zero. Hence,
(y') =(x'+ y'-x')
=(x')+d(y'-x').
Here, the last equation follows from rem:obs<ref>.
Finally, claim <ref> follows by an application of comm<ref>, since (x')∈Č^2(,)⊗ I.
thm:deffunctor1
Let be a sheaf of Lie algebras on a topological space X, and let be an open cover of X. Then:
* _, is a deformation functor;
* the tangent space of the functor _, is isomorphic to Ȟ^1(,);
* _, has a natural complete obstruction theory (Ȟ^2(,), ϕ), where for a given small extension
0→ I → A'→ A → 0,
and ζ∈_,(A), the obstruction ϕ(ζ,A') is defined to be the class of (x') in Ȟ^2(,). Here, x'∈Č^1(,)⊗_A' is any lift of x∈Č^1(,)⊗_A which represents ζ.
While one could explicitly establish claim <ref> using liefun1, we opt to provide a proof similar to that of <cit.>. However, we cannot directly apply the result of <cit.> since we are dealing with alternating cochains. We define two functors _,: → and Č^0_,:→ on objects as follows:
_,(A) = {x ∈Č^1(,)⊗_A: (x)=0 };
Č^0_,(A) =Č^0(,)⊗_A.
Here, denotes the category of groups, and the group structure on Č^0(,)⊗_A is defined by .
It is immediate that Č^0_, is a smooth homogeneous functor (see also <cit.>). From rem:obs<ref>, we infer that _, is also homogeneous and indeed a deformation functor. Since
_,=_,/Č^0_,,
claim <ref> follows by <cit.>. The proof of claim <ref> is a direct calculation, and it follows from the discussion in <cit.> by replacing singular cochains with alternating cochains. Finally, for the proof of claim <ref>, Lemma <ref> allows us to work with alternating cochains, and the argument follows the discussion in <cit.>.
Next, we will study the morphisms of deformation functors governed by sheaves of Lie algebras. Let f:→ be a morphism of sheaves of Lie algebras on X. Let be an open cover of X and let be a refinement of . Then f induces a morphism of functors f:_,→_,.
compa
Let f:→ be a morphism of sheaves of Lie algebras on X. Let be an open cover of X, and let be a refinement of . Assume that the induced morphism of Čech cohomology f:
Ȟ^i(,) →Ȟ^i(,) is surjective for i=0, bijective for i=1, and injective for i=2.
Then the induced morphism of functors f:_,→_, is an isomorphism.
We defer a proof of compa until Appendix <ref>.
cor:comparison
Let X be a separated scheme of finite type over , let , be affine open covers of X, and let be a quasi-coherent sheaf of Lie algebras on X. Then
_,≅_,.
Čech cohomology coincides with sheaf cohomology for any affine open cover <cit.>, and the proof directly follows from compa.
In the next sections, we work over affine open coverings and in light of cor:comparison, we suppress and write _ for the functor _,.
rem:applications
The functors _ are useful to study several geometric deformation problems. We list some of them here.
* Let X be variety over , and let _X and '_X denote the functor of deformations of X and locally trivial deformations of X respectively. When X is smooth they coincide. The locally trivial deformations of X are governed by the tangent sheaf _X and
__X'_X
is an isomorphism of functors, see <cit.>. We will study this situation in detail in <ref> when X is a -factorial toric variety.
* Deformations of a vector bundle ℰ on a scheme X over are governed by the endomorphism bundle ℰnd(ℰ), see <cit.>.
* Let Z ⊂ X be a closed subscheme of a smooth variety X. Let _(X,Z) and '_(X,Z) respectively denote the functors of deformations and locally trivial deformations of the pair (X,Z). When Z is smooth the above functors coincide. Locally trivial deformations of the pair (X,Z) are governed by the logarithmic tangent sheaf _X(-log Z), see <cit.> and <cit.>.
§.§ A homotopy fiber analogue
In this section, we continue to study the deformation functor _ governed by a quasi-coherent sheaf of Lie algebras on a separated scheme of finite type over . We introduce new deformation functors that arise from an injective morphism of sheaves of Lie algebras →, see functornew.
A significant result of our paper is the proof that these new functors are isomorphic to _ under the assumption that H^1(,) vanishes, see compa2.
Let X be a separated scheme of finite type over , and fix an affine open covering of X.
Let ι:→ be an injective morphism of quasi-coherent sheaves of Lie algebras on X. Then we have an exact sequence of quasi-coherent sheaves on X given by
0 [r] [r,"ι"] [r,"λ"] / [r] 0.
It is worth noting that / is not a sheaf of Lie algebras in general.
The exact sequence (<ref>) induces a map of Čech complexes given by
0 [r] Č^∙(,) [r,"ι"] Č^∙(,) [r,"λ"] Č^∙(,/) [r] 0.
We will use d to refer to the differential of any one of these complexes. Note that ι and λ are compatible with d.
Since ι is injective, we will use
the notation ι^-1 to denote the map
ι(Č^∙(,)) →Č^∙(,)
inverse to ι.
Let be a quasi-coherent torsion-free sheaf of Lie algebras on a variety X over . In this case, we can always construct an exact sequence of the form (<ref>) as follows. We fix an affine open covering ={U_i} of X and choose a non-empty open set V⊆⋂ U_i. Using the injection ι: V ↪ X, we define
=ι_*(_|V).
Since is a torsion-free sheaf, the induced map → is injective. Moreover, the Lie bracket on induces one on .
functornew
In the situation (<ref>), we define the functor _↪:→ on objects as follows:
_↪(A) = {α∈Č^0(, )⊗_A : λ((α))=0 } /∼
where α=β in _↪(A) if and only if there exists a ∈Č^0(,)⊗_A such that
ι(a)⊙(α) = (β).
The functor sends a morphism
π: A' → A to the map
_↪(π): _↪(A') →_↪(A)
induced by the map Č^0(,)⊗_A'→Č^0(,)⊗_A.
In a sense that we will not make precise here, the functor _↪ can be viewed as a quotient of the deformation functor controlled by the homotopy fiber of ↪. For a similar construction in the DGLA setting, see <cit.>.
Before studying properties of the functor _↪, we define a related functor _↪. For this, we need to fix the following data: for each affine open set U⊆ X obtained by intersecting elements of , we fix a -linear section
s: (/)(U) →(U)
to λ, that is, λ∘ s: (/)(U)→ (/)(U) is the identity map.
For every A∈, we thus have the following commutative diagram:
Č^0(,)⊗_A [r,hook,"ι"] [d,"" left] Č^0(,)⊗_A [r,twoheadrightarrow,"λ"below] [d,"" left] Č^0(,/)⊗_A [l, bend right=20,"s" above]
Č^1(,)⊗_A [r,hook,"ι"] [d,"" left] Č^1(,)⊗_A[r,twoheadrightarrow,"λ"] [d,"" left] Č^1(,/)⊗_A [l, bend right=20,"s" above]
Č^2(,)⊗_A[r,hook,"ι"] Č^2(,)⊗_A [r,twoheadrightarrow,"λ"] Č^2(,/)⊗_A .
equation1
Map between Čech complexes involving the maps and
functornew2
In the situation (<ref>) with a fixed collection of sections s as above, we define the functor _↪:→ on objects as follows:
_↪(A) = {α∈Č^0(, /)⊗_A : λ((s(α)))=0 } /∼
where α∼β in _↪(A) if and only if there exists a∈Č^0(,)⊗_A such that
ι(a)⊙(s(α)) = (s(β)).
The functor sends a morphism
π: A' → A to the map
_↪(π): _↪(A') →_↪(A)
induced by the map Č^0(,/)⊗_A'→Č^0(,/)⊗_A.
In the remainder of this section, we show that _↪ and _↪ are isomorphic deformation functors, and they are furthermore isomorphic to _ if Ȟ^1(,)=0.
We first observe that the equivalence relation in functornew can also be defined by a group action.
alternatedefn
We claim that the equivalence relation in functornew can be expressed as follows: α = β in _↪(A) if and only if there exist a ∈Č^0(, ) ⊗_A and γ∈Ȟ^0(, ) ⊗_A such that
ι(a) α -γ = β.
Indeed, note that γ∈Ȟ^0(, ) ⊗_A is equivalent to γ∈Č^0(, ) ⊗_A and γ_i -γ_j=0 for all U_i,U_j ∈. Thus, (<ref>) and γ∈Ȟ^0(, ) ⊗_A imply that
β_i -β_j =ι(a)_iα_i -γ_iγ_j -α_j -ι(a)_j
=ι(a)_iα_i -α_j -ι(a)_j.
Hence, we obtain
ι(a) ⊙(α) = (β).
Moreover, we note that if λ((α))=0, there exists x∈Č^1(,)⊗_A such that ι(x)=(α). Hence, we obtain
λ((β))= λ(ι(a⊙ x))=0.
Conversely, if
ι(a) ⊙(α) = (β),
by setting γ = -βι(a) α, we can see that
ι(a) α -γ = β, γ∈Ȟ^0(, ) ⊗_A .
To define an obstruction theory for the functor _↪, we require the following proposition.
prop:cocycle:fun2
Let α∈Č^0(,)⊗_A such that λ((α))=0, where A∈. Let
0→ I → A' → A → 0
be a small extension, and let α'∈Č^0(,)⊗_A' be any lift of α. Then:
* λ((α')) is an alternating 1-cocycle;
* for any other lift ξ'∈Č^0(,)⊗_A' of α, we have
λ((ξ'))=λ((α'))+ d(λ(ξ'-α'));
* for any a'∈Č^0(,)⊗_A', we have
λ(ι(a')⊙(α'))= λ((α')).
By obs:alt, λ((α')) is an alternating cochain. We will show that it is a cocycle as well. Since λ((α))=0, there exists x ∈Č^1(,)⊗_A such that ι(x)=(α). Furthermore, λ((α'))∈Č^1(,/)⊗ I, since the image of λ((α')) in Č^1(,/)⊗_A is λ((α))=0. We set
η'=s(λ((α'))) ∈Č^1(,)⊗ I.
To prove claim <ref>, it is enough to show that d(λ(η'))=0. Since λ((α')- η')=0, there exists x'∈Č^1(,)⊗_A' such that ι(x')= (α')- η'. By the commutativity of Diagram <ref>, we have
ι((x')) = (ι(x'))
= ((α')- η')
= ((α'))- d(η')
=-d(η').
Here, the penultimate equation follows from rem:obs<ref>, and the last equation follows from rem:obs<ref>. Then it follows that
d(λ(-η'))=λ(d(-η'))= λ(ι((x')))=0,
completing the proof of claim <ref>.
Now we will prove claim <ref>. Since α',ξ' are liftings of α, we obtain ξ'-α'∈Č^0(,)⊗ I. Then we have
(ξ') = (α'+ξ'-α')
= (α')+d(ξ'-α')
Here, the last equation follows from rem:obs<ref>. By applying λ, and using the fact that λ commutes with d we obtain
λ((ξ'))=λ((α'))+ d(λ(ξ'-α')).
Now we will prove claim <ref>. As in the proof of claim <ref>, we set η'=s(λ((α'))) and find x' such that
ι(x') = (α')- η'
= (α') -η'.
Here, the last equation follows by an application of comm<ref>.
Then for any a'∈Č^0(,)⊗_A', we have
ι(a'⊙ x') = ι(a')⊙ι(x')
= ι(a')⊙ ((α')-η')
= (ι(a')⊙(α'))-η'.
Here, the last equation follows by an application of comm<ref>.
By applying λ to this equation we obtain
0 = λ(ι(a')⊙(α'))-λ(η')
=λ(ι(a')⊙(α'))- λ( (α')).
showdeffun2
Let X be a separated scheme of finite type over , and let be an affine open cover of X. Consider sheaves of Lie algebras and on X which fit into an exact sequence of the form (<ref>). Let s be a collection of sections as in (<ref>). Then:
* _↪ is a deformation functor;
* the map
λ: ^1_↪=_↪([t]/t^2) →Ȟ^0(,/)/ λ(Ȟ^0(,))⊗ t
is an isomorphism with inverse s;
* the functor _↪ has a natural complete obstruction theory
(Ȟ^1(,/), ϕ),
where for a given small extension
0→ I → A'→ A → 0,
and ζ∈_↪(A), the obstruction ϕ(ζ,A') is defined to be the class of λ((α')) in Ȟ^1(,/)⊗ I. Here, α'∈Č^0(,)⊗_A' is any lift of α∈Č^0(,)⊗_A which represents ζ.
The proof of claim <ref> is similar to the proof of Theorem <ref>.
We define two functors _↪:→ and C_↪: → on objects as follows:
_↪(A) ={α∈Č^0(,)⊗_A: λ((α))=0};
C_↪(A) ={(a,γ)∈ (Č^0(,)⊗_A) × (Ȟ^0(,)⊗_A)}.
Here, again denotes the category of groups, and the group structure on C_↪(A) is defined by . Since λ is _A-linear for any A∈, it commutes with fiber products in . Additionally, as remarked in rem:obs<ref>, also commutes with fiber products in . Hence, the functor _↪ is homogeneous. Also, it is immediate that C_↪ is a smooth homogeneous functor (see also <cit.>).
We claim that there exists a left action of the functor C_↪ on _↪ as follows:
C_↪(A) ×_↪(A) →_↪(A),
((a,γ),α)↦ι(a)α -γ.
By alternatedefn, we have ι(a)α -γ∈_↪(A). It is immediate to verify that (<ref>) is a group action, and _↪ is the quotient of _↪ by C_↪.
Thus, by <cit.>, claim <ref> follows.
Now we will compute the tangent space of _↪. We have
^1 _↪ = {α∈Č^0(, ) ⊗ t : λ((α)) = 0 t^2 } / ∼
= {α∈Č^0(, ) ⊗ t : λ(d(α)) = 0 } / ∼
= {α∈Č^0(, ) ⊗ t : d(λ(α)) = 0 } / ∼.
In order to show the claim <ref>, notice that there are -linear maps given by
λ: {α∈Č^0(, ) ⊗ t : d(λ(α)) = 0 }→ H^0(, / )⊗ t,
and
s: H^0(, / )⊗ t →{α∈Č^0(, ) ⊗ t : d(λ(α)) = 0 }.
Since λ∘ s is the identity map, the above map λ in (<ref>) is surjective. Here, α∼ 0 in ^1_↪ if and only if there exist a ∈Č^0(, ) ⊗ t and γ∈Ȟ^0(, ) ⊗ t such that
ι(a) + α -γ= 0.
By applying d, we obtain ι(a) + α∈ H^0(, ) ⊗ t and so
λ(α)=λ(ι(a) + α) ∈λ(H^0(, ) ⊗ t).
It follows that α∼ 0 if and only if λ(α)∈λ(H^0(, ) ⊗ t).
Thus, the surjective map (<ref>) induces an isomorphism
λ: ^1 _↪→ H^0(, / )/λ(H^0(, )) ⊗ t
with inverse s.
Now we will show that _↪ carries a complete obstruction theory with values in Ȟ^1(,/). Let ζ∈_↪(A) be represented by α∈Č^0(, )⊗_A and let
0→ I → A' → A → 0
be a small extension. By prop:cocycle:fun2<ref>, λ((α')) defines an element in Ȟ^1(,/)⊗ I, where α'∈Č^0(,)⊗_A' is any lift of α.
By claims <ref> and <ref> of Proposition <ref>, it is evident that φ(ζ,A') is independent of the choice of lifting and the choice of representative of ζ. The functoriality property of the obstruction theory follows from rem:obs<ref>.
It remains to check the completeness of this obstruction theory. Assume that φ(ζ,A')=0. Then for any representative α of ζ and any lifting α'∈Č^0(, )⊗_A', there exists ξ'∈Č^0(,/)⊗ I such that d(ξ')= λ((α')). Then α'-s(ξ') is also a lift of α and
λ((α'-s(ξ'))) =λ((α'))-λ(d(s(ξ')))
=λ((α'))-d(ξ')
=0.
Here, the first equation follows from rem:obs<ref>, and the second follows from the fact that λ commutes with d and λ∘ s is the identity. Combining this with Remark <ref>, the completeness of the obstruction theory follows.
Now we will examine the relationship between the deformation functors _ and _→. Let α∈Č^0(, )⊗_A satisfy λ((α))=0. Then, there exists a unique x∈Č^1(, )⊗_A such that ι(x)=(α). Since ((α))=0, as noted in rem:obs<ref>, by considering the commutativity of Diagram <ref> we can deduce that (x)=0. This mapping α↦ x=ι^-1((α)) induces a map
ι^-1∘: _→→_,
since it is well-behaved under the equivalence relations defined on _ and _→. In fact, for any a∈Č^0(,)⊗_A, we have
ι(a)⊙(α) = ι(a)⊙ι(x)
= ι(a⊙ x).
compa2
If Ȟ^1(,)=0, then the morphism of functors
ι^-1∘ :_↪→_
is an isomorphism.
First we will show the injectivity of ι^-1∘ :_↪→_. Let A∈, and let α,β∈Č^0(,)⊗_A be representatives of two elements in _↪(A) and
ι^-1∘ (α)= x; ι^-1∘ (β)=y.
From liefun1 and functornew, it is immediately seen that if x∼ y, then α∼β.
We will show that ι^-1∘ :_↪→_ is smooth; surjectivity then follows. In order to show that ι^-1∘ is smooth, we will verify that it satisfies the assumptions of the standard smoothness criterion (standardsmooth).
We have that _↪ and _ are deformation functors (see showdeffun2<ref> and thm:deffunctor1<ref> respectively). Next, we will show the surjectivity of ^1_↪→^1_ and the existence of an injective obstruction map.
Since Ȟ^1(,)=0, the long exact sequence of Čech cohomology yields that the boundary map δ:Ȟ^0(,/) →Ȟ^1(,) is surjective, and the boundary map δ: Ȟ^1(,/)→Ȟ^2(,) is injective. Here, the boundary map δ can be expressed as ι^-1∘ d ∘ s.
By thm:deffunctor1<ref> and showdeffun2<ref>, we have
H^0(, / ) ⊗ t ↠_↪([t]/t^2) →_([t]/t^2)=Ȟ^1(,) ⊗ t.
The composite of these maps is identified with the boundary map δ, which is surjective. Hence, we obtain that ^1 _↪→^1 _ is surjective.
We will now show that δ: Ȟ^1(,/)→Ȟ^2(,) is an obstruction map for the morphism of functors ι^-1∘.
Let α∈Č^0(,)⊗_A represent an element in _↪(A) and map to an element in _(A), represented by x=ι^-1((α)). Consider the small extension
0 → I → A' → A → 0,
and let α'∈Č^0(,)⊗_A' be a lift of α. By prop:cocycle:fun2<ref>, we have
ξ':=λ((α')) ∈Ž^1(,/)⊗ I.
Moreover, we have
λ((α')- s(ξ'))=0. Hence, there exists x'∈Č^1(,)⊗_A' such that ι(x')= (α')- s(ξ'). By the commutativity of Diagram <ref> and by rem:obs<ref> and rem:obs<ref>, we have ι((x'))= -d(s(ξ')). Hence, (x') and δ(ξ')=ι^-1(d( s(ξ'))) define the same class in Ȟ^2(,)⊗ I.
Therefore, by the above discussion, the morphism of functors ι^-1∘ satisfies the assumptions in the standard smoothness criterion (standardsmooth) and ι^-1∘ is smooth.
rem:H1vanishing
Let be a sheaf of Lie algebras on a variety X over , and let ={U_i} be an affine open cover of X. If constructed as described in Remark <ref>, we claim that Ȟ^1(,)=0. Since has a finite subcover, and Čech cohomology coincides with the sheaf cohomology (<cit.>), we may assume that is a finite cover. As is constant on the cover , the claim Ȟ^1(,)=0 follows from the contractibility of a simplex.
In situations where we have a good understanding of the quotient sheaf /, we will use the functor _↪ (functornew2) instead of _↪ (functornew). Given a representative α∈Č^0(,/)⊗_A of an element in _↪(A), it is clear that s(α)∈Č^0(,)⊗_A represents an element in _↪(A). In fact, this induces a morphism of functors
s: _↪→_↪,
since the mapping on the level of cochains is well-behaved under the equivalence relations on _↪ and _↪. In the following theorem, we will show that this map is an isomorphism.
prop:isosecfunctor
s: _↪→_↪ is an isomorphism.
Injectivity is an immediate consequence of the definitions of the functors _↪ and _↪.
To show surjectivity, instead of working with the functor _↪, we consider the functor _↪:→ defined on objects as:
_↪(A) = {α∈Č^0(, /)⊗_A : λ((s(α)))=0 }.
Then, we have the following commutative diagram:
_↪[rr,"s∘ q"] [dr,"q"] _↪
_↪[ur, hook, "s"]
Here, q is the quotient map. We will show that s∘ q:_↪→_↪ is smooth by proving that it satisfies the assumptions of the standard smoothness criterion (standardsmooth). By showing the smoothness of s∘ q:_↪→_↪, we can infer the surjectivity of both
s∘ q:_↪→_↪ and s:_↪→_↪.
Since λ and the section s on the cochains are _A-linear for any A∈, they clearly commute with fiber products in . Additionally, as remarked in rem:obs<ref>, also commutes with fiber products in . Hence, the functor _↪ is homogeneous, and indeed a deformation functor. Moreover, by showdeffun2<ref>, _↪ is also a deformation functor.
We will compute the tangent space of _↪. We have
^1 _↪ ={α∈Č^0(,/)⊗ t: λ((s(α)))=0 t^2 }
={α∈Č^0(,/)⊗ t: λ(d(s(α)))=0}
={α∈Č^0(,/)⊗ t: d(α)=0}
=Ȟ^0(,/)⊗ t.
Combining with showdeffun2<ref>, we have
Ȟ^0(,/)⊗ t = ^1 _↪→^1 _↪≅ H^0(, / )/λ(H^0(, ))⊗ t.
Since λ∘ s is the identity on Ȟ^0(,/)⊗ t, we obtain that
s∘ q:^1 _↪→^1 _↪ is surjective.
It is clear from Proposition <ref> and Theorem <ref> that the identity map on H^1(,/) is an obstruction map for the morphism of functors s: _↪→_↪. Thus, all assumptions in the standard smoothness criterion (standardsmooth) are satisfied, and we obtain that s:_↪→_↪ is smooth.
In light of the above result, we can use either the functor _↪ or _↪. Each functor offers different advantages. The functor _↪ does not require a section s, while computations with _↪ are simpler when there is a better understanding of the quotient sheaf /. In the application we are interested in, as will be described later in <ref> and <ref>, there is a unique section s, so we will prefer to work with the functor _↪.
In comparison to the functor _, the functors _↪ and _↪ offer the advantage of utilizing zero-cochains instead of one-cochains, simplifying computations. Moreover, in some situations (such as the one we will focus on in <ref>), the quotient sheaf / has a very concrete description that makes it nicer to deal with than ; this makes the use of _↪ especially attractive.
§ THE DEFORMATION EQUATION
§.§ Setup
Let X be a separated scheme of finite type over , and let be an affine open cover of X. Consider sheaves of Lie algebras and on X which fit into an exact sequence of the form (<ref>). The goal of this section is to explicitly construct the hull of the deformation functors _ and _↪ (see Definitions <ref> and functornew) under suitable hypotheses.
Although we primarily discuss this section in the context of the functor _→, all considerations apply similarly to the functor _ (see rem:defeq). According to Schlessinger's theorem (see, for example, <cit.>), _→ has a hull R∈ if
^1 _↪≅ H^0(X,/)/λ(H^0(X,))
is finite-dimensional. In other words, there exists a smooth morphism of functors f:(R,-)→_→ such that the induced map on tangent spaces is bijective. By adapting the ideas from <cit.>, we define the deformation equation and use it to explicitly construct the hull of _→, see hull.
To construct the deformation equation for _↪, we need a finite-dimensional obstruction space. Hence, we assume that H^1(X,/) is finite-dimensional, but not necessarily that ^1 _↪ is finite-dimensional. We first fix the following data:
* Elements θ_ℓ∈Ž^0(,/), ℓ=1,…,p;
* Elements ω_ℓ∈Ž^1(,/), ℓ=1,…,q whose images in H^1(X,/) form a basis.
We are working here with the Čech complex of alternating cochains Č^k(,/) with respect to the open cover .
Let S=[[t_1,…,t_p]] with maximal ideal =⟨ t_1,…,t_p⟩. We let _k denote the kth graded piece of , and _≤ k (respectively (^2)_≤ k) denote the direct sum of the graded pieces of degree at most k of (respectively ^2).
It will be useful to fix a graded local monomial order on S (see e.g. <cit.>).
For any ideal I⊆ S, the standard monomials of I are those monomials of S that are not leading monomials of any element of I. Assuming that I contains a power of , the normal form of any f∈ S with respect to I is the unique element f̅ such that f≡f̅ modulo I and f̅ only contains standard monomials.
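Purely as an illustration (a toy ideal of ours, not data from the paper), the following sympy snippet computes such a normal form for an ideal of the shape ⟨ g⟩+^r+1. We use the graded global order grevlex instead of a graded local order, which is harmless here because a power of the maximal ideal is always included; we also assume that GroebnerBasis.reduce returns the quotients and the remainder of division by the Gröbner basis:

from sympy import symbols, groebner

t1, t2 = symbols('t1 t2')
r = 3
g = t1*t2 + t1**3                                          # a toy obstruction polynomial
m_power = [t1**i * t2**(r + 1 - i) for i in range(r + 2)]  # generators of the (r+1)-st power of the maximal ideal
G = groebner([g] + m_power, t1, t2, order='grevlex')
# assumption: GroebnerBasis.reduce returns (quotients, normal form)
quotients, nf = G.reduce(t1**3 + t2**2)
print(nf)   # the normal form t2**2 - t1*t2 (up to term order in the printout): only standard monomials survive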
§.§ The deformation equation
For each r≥ 1, we inductively construct α^(r)∈Č^0(,/)⊗_≤ r and obstruction polynomials g_1^(r),…, g_q^(r)∈ (^2)_≤ r such that
λ((s(α^(r))))≡ 0 J_r,
where J_r=⟨ g_1^(r),…,g_q^(r)⟩+^r+1.
We begin by setting
α^(1)= ∑_ℓ=1^pt_ℓ·θ_ℓ and g_1^(1)=…= g_q^(1)=0.
Since α^(1)∈Ž^0(,/)⊗_≤ 1, it is immediate to see that
0=d(α^(1)) ≡λ((s(α^(1)))) J_1.
In practice, it is enough to solve the deformation equation
λ((s(α^(r))))-∑_ℓ=1^q g_ℓ^(r)·ω_ℓ≡ d(β^(r+1))+∑_ℓ=1^q γ_ℓ^(r+1)·ω_ℓ · J_r
for
β^(r+1)∈Č^0(,/)⊗_r+1; γ_ℓ^(r+1)∈_r+1.
We then set
α^(r+1)=α^(r)-β^(r+1); g_ℓ^(r+1)=g_ℓ^(r)+γ_ℓ^(r+1).
In Proposition <ref> we show that
λ((s(α^(r+1))))≡∑_ℓ=1^q g_ℓ^(r+1)·ω_ℓ · J_r.
In particular, the desired equation (<ref>) modulo J_r+1 follows from (<ref>) together with the observation that by construction, · J_r⊆ J_r+1.
As a convention, we set J_0=.
prop:defeqsolving
Let X be a separated scheme of finite type over . Consider sheaves of Lie algebras and on X which fit into an exact sequence of the form (<ref>). Suppose that H^1(X,/) is finite dimensional. Then:
* Given a solution α^(r),g_ℓ^(r) of (<ref>) modulo · J_r-1 with J_r ≡ J_r-1^r, there is a solution of (<ref>) modulo · J_r.
* Given any solution to (<ref>) modulo · J_r the resulting α^(r+1) and g_ℓ^(r+1) satisfy (<ref>) modulo · J_r and J_r+1≡ J_r^r+1.
We defer the proof of prop:defeqsolving to Appendix <ref>.
rem:sm
By passing to normal forms with respect to · J_r in (<ref>), we may assume that β^(r+1) and γ_ℓ^(r+1) only involve standard monomials of · J_r. By the inductive construction of J_r and the fact that our monomial order is graded, this means that we may assume that the g_ℓ^(r+1) only involve standard monomials of · J_r.
rem:relevant
In situations where the cohomology groups H^k(X,/) are multigraded, we may often obtain greater control over which monomials can appear in the obstruction polynomials g_ℓ^(r). This allows us to simplify computations. Given standard monomials t^w for · J_r-1 and t^w' for · J_r, we say that t^w is relevant for t^w' if there exists a monomial t^w” with t^w as a factor such that the monomial t^w' has non-zero coefficient in the normal form of t^w” with respect to · J_r.
This condition may be used as follows, assuming we are only using standard monomials as in rem:sm: if t^w is not relevant for t^w', then the coefficient of t^w in α^(r) has no effect on the coefficients of t^w' in the possible solutions β^(r+1) and γ_ℓ^(r+1) of (<ref>).
Indeed, to solve (<ref>), we must consider the normal form of
λ((s(α^(r))))-∑_ℓ=1^q g_ℓ^(r)ω_ℓ
with respect to · J_r. By rem:sm the term
∑_ℓ=1^q g_ℓ^(r)ω_ℓ is already in normal form. Furthermore, by the BCH formula (see <ref>) it follows that the coefficient of t^w in α^(r) will only affect the coefficients in λ((s(α^(r)))) of those monomials t^w” with t^w as a factor; passing to the normal form only affects coefficients of monomials for which t^w is relevant.
rem:defeq
It is also possible to give a similar treatment to the functor _ with the assumption that H^2(X,) is finite-dimensional. In this case, the zero and one-cochains in / we considered above are replaced by one and two-cochains in . Likewise, the map λ∘∘ s is replaced by . We leave it to the reader to fill in the details.
§.§ Versality
Using our solutions to the deformation equation (<ref>), we will construct the hull of _↪. Let g_ℓ
be the projective limit of g_ℓ^(r) in S, and let α be the projective limit of α^(r) in Č^0(,/)⊗ S. We define
J= ⟨ g_1,…,g_q⟩, R:=S/J and R_r:=S/J_r.
Then the pair (α, R)
defines a map of functors of Artin rings
f: (R,-)→_↪.
Indeed, every ζ∈(R,A) factors through a morphism ζ_r : R_r → A for r≫ 0, and then f(ζ)= ζ_r(α^(r))
defines f.
hull
Assume that the images of θ_1,…,θ_p in
Ȟ^0(,/)/λ(Ȟ^0(,))
are a basis. Then, f:(R,-)→_↪ is a hull, that is, f is smooth and induces an isomorphism on tangent spaces.
Since J⊆^2, we have ^1(R,-)≅ (_R/_R^2)^*≅ (/^2)^* (see, for example, <cit.>). The induced map on tangent spaces is given by
f:(/^2)^* →^1 _↪=_↪([t]/t^2)
t_ℓ^* ↦θ_ℓ⊗ t.
This map is an isomorphism, since the θ_ℓ⊗ t form a basis of ^1 _↪.
By lemma:obfinj (see Appendix <ref>), we have that ob_f is an injective obstruction map for f. Thus by standardsmooth it follows that f is smooth.
§ TORIC VARIETIES
§.§ Preliminaries
In this section, we will fix certain notation and recall relevant facts about toric varieties. We refer the reader to <cit.> for more details on toric geometry.
We will consider a lattice M with the dual lattice N=Hom(M,ℤ) and associated vector space N_ℝ=N⊗_ℤℝ.
Given a fan Σ in N_, there is an associated toric variety X_Σ with an action by the torus T_N=[M] (see <cit.>). The variety X_Σ has a T_N-invariant open affine cover ={U_σ}_σ∈Σ_max, where Σ_max is the set of maximal cones in Σ. Here, each U_σ is defined as
U_σ= Spec [σ^∨∩ M]; σ^∨={∈ M⊗: v()≥0 for all v∈σ}.
We will denote the regular function on T_N associated with ∈ M by χ^.
We denote the set of rays of Σ by Σ(1). Given a ray ρ∈Σ(1), the primitive lattice generator of ρ is denoted by n_ρ and the evaluation of n_ρ at ∈ M is denoted by ρ().
Recall that the support of the fan Σ is |Σ|=⋃_σ∈Σσ.
Important aspects of the geometry of X_Σ can be seen from the combinatorics of the fan Σ. In particular:
* The variety X_Σ is smooth if and only if Σ is smooth, that is every cone σ∈Σ is generated by part of a basis of N <cit.>;
* X_Σ is -factorial if and only if Σ is simplicial <cit.>;
* X_Σ is complete if and only if |Σ|=N_ <cit.>.
The variety X_Σ has a torus factor if it is equivariantly isomorphic to the product of a nontrivial torus and a toric variety of smaller dimension. We will frequently assume that the toric variety does not have a torus factor; this is equivalent to the condition that N_ is spanned by the primitive ray generators n_ρ for ρ∈Σ(1) <cit.>.
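As a quick sanity check of the smoothness criterion above (our example: the Hirzebruch surface F_2, which is not one of the examples of the paper), for a complete simplicial surface fan each maximal cone is smooth exactly when its two primitive generators form a ℤ-basis, i.e. have determinant ±1:

rays = [(1, 0), (0, 1), (-1, 2), (0, -1)]        # primitive ray generators of F_2
max_cones = [(0, 1), (1, 2), (2, 3), (3, 0)]     # pairs of rays spanning the maximal cones

def det2(a, b):
    return a[0] * b[1] - a[1] * b[0]

print(all(abs(det2(rays[i], rays[j])) == 1 for i, j in max_cones))   # True: F_2 is smooth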
§.§ Cohomology of the structure sheaf
Let Σ be a fan in N_ and X_Σ be the associated toric variety. The cohomology groups of the structure sheaf of X_Σ naturally have an M-grading, and each graded piece can be understood combinatorially using certain subsets of N_. For every ∈ M, we associate the subset of N_ given by
V_:=⋃_σ∈Σ{n_ρ: ρ()<0}_ρ∈Σ(1)∩σ⊆ N_.
The following result establishes a relation between the sheaf cohomology of the structure sheaf _X_Σ and the reduced singular cohomology of V_.
van1
For ∈ M and k≥ 0, we have
H^k(X,_X_Σ)_≅H^k-1(V_,).
When the set V_ is contractible to a point, its reduced singular cohomology vanishes. In fact, we will make use of the following vanishing result:
prop:O
Suppose that |Σ| is convex. Then for all k>0,
H^k(X_Σ,_X_Σ)=0.
§.§ Cohomology of the torus invariant divisors
Torus invariant divisors on X_Σ and the cohomology groups of the associated reflexive sheaves can be understood combinatorially as well; we focus here on the prime torus invariant divisors which are in bijection with the rays of Σ <cit.>. We denote the divisor corresponding to ρ∈Σ(1) by D_ρ.
Similar to above, the torus action on X_Σ induces a natural M-grading on the sections of the sheaf (D_ρ). The homogeneous part of H^0(U_σ,(D_ρ)) of degree is denoted by H^0(U_σ,(D_ρ))_. By <cit.>, we have
H^0(U_σ,(D_ρ))_ is nonzero if and only if for all ρ'∈Σ(1)∩σ,
ρ'()≥ 0 if ρ'≠ρ, and ρ'()≥ -1 if ρ'=ρ.
For ρ∈Σ(1), and ∈ M, we define the subset
V_ρ,:= ⋃ _σ∈Σ{ n_ρ' |[ ρ'()<0 ρ'≠ρ; ρ'()<-1 ρ'=ρ ]}_ρ'∈Σ(1)∩σ⊆ N_.
When Σ is simplicial, this is a (topological realization of a) simplicial complex.
If ρ()≠-1, it is immediate to see that V_ρ,=V_. For every σ∈Σ, there is a canonical exact sequence
0 → H^0(U_σ,(D_ρ))_· H^0(V_ρ,∩σ,) → 0,
see <cit.>.
Since λ is either an isomorphism or the zero map, there is a unique -linear section s: H^0(V_ρ,∩σ,)→ of λ.
There is a natural closed cover
_ρ,={V_ρ,∩σ}_σ∈Σ_max
of the set V_ρ, indexed by elements of Σ_max, with all of its intersections being contractible.
The above exact sequence of vector spaces leads to a short exact
sequence of alternating Čech complexes with respect to the covers ,Σ_max,_ρ, (cf. <cit.>,<cit.>):
⊕_ρ∈Σ(1)
∈ MČ^k(,(D_ρ))_[r,hook,"ι"] ⊕_ρ∈Σ(1)
∈ MČ^k(Σ_max,) [r,twoheadrightarrow,"λ"below] ⊕_ρ∈Σ(1)
∈ MČ^k(_ρ,,) [l, bend right=20,"s"].
We will use d to refer to the differential of any one of these complexes. Note that although ι and λ are compatible with d, the section s is not.
We will frequently use the notation χ^ and f_ρ to distinguish between the individual direct summands appearing in the terms of (<ref>), for example, a basis of ⊕_ρ∈Σ(1)
∈ MČ^k(Σ_max,) is given by {χ^· f_ρ}_ρ,.
For the middle and right Čech complexes in (<ref>), there exists a canonical isomorphism between Čech cohomology and singular cohomology (see <cit.>). In other words, we have
H^k(Č^∙(Σ_max,))≅ H^k(|Σ|,); H^k(Č^∙(_ρ,,))≅ H^k(V_ρ,,) .
Since H^0(|Σ|,)= and H^k(|Σ|,)=0 for k≥1, the long exact sequence of cohomology implies that the boundary map induces an isomorphism
H^k-1(V_ρ,,)≅ H^k(X_Σ,(D_ρ))_
for k≥1,
see <cit.>.
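The following Python sketch (ours; it uses the Hirzebruch surface F_2 rather than an example from the paper, and every name in it is our own) assembles the faces of V_ρ, from the inequalities above and counts connected components. For ρ=(0,1) and =(-1,-1) the complex consists of two isolated vertices, so its reduced H^0 is one-dimensional and H^1(X_Σ,(D_ρ))_ is nonzero by the identification above; this is consistent with the classical fact that F_2 has a one-dimensional space of first-order deformations.

import itertools

rays = [(1, 0), (0, 1), (-1, 2), (0, -1)]      # primitive ray generators n_rho of F_2
max_cones = [(0, 1), (1, 2), (2, 3), (3, 0)]   # indices of rays spanning maximal cones

def pairing(n, u):
    return sum(a * b for a, b in zip(n, u))

def faces_of_V(rho, u):
    # for each maximal cone, the vertices of V_{rho,u} lying in that cone
    return [[r for r in cone if pairing(rays[r], u) < (-1 if r == rho else 0)]
            for cone in max_cones]

def n_components(faces):
    verts = sorted({v for f in faces for v in f})
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for f in faces:
        for a, b in itertools.combinations(f, 2):
            parent[find(a)] = find(b)
    return len({find(v) for v in verts})

rho, u = 1, (-1, -1)                      # the ray rho = (0, 1) and a degree with rho(u) = -1
print(faces_of_V(rho, u))                 # [[0], [2], [2], [0]]: two isolated vertices
print(n_components(faces_of_V(rho, u)))   # 2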
cor:vanishdegree
Suppose that H^k(X_Σ,_X_Σ)_=0. If ρ()≠-1, then
H^k-1(V_ρ,,)=0.
If ρ()≠-1, then V_ρ,=V_, and the claim follows from van1.
exactseqlocal
Since T_N⊆ U_σ for every σ, we have the injection
ι: T_N↪⋂ U_σ.
At this stage, we have not yet endowed (D_ρ) with the structure of a sheaf of Lie algebras. Nonetheless, we can imitate Construction <ref>, and obtain an exact sequence of quasi-coherent sheaves
0 [r] (D_ρ) [r,"ι"] ι_*((D_ρ)_|T_N) [r,"λ"] ι_*((D_ρ)_|T_N)/ (D_ρ) [r] 0.
Additionally, we observe that
(D_ρ)(U_σ) = ⊕_∈ M H^0(U_σ,(D_ρ))_,
ι_*((D_ρ)_|T_N)(U_σ) = (D_ρ)(T_N)
= ⊕_∈ M.
Comparing with (<ref>),
we can view the quotient sheaf as follows:
ι_*((D_ρ)_|T_N)/ (D_ρ)(U_σ) = ⊕_∈ M H^0(V_ρ,,).
Subsequently, the middle and right Čech complexes in (<ref>) can also be expressed as
⊕_ρ∈Σ(1)
∈ MČ^k(Σ_max,) =
⊕_ρ∈Σ(1)Č^k(,ι_*((D_ρ)_|T_N)),
⊕_ρ∈Σ(1)
∈ MČ^k(_ρ,,) = ⊕_ρ∈Σ(1)Č^k(,ι_*((D_ρ)_|T_N)/ (D_ρ) ).
We consider the toric threefold X_Σ whose fan Σ in ^3 may be described as follows.
The generators of its rays are given by the columns of the following matrix:
ρ_0 ρ_1 ρ_2 ρ_3 ρ_4 ρ_5 ρ_6 ρ_7 ρ_8 ρ_9 ρ_10 ρ_11 ρ_12
0 0 -1 -1 -1 0 1 0 0 0 1 1 2
0 1 1 0 -1 -1 0 0 1 -1 0 0 0
1 1 1 1 1 1 1 -1 0 0 0 -1 -1.
A set of rays forms a cone in Σ if the corresponding set of vertices belongs to a common simplex in Figure fig:fan2, where the ray ρ_7 corresponds to the point at infinity. Taking the lattice N=^3, it is straightforward to verify that Σ is smooth and complete.
It can be shown that H^1(X_Σ,(D_ρ))_≅H^0(V_ρ,,) is non-zero only for the (ρ,) pairs
(ρ_3,(1,0,0)),(ρ_11,(0,0,1)),(ρ_10,(-1,0,-1)),(ρ_10,(-1,0,0)).
Similarly, H^2(X_Σ,(D_ρ))_ is zero except for the (ρ,) pair (ρ_0,(0,0,-1)). Although in principle there are infinitely many ray-degree pairs that need to be checked, there are actually only finitely many different simplicial complexes that can occur, and each case can be verified individually.
For the cases with non-vanishing cohomology, Figure fig:hyp shows the intersections of Σ with, and projections of the simplicial complexes V_ρ, onto, the hyperplane ⟨ -,⟩=-1.
The first four simplicial complexes have two connected components, hence dim H^0(V_ρ,,)=1, while the final simplicial complex has a single cycle, hence dim H^1(V_ρ,,)=1. We will see in Example <ref> that the toric threefold X_Σ is unobstructed.
§.§ Euler sequence
Let X_Σ be a -factorial toric variety with no torus factors. To understand locally trivial deformations of X_Σ we will need control of its tangent sheaf. There is an exact sequence of sheaves
0 [r] _((X_Σ),)⊗__X [r] ⊕_ρ∈Σ(1)(D_ρ) [r,"η"] _X_Σ[r] 0
called the Euler sequence, see <cit.> (and dualize). By <cit.>, the image of the local section χ^ of (D_ρ) is given by the derivation (ρ,) defined by
(ρ,)(χ^)=ρ()χ^+.
When (X_Σ) is trivial, the map η is an isomorphism.
prop:isofromEulereq
Let X_Σ be a -factorial toric variety with no torus factors. Then for k≥0
η: ⊕_ρ∈Σ(1)H^k(X_Σ,(D_ρ))→ H^k(X_Σ,_X_Σ)
is:
* injective if H^k(X_Σ,_X_Σ)=0 and k≥1;
* surjective if H^k+1(X_Σ,_X_Σ)=0 and k≥0.
The claims follow by applying the assumptions of cohomology vanishing to the long exact sequence induced by the Euler sequence, see also <cit.>,<cit.>.
Combining η with (<ref>) we obtain the following:
prop:cohom2
Let X_Σ be a -factorial toric variety with no torus factors. Then for k≥1
⊕_ρ∈Σ(1), ∈ M
ρ()=-1H^k-1(V_ρ,,) → H^k(X_Σ,_X_Σ)
is:
* injective if H^k(X_Σ,_X_Σ)=0;
* surjective if H^k+1(X_Σ,_X_Σ)=0;
The proof follows from combining the results of prop:isofromEulereq, Equation (<ref>), and cor:vanishdegree.
§ DEFORMATIONS OF TORIC VARIETIES
§.§ The combinatorial deformation functor
Let X_Σ be a -factorial toric variety with no torus factors. In this section, we define the combinatorial deformation functor and show that under appropriate hypotheses it is isomorphic to '_X_Σ, the functor of locally trivial deformations of X_Σ.
We define a bilinear map
[-,-]:⊕_ρ∈Σ(1)(D_ρ)|_T_N×⊕_ρ∈Σ(1)(D_ρ)|_T_N→⊕_ρ∈Σ(1)(D_ρ)|_T_N
by setting
[χ^· f_ρ,χ^'· f_ρ']:=ρ(')χ^+'· f_ρ'-ρ'()χ^+'· f_ρ
and extending linearly.
Here f_ρ and f_ρ' denote the canonical sections of (D_ρ)|_T_N and (D_ρ')|_T_N respectively.
It is straightforward to check that [-,-] is alternating and satisfies the Jacobi identity. Hence, it is a Lie bracket.
thm:liebracket
Let X_Σ be a -factorial toric variety without any torus factors. Then the bracket of Definition <ref> extends to a Lie bracket on ⊕_ρ∈Σ(1)(D_ρ) such that the map
η: ⊕_ρ∈Σ(1)(D_ρ) →_X_Σ
of the Euler sequence (see <ref>) is a map of sheaves of Lie algebras.
Suppose that χ^∈ H^0(U_σ,(D_ρ)) and χ^'∈ H^0(U_τ,(D_ρ')). By (<ref>), for all ρ”∈Σ(1) ∩σ∩τ we obtain the following system of inequalities:
ρ'()≥0, ρ'(')≥-1;
ρ”()≥0, ρ”(')≥0 if ρ”≠ρ,ρ';
ρ()≥ -1, ρ(')≥0.
This means that for all ρ”∈Σ(1) ∩σ∩τ, we have ρ”(+')≥ 0 in each of the following cases: if ρ”≠ρ, ρ'; if ρ”=ρ and ρ(')≠0; and if ρ”=ρ' and ρ'()≠0.
Thus, using (<ref>) again we observe that
ρ'()χ^+'∈ H^0(U_σ∩ U_τ, (D_ρ)), ρ(')χ^+'∈ H^0(U_σ∩ U_τ, (D_ρ')).
Therefore, the bracket of Definition <ref> extends to a Lie bracket on ⊕_ρ(D_ρ).
It is straightforward to verify that
[(ρ,), (ρ',')]=ρ(')(ρ',+')-ρ'()(ρ,+').
It follows that the Lie bracket on ⊕_ρ(D_ρ) is compatible with η and the Lie bracket on _X_Σ, hence η is a map of sheaves of Lie algebras.
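The identity just used can be checked numerically as well; the following sketch (ours) applies both sides of the displayed bracket relation to a monomial χ^m, using (ρ,)(χ^m)=ρ(m)χ^m+, for random integer data:

import numpy as np

rng = np.random.default_rng(2)
n_rho, n_rhop = rng.integers(-3, 4, size=3), rng.integers(-3, 4, size=3)   # ray generators
u, up, m = (rng.integers(-3, 4, size=3) for _ in range(3))                 # two degrees and a test monomial

def der(n, deg, coeff_exp):
    # der(rho, u) applied to coeff * chi^exp: returns (coeff * <n, exp>, exp + u)
    coeff, exp = coeff_exp
    return (coeff * n.dot(exp), exp + deg)

lhs = (der(n_rho, u, der(n_rhop, up, (1, m)))[0]
       - der(n_rhop, up, der(n_rho, u, (1, m)))[0])
rhs = (n_rho.dot(up) * der(n_rhop, u + up, (1, m))[0]
       - n_rhop.dot(u) * der(n_rho, u + up, (1, m))[0])
print(lhs == rhs)   # True: both sides act identically on chi^m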
Combining with exactseqlocal and Construction <ref>, the above result allows us to define the homotopy fiber analogue deformation functor (see Definition <ref>) for toric varieties. We make this explicit here.
Let X_Σ be a -factorial toric variety.
We define to be the composition λ∘∘ s.
The combinatorial deformation functor _Σ:→ is defined on objects as follows:
_Σ(A) = {α∈⊕_ρ, Č^0(_ρ,,)⊗_A : (α)=0 } /∼
where α=β in _Σ(A) if and only if there exists γ∈⊕_ρ,Č^0(,(D_ρ))⊗_A such that
ι(γ)⊙(s(α)) = (s(β)).
It is defined on morphisms in the obvious way.
The following is our primary result:
thm:combiso
Let X_Σ be a -factorial toric variety without any torus factors.
Suppose that H^1(X_Σ,_X_Σ)=H^2(X_Σ,_X_Σ)=0. Then, _Σ is isomorphic to '_X_Σ. In fact, we have the following isomorphisms of deformation functors:
_Σι^-1∘∘ s≅_⊕_ρ(D_ρ)η≅__X_Σexp≅'_X_Σ.
According to rem:applications<ref>, we have __X_Σexp≅'_X_Σ. By Theorem <ref>, η induces a morphism of Čech complexes of sheaves of Lie algebras:
η:⊕_ρ∈Σ(1)Č^∙(,(D_ρ)) →Č^∙(,_X_Σ).
Using the vanishing of H^k(X_Σ,_X_Σ) for k=1 and 2 and prop:isofromEulereq, we have that ⊕ H^k(X_Σ, (D_ρ)) → H^k(X_Σ, _X_Σ) is surjective for k=0, bijective for k=1 and injective for k=2.
Hence, Theorem <ref> implies that
_⊕_ρ(D_ρ)η≅__X_Σ.
Combining thm:liebracket and exactseqlocal, _Σ may be identified with the homotopy fiber analogue deformation functor
_↪
(Definition <ref>) where =⊕_ρ(D_ρ) and =⊕_ρ(D_ρ)_|T_N. By compa2, prop:isosecfunctor, and rem:H1vanishing we obtain
_Σι^-1∘∘ s≅_⊕_ρ(D_ρ) ,
completing the proof.
We are especially interested in understanding all deformations, not just locally trivial ones. The following corollary allows us to do this when the singularities of X_Σ are mild enough.
mildsing
Let X_Σ be a complete toric variety. Assume that X_Σ is smooth in codimension 2, and -factorial in codimension 3. Let Σ be any simplicial subfan of Σ containing all three-dimensional cones of Σ. Then _X_Σ is isomorphic to _Σ.
First, we observe that any toric variety associated with a fan is Cohen-Macaulay, see <cit.>. Since Σ is a subfan of Σ, the resulting toric variety X_Σ is an open subset of X_Σ. By the construction of Σ, the inequality (X_Σ∖ X_Σ)≥ 4 follows from the orbit-cone correspondence (<cit.>). This allows us to utilize CMiso (see Appendix <ref>), thereby obtaining the isomorphism
_X_Σ≅_X_Σ.
From the preceding discussion, our focus now shifts to _X_Σ. Observe that X_Σ is smooth in codimension 2 and -factorial. By <cit.> (cf. also <cit.>), the deformations of X_Σ are locally trivial, leading us to conclude that
_X_Σ≅'_X_Σ.
We will now prove that
_Σ≅'_X_Σ
by showing that X_Σ satisfies all the assumptions in thm:combiso; this will complete the proof. For ∈ M, recall the sets V_ defined in <ref>, and denote the corresponding set for Σ by V_. Let V_^(2) denote the 2-skeleton of V_. Concretely,
V_: =⋃_σ∈Σ{n_ρ: ρ()<0}_ρ∈Σ(1)∩σ⊆ N_,
V_^(2): =⋃_σ∈Σ(3){n_ρ: ρ()<0}_ρ∈Σ(1)∩σ⊆ N_,
where Σ(3) denotes the 3-dimensional cones of Σ.
It is straightforward to see that the singular cohomology groups H^k-1(V_,) for k=1,2 depend solely on V_^(2). Moreover, since Σ contains all three-dimensional cones of Σ, we have
V_^(2)= V_^(2).
It follows from the preceding discussion combined with van1 that for k=1,2 and every ∈ M,
H^k(X_Σ,_X_Σ)_≅ H^k-1(V_,)= H^k-1(V^(2)_,)≅ H^k(X_Σ,_X_Σ)_.
Moreover, since Σ is a complete fan and hence |Σ| is convex, by prop:O, we have H^k(X_Σ,_X_Σ)=0 for k≥ 1. Therefore, by (<ref>) we obtain that
H^1(X_Σ,_X_Σ)= H^2(X_Σ,_X_Σ)=0.
Additionally, as Σ is complete, X_Σ does not have any torus factors. Since Σ(1)=Σ(1), the variety X_Σ also does not have any torus factors.
By the above discussion, X_Σ satisfies all the assumptions in thm:combiso. Therefore, we conclude that
_Σ and '_X_Σ are isomorphic.
Let X=X_Σ be a -factorial toric variety with no torus factors.
The Lie bracket on ⊕_ρ∈Σ(1)(D_ρ) may be interpreted as coming from the Lie bracket on the tangent sheaf of the affine space ^#Σ(1) associated to the Cox ring of X_Σ, as we now briefly explain.
The variety X=X_Σ arises as a geometric quotient π:X→ X of an open subset X of ^#Σ(1) under the action of the quasitorus G=((X), ^*), see <cit.>. We will call X the Cox torsor of X (although it is actually only a torsor when X is smooth).[In e.g. <cit.>, X is called the characteristic space of X.]
The variety X is itself toric, given by a fan Σ whose cones are in dimension-preserving bijection with the cones of Σ (cf. <cit.>). In particular, for a cone σ∈Σ, denote by σ∈Σ the corresponding cone. Since affine space has trivial class group, the generalized Euler exact sequence for the Cox torsor gives a torus-equivariant isomorphism
η: ⊕_ρ∈Σ(1)(D_ρ) →_X_Σ
inducing a G-equivariant Lie bracket on ⊕_ρ∈Σ(1)(D_ρ).
Moreover, for any torus-invariant open subset U_σ⊆ X, π induces an isomorphism
⊕_ρ∈Σ(1)(D_ρ)(U_σ) π^*≅(⊕_ρ∈Σ(1)(D_ρ)^G(U_σ)).
The Lie bracket on
⊕_ρ∈Σ(1)(D_ρ)
from Definition <ref> is exactly obtained by applying this isomorphism to the above bracket on ⊕_ρ∈Σ(1)(D_ρ).
rem:cox2
The discussion of Remark <ref> can be used to show that there is an isomorphism between _⊕(D_ρ) and the functor _X^G of G-invariant deformations of the Cox torsor X.
Indeed, the sheaf _X^G≅⊕_ρ∈Σ(1)(D_ρ)^G controls G-invariant deformations of X (since X is smooth). By the above we have an isomorphism of Čech complexes
⊕_ρ∈Σ(1)Č^∙({U_σ},(D_ρ)) π^*≅⊕_ρ∈Σ(1)Č^∙({U_σ},(D_ρ)^G)
and the claim follows.
Note that if H^1(X,_X)= H^2(X,_X)=0, thm:combiso implies that we actually have an isomorphism
_X^G →'_X_Σ,
that is, G-invariant deformations of the Cox torsor are equivalent to locally trivial deformations of X_Σ.
This is very much related to the work of <cit.>, in which G-invariant deformations of some affine scheme Y are compared with deformations of the quotient X under the action of G of an invariant open subscheme U⊆ Y; here G is a linearly reductive group. In the toric setting, the natural affine scheme Y to consider is the spectrum of the Cox ring ^#Σ(1). However, as affine space is rigid, one will only obtain trivial deformations in this manner.
The above discussion can be generalized far beyond toric varieties. Let X be any normal -factorial variety with finitely generated class group and no non-trivial global invertible functions, and let X be the relative spectrum of its Cox sheaf <cit.>. We again call X the Cox torsor of X (although it is only a torsor if X is factorial). As before, π:X→ X is a geometric quotient by the group G=((X),^*).
If X is factorial, there is a generalized Euler sequence
0→_ ((X),)⊗_X→π_*(_X^G) →_X→ 0,
see e.g. <cit.> for even more general conditions guaranteeing such a sequence.
In any case, given such a generalized Euler sequence, if H^1(X,_X)=H^2(X,_X)=0, we may apply Theorem <ref> to conclude that _X' is isomorphic to the functor
(_X')^G
of locally trivial G-invariant deformations of the Cox torsor X.
§.§ The combinatorial deformation equation
In this section, we specialize the setup discussed in <ref> to the combinatorial deformation functor _Σ.
As discussed in the proof of thm:combiso, this functor may be identified with a homotopy fiber analogue deformation functor _↪ with respect to the open cover ={U_σ}_σ∈Σ_max of X_Σ, allowing us to apply the setup of <ref>. However, we will think about _Σ instead as in Definition <ref> with respect to the various closed covers _ρ,, which are also indexed by elements of Σ_max.
To start solving the deformation equation (<ref>), we need to fix bases for the tangent and obstruction spaces of _Σ.
We make this explicit here.
By exactseqlocal and Equation (<ref>) we obtain the tangent space
^1 _Σ≅⊕_ρ∈Σ(1), ∈ M H^0(V_ρ,,).
To any connected component C of the simplicial complex V_ρ, we associate a zero cocycle θ_C ∈Ž^0(_ρ,,) as follows:
{θ_C}_σ=
1 if σ∈Σ_max and C∩σ≠∅,
0 otherwise.
The images of θ_C span H^0(V_ρ,,) and removing any one of these provides a basis.
Doing this for all ρ, with H^0(V_ρ,,)≠ 0, we obtain a basis
θ_1 ,…,θ_p
of ^1 _Σ
where each θ_i is of the form θ_C·χ^· f_ρ for some ρ∈Σ(1), ∈ M, and connected component C of V_ρ, (see Convention <ref>).
This will also determine
α^(1)= ∑_ℓ=1^p t_ℓ·θ_ℓ.
An obstruction space for _Σ is given by
⊕_ρ∈Σ(1), ∈ M H^1(V_ρ,,),
again by exactseqlocal and Equation (<ref>). Before choosing cocycles ω_1,…,ω_q whose images give a basis of the obstruction space,
we fix any -linear map
ψ: ⊕_ρ,Č^1(_ρ,,) →⊕_ρ,Č^0(_ρ,,)
that is compatible with the direct sum decomposition and such that d∘ψ(ω)=ω if ω∈Č^1(_ρ,,) is a coboundary.
Here is one explicit way to do this: fix an ordering of the elements of Σ_max. For each connected component C of V_ρ,, this determines a unique cone σ_C that is minimal among all cones σ∈Σ_max intersecting C non-trivially.
For any cone σ intersecting C, there is a unique sequence τ_1=σ,τ_2,…,τ_k=σ_C such that for each i≥ 1, τ_i∩τ_i+1∩ C≠∅, k is minimal, and the sequence is minimal in the lexicographic order with respect to the previous properties.
Given ω∈Č^1(_ρ,,)
we then define
ψ(ω)_σ=∑_i=1^k-1ω_τ_i,τ_i+1
if σ∩ C≠∅ for some connected component C, and set ψ(ω)_σ= 0 otherwise.
Note that in particular, ψ(ω)_σ_C=0. It is straightforward to verify that ψ has the desired property.
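A possible implementation of ψ is sketched below in Python (ours). We do not reproduce the lexicographic tie-breaking verbatim; any consistent choice of shortest paths to the base cone σ_C produces a map with d∘ψ(ω)=ω on coboundaries, which is the property needed in the sequel. Cones are indexed by integers, faces[σ] lists the vertices of V_ρ,∩σ, and ω is recorded on ordered pairs of cone indices.

import itertools

def psi(faces, omega):
    cones = [i for i, f in enumerate(faces) if f]
    # two cones are adjacent when they share a vertex of the complex
    adj = {i: sorted(j for j in cones if j != i and set(faces[i]) & set(faces[j]))
           for i in cones}
    result = {i: 0 for i in range(len(faces))}
    seen = set()
    for start in sorted(cones):          # one pass per component; base cone = smallest index
        if start in seen:
            continue
        parent, order = {start: None}, [start]
        for c in order:                  # BFS from the base cone
            for nb in adj[c]:
                if nb not in parent:
                    parent[nb] = c
                    order.append(nb)
        seen.update(order)
        for c in order:                  # sum omega along the path back to the base
            v, total = c, 0
            while parent[v] is not None:
                total += omega.get((v, parent[v]), -omega.get((parent[v], v), 0))
                v = parent[v]
            result[c] = total
    return result

# check d(psi(omega)) = omega for a coboundary omega = d(beta)
faces = [[0, 1], [1, 2], [2, 3], []]     # a toy complex; cone 3 does not meet V
beta = {0: 2.0, 1: -1.0, 2: 5.0}
omega = {(s, t): beta[s] - beta[t]
         for s, t in itertools.combinations(range(3), 2)
         if set(faces[s]) & set(faces[t])}
p = psi(faces, omega)
print(all(abs((p[s] - p[t]) - w) < 1e-12 for (s, t), w in omega.items()))   # True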
We now choose one-cocycles
ω_1,…,ω_q∈⊕Ž^1(_ρ,,)
whose images in
⊕_ρ∈Σ(1), ∈ M H^1(V_ρ,,)
form a basis, such that each ω_i lies in a single direct summand, and such that d(ψ(ω))=0. From an arbitrary set of cocycles ω_1',…,ω_q' whose images form a basis, we may set
ω_i=ω_i'-d(ψ(ω_i'))
to obtain that d(ψ(ω_i))=0.
In this situation, we refer to the corresponding deformation equation (<ref>) for _Σ as the combinatorial deformation equation. By prop:defeqsolving, we know that for each order r it has a solution
β^(r+1)∈⊕_ρ,Č^0(_ρ,,)⊗_r+1; γ_ℓ^(r+1)∈_r+1.
In fact, we may obtain a solution using the map ψ:
Let η∈⊕_ρ,Č^1(_ρ,,)⊗_r+1 be the normal form of (α^(r))-∑_ℓ=1^qg_ℓ^(r)ω_ℓ with respect to · J_r. Then β^(r+1)=ψ(η) and γ_ℓ^(r+1)
give a solution to the combinatorial deformation equation, where γ_ℓ^(r+1) is determined by
∑_ℓ=1^q γ_ℓ^(r+1)·ω_ℓ=η-d(β^(r+1)).
By prop:defeqsolving<ref>, there exists
β' ∈⊕_ρ,Č^0(_ρ,,)⊗_r+1; γ_ℓ'∈_r+1
such that
η= d(β')+∑_ℓ=1^q γ_ℓ' ·ω_ℓ.
Applying d∘ψ to both sides we obtain
d(β^(r+1))= d(β').
Since the ω_ℓ are linearly independent, this implies γ_ℓ^(r+1)=γ_ℓ'.
To summarize the contents of Proposition <ref>, in order to solve the combinatorial deformation equation, we only need to reduce (α^(r))-∑_ℓ=1^qg_ℓ^(r)·ω_ℓ to its normal form with respect to · J_r (an unavoidable algebraic step), and then apply the map ψ (a purely combinatorial step).
By hull, iteratively solving the combinatorial deformation equation gives us a procedure for computing the hull of _Σ.
We will do this explicitly in several examples in <ref>.
§.§ Higher order obstructions
To solve the combinatorial deformation equation discussed in <ref> we have to compute (s(α)). In this section, we will derive a general formula for (s(α)) which is of theoretical interest and present explicit formulas for lower order terms.
We first establish some notation. For w=(w_1,…,w_p)∈^p_≥0, we denote t_1^w_1… t_p^w_p by t^w. We choose θ_1,…,θ_p using the construction in <ref>. Let φ∈(^p,M) be the map sending the ℓ-th basis vector of ^p to the degree in M of θ_ℓ. Consider
α=∑_ρ∈Σ(1)∑_w∈_≥ 0^p ∖{0} c^w_ρ· t^w·χ^φ(w)· f_ρ∈⊕_ρ∈Σ(1), ∈ M Č^0(_ρ,,)⊗,
where c^w_ρ is a cochain in Č^0(_ρ,φ(w),) and χ^φ(w)· f_ρ specifies the summand in which c_ρ^w lies (see Convention <ref>).
For integers d≥ 1 and 1≤ k ≤ d we define the set Δ_d,k as follows:
Δ_d,k={(a_1,…,a_d)∈_≥ 1^d : a_i <i or a_i=k-∑_j>i, j< a_j (a_j-j) if i<k; a_i=k if i=k; a_i< i if i> k }.
Likewise, for a∈Δ_d,k, we set
(a)=(-1)^#{i | a_i>i}.
For an integer d≥ 1 and w ∈_≥ 0^p ∖{0} we define the set ∇_w,d as follows:
∇_w,d={=(_1,…,_d)∈ (_≥ 0^p∖{0})^d | ∑_i=1^d _i=w}.
Here we give some examples of Δ_d,k and ∇_w,d. For d=3 and k=1,2,3 we have the following sets:
Δ_3,1={(1,1,1),(1,1,2)}
Δ_3,2={(2,2,1),(2,2,2)}
Δ_3,3={(3,1,3),(2,3,3)}.
For w=(1,1,1) and d=2,3, we have the following sets:
∇_(1,1,1),3 ={(π(e_1),π(e_2),π(e_3)) : π∈ S_3}
∇_(1,1,1),2 ={(π(e_1),π(e_2+e_3)) : π∈ S_3}.
Here, π(e_i) denotes the image of the standard basis vector e_i of ^3 under the action of π∈ S_3.
These sets appear in the formula for the coefficient of
t_1t_2t_3 in Table Table:obspoly.
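The sets ∇_w,d are finite and easy to enumerate. The following Python helper is an illustrative sketch (the function name and input conventions are ours, not taken from the ancillary files); it lists all ordered d-tuples of nonzero vectors in _≥ 0^p summing to w, and confirms that ∇_(1,1,1),3 has six elements, one for each permutation in S_3.

from itertools import product

def nabla(w, d):
    """All ordered d-tuples of nonzero vectors in Z_{>=0}^len(w) summing to w."""
    nonzero = [v for v in product(*(range(x + 1) for x in w)) if any(v)]
    tuples = []

    def extend(prefix, rest):
        if len(prefix) == d:
            if all(r == 0 for r in rest):
                tuples.append(tuple(prefix))
            return
        for v in nonzero:
            if all(vi <= ri for vi, ri in zip(v, rest)):
                extend(prefix + [v], tuple(ri - vi for ri, vi in zip(rest, v)))

    extend([], tuple(w))
    return tuples

print(len(nabla((1, 1, 1), 3)))   # 6, one tuple for each permutation in S_3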
Consider any BCH formula
x y=∑_d ≥ 1∑_σ(x,y)∈_d b_σ [σ(x,y)]
where b_σ∈, _d is some set of words σ(x,y) of length d in x and y, and
[σ(x,y)] denotes the iterated Lie bracket (see <ref>); we may for example take Dynkin's formula. Set
(σ)=(-1)^#{i | σ_i=y}.
We will use the notation ρ⃗=(ρ_1,…,ρ_d) for a d-tuple of rays of Σ. In this section, for compactness of notation and to avoid confusion with elements of _d, we will denote elements of Σ_max by i and j.
thm:obsformula
The coefficient of t^w in (s(α))_ij is the product of χ^φ(w)
with
∑_d≥ 1, ∈∇_w,d, ρ⃗∈Σ(1)^d(∑_σ∈_d(σ) b_σ∏_k=1^d (c^_k_ρ_k)_σ(i,j)_d-k+1)(∑_k=1,…,d, a∈Δ_d,k(a)∏_ℓ≠ kρ_ℓ(φ(_a_ℓ)) · f_ρ_k).
We postpone the proof of thm:obsformula until the end of this section. We first proceed to describe the explicit formulas for lower order terms. Using the notation
α^(1)_i = t_1·χ^_1· c^(1,0)_1,i· f_1 + t_2·χ^_2· c^(0,1)_2,i· f_2
α^(2)_i = α^(1)_i+
t_1t_2·χ^_1+_2· (c^(1,1)_1,i· f_1+ c^(1,1)_2,i· f_2)
α^(3)_i = α^(2)_i+ t_1^2t_2·χ^2_1+_2· (c^(2,1)_1,i· f_1+ c^(2,1)_2,i· f_2)+ t_1t_2^2·χ^_1+2_2· (c^(1,2)_1,i· f_1+ c^(1,2)_2,i· f_2)
we list the formulas for the coefficients of t_1t_2,t_1t_2^2,t_1t_2^3 and t_1^2t_2^2 in Table <ref>. By applying λ to the coefficient of t_1t_2 in (s(α^(1)))_ij, we recover the combinatorial cup-product from <cit.>. Similarly, using the notation
α^(2)_i = t_1·χ^_1· c^e_1_1,i· f_1 + t_2·χ^_2· c^e_2_2,i· f_2+ t_3·χ^_3· c^e_3_3,i· f_3+
t_1t_2·χ^_1+_2· (c^e_1+e_2_1,i· f_1+ c^e_1+e_2_2,i· f_2) + t_1t_3·χ^_1+_3· (c^e_1+e_3_1,i· f_1+ c^e_1+e_3_3,i· f_3)
+ t_2t_3·χ^_2+_3· (c^e_2+e_3_2,i· f_2+ c^e_2+e_3_3,i· f_3)
we also list a formula for the coefficient of t_1t_2t_3 of (α^(2))_ij in Table <ref>.
We thus have explicit formulas for all obstructions of third order, and fourth order obstructions involving only two deformation directions. In theory, we could write down similar formulas for higher order obstructions using thm:obsformula, but they become increasingly large. We note that unlike for the cup product case, the formulas for obstructions of degree larger than two involve not only first order deformations but also higher order perturbation data.
We now start on proving thm:obsformula.
Let ρ_1,…,ρ_m be not-necessarily distinct rays of Σ, and _1,…,_m∈ M. We will need an explicit formula for the iterated Lie bracket
[χ^_m· f_ρ_m χ^_m-1· f_ρ_m-1 ⋯ χ^_1· f_ρ_1],
see <ref> for notation.
prop:iteratedlie
The iterated Lie bracket
[χ^_m· f_ρ_m χ^_m-1· f_ρ_m-1 ⋯ χ^_1· f_ρ_1]
is equal to
χ^_1+⋯+_m·∑_k=1^m(∑_a∈Δ_m,k(a)∏_i≠ kρ_i(_a_i) )· f_ρ_k.
The proof is by induction. The base case m=1 is immediate. For proving the induction step, it is enough to show that
∑_a∈Δ_m-1,k∑_j=1^m-1(a) ∏_i≠ kρ_i(_a_i)ρ_m(_j)= ∑_a∈Δ_m,k(a) ∏_i≠ kρ_i(_a_i)
for k=1,…,m-1 and
∑_k=1^m-1-ρ_k(_m)·∑_a∈Δ_m-1,k(a)∏_i≠ kρ_i(_a_i)· = ∑_a∈Δ_m,m(a) ∏_i≠ mρ_i(_a_i).
To establish this, we use the following straightforward inductive relations on the sets Δ_m,k. For k≤ m-1 the map
π_k: Δ_m,k→Δ_m-1,k
defined by
π_k(a) =(a_1,…,a_m-1)
is (m-1)-to-1 and satisfies (π_k(a))=(a). Additionally, for k≤ m-1 the map π̂_k: Δ_m-1,k→Δ_m,m defined by
π̂_k(a)=(a_1,…,a_k-1,m,a_k+1,…,a_m-1,m)
is injective, (π̂_k(a))=-(a), and we have
Δ_m,m= ⋃̇_k=1^m-1π̂_k ( Δ_m-1,k).
The proofs of (<ref>) and (<ref>) follow directly from these observations.
We have
(α)_ij = ∑_d ≥ 1∑_σ(x,y)∈_d b_σ [σ(α_i,-α_j)].
Expanding the right-hand side, we obtain that the coefficient of t^w is
∑_d≥ 1, ∈∇_w,d, ρ⃗∈Σ(1)^d∑_σ∈_d(σ)· b_σ·∏_k=1^d (c^_k_ρ_k)_σ(i,j)_d-k+1· [χ^φ(_d)· f_ρ_d ⋯ χ^φ(_1)· f_ρ_1].
Applying prop:iteratedlie, we then obtain that this is equal to χ^φ(w)
multiplied with the quantity in the statement of the theorem.
§.§ Removing cones
In <ref>, we discussed how to compute the hull of _Σ using the combinatorial deformation equation. When X_Σ has mild singularities, this approach indeed yields the hull of _X_Σ, see mildsing. Although working with _Σ is less complex than _X_Σ, computations still involve dealing with every maximal cone of Σ and their pairwise intersections. In this section, we will explore methods to streamline the process of determining the hull of _Σ by reducing the number of maximal cones and intersections which we must consider. We will apply these techniques when we compute hull for examples in <ref>.
Throughout this section, Σ is any simplicial fan.
Let
={(ρ,)∈Σ(1)× M | H^0(V_ρ,,)≠0}.
We let Γ⊆Σ(1)× M be the smallest set containing that satisfies the following:
* If (ρ,),(ρ',')∈Γ, ρ≠ρ', and ρ'()≠ 0 then (ρ,+')∈Γ;
* If (ρ,),(ρ,')∈Γ and ρ(')≠ρ() then (ρ,+')∈Γ.
We then define _Γ= ⊕_(ρ, )∈Γ(D_ρ)_.
It is straightforward to verify that the subsheaf _Γ is stable under the Lie bracket (Definition <ref>) on
⊕_ρ∈Σ(1)(D_ρ)= ⊕_ρ∈Σ(1), ∈ M(D_ρ)_.
Let Σ' be a subfan of Σ with Σ_max'⊆Σ_max. We say that Σ' covers Γ if for every (ρ,)∈Γ, V_ρ,⊆ |Σ'|. In this case, we obtain a cover '_ρ, of V_ρ, by Σ_max'.
Fix the lattice N=^3. We consider a fan Σ with six rays, where the generator of the ith ray ρ_i is given by the ith column of the following matrix:
[ 1 0 -1 0 0 0; 0 1 e -1 0 0; 0 0 a b 1 -1 ].
We assume that e,b ≥0 (the reason for this assumption will be explained in <ref>).
Rays belong to a common cone of Σ if the corresponding set of vertices in Figure fig:cones belong to the same simplex, with the ray ρ_6 as a vertex at infinity.
Suppose that we know that the ray-degree pairs (ρ,)∈ are of the form
( ρ_2, (*,*,0) ) or (ρ_5, (*,*,-1) ) or (ρ_6, (*,*,1) ).
We will establish this in lemma:SigmaD.
It is then straightforward to verify that any element of Γ not in must be of the form
( ρ_5, (*,*,-1) ) or (ρ_6, (*,*,1) ) or (ρ_5, (*,*,0) ) or (ρ_6, (*,*,0) ).
Given the above assumption on , we claim that the fan Σ' with maximal cones
σ_1 = (ρ_1,ρ_4,ρ_5), σ_2= (ρ_1,ρ_2,ρ_5),
σ_3 = (ρ_2,ρ_3,ρ_5), σ_4= (ρ_3,ρ_4,ρ_5)
covers Γ. Indeed, it is straightforward to see that for (ρ,)∈Γ, n_ρ_6∉ V_ρ, and the claim follows.
Suppose that Σ' covers Γ.
We define the functor
_Σ',Γ(A) = {α∈⊕_(ρ, )∈ΓČ^0('_ρ,,)⊗_A : (α)=0 } /∼
where α=β in _Σ',Γ(A) if and only if there exists
γ∈⊕_(ρ,)∈ΓČ^0(',(D_ρ))_⊗_A
such that
ι(γ)⊙(s(α)) = (s(β)).
This functor is defined on morphisms in the obvious way.
thm:samehull
Let Σ be a simplicial fan and suppose that Σ' covers Γ.
Then there are smooth maps __Γ→_Σ and __Γ→_Σ' that induce isomorphisms on tangent spaces. In particular, if ^1 _Σ is finite dimensional, then _Σ and _Σ',Γ have the same hull.
We have the natural map of functors f_1: __Γ,→_⊕(D_ρ) induced from the injection _Γ→⊕(D_ρ) and the cover . Consider the open cover '={U_σ}_σ∈Σ_max' of X_Σ'⊆ X_Σ. Similar to the construction of the deformation functor __Γ=__Γ, using the Čech complex Č^∙(,_Γ), we can define the deformation functor __Γ,' using the Čech complex Č^∙(',_Γ). The natural map on Čech complexes induces a morphism of functors f_2:__Γ,→__Γ,'.
To prove the theorem, we will show that f_1 and f_2 are smooth maps that induce isomorphisms on tangent spaces, and that there are isomorphisms g_1:_⊕(D_ρ)→_Σ and g_2:__Γ,'→_Σ',Γ. The compositions g_1∘ f_1 and g_2∘ f_2 then give the desired smooth maps __Γ,→_Σ and __Γ,→_Σ',Γ.
First, consider the map f_1. By construction
^1 __Γ= ⊕_(ρ,)∈Γ H^1(,(D_ρ))_=⊕_(ρ,) ∈Σ(1)× M H^1(,(D_ρ))_= ^1 _⊕(D_ρ)
and
⊕_(ρ,)∈Γ H^2(,(D_ρ))_→⊕_(ρ,) ∈Σ(1)× M H^2(,(D_ρ))_
is an inclusion. Thus, by standardsmooth the map f_1 is smooth.
Now consider f_2. The maps between the tangent and obstruction spaces are isomorphisms since Σ' covers Γ, so f_2 is also smooth by standardsmooth.
We know that _Σ is the homotopy fiber analogue of _⊕(D_ρ). Likewise, a straightforward adaptation of exactseqlocal implies that _Σ',Γ is the homotopy fiber analogue of __Γ,'. The isomorphisms g_1 and g_2 thus follow from compa2, prop:isosecfunctor, and rem:H1vanishing.
From the above theorem, we can reduce the number of maximal cones that need to be considered. In the combinatorial deformation equation, we also need to compute (α)_στ for every pair σ,τ of maximal cones. The following cocycle property and prop:closure reduce the number of cases that need to be calculated. Let ⋀^2 Σ_max be the set consisting of size two subsets of Σ_max.
We say a set ⊆⋀^2 Σ_max has the cocycle property if for {σ,κ}, {τ,κ}∈ with σ∩τ⊆κ, it follows that {σ,τ}∈. For ⊆⋀^2 Σ_max, we define to be the smallest set containing that has the cocycle property.
prop:closure
Let Σ be a simplicial fan, and let ⊆⋀^2 Σ_max be such that = ⋀^2 Σ_max. Then a cocycle ω∈Ž^1(_ρ,,) is determined by {ω_στ | {σ,τ}∈}.
Let _0= and define
_i+1=_i∪{{σ,τ} | {τ,κ},{σ,κ}∈_i with σ∩τ⊆κ}.
Since ⋀^2 Σ_max is finite, after finitely many steps we obtain _m=. We will prove by induction that for every {σ,τ}∈_i, ω_στ is determined by
{ω_σ'τ' | {σ',τ'}∈}.
Clearly, for i=0, this is true. Suppose that {σ,τ}∈_i for some i≥ 1. If {σ,τ}∈_i-1 the statement is true by induction. Otherwise, there exist {σ,κ},{τ,κ}∈_i-1 with σ∩τ⊆κ. If V_ρ,∩σ∩τ=∅, then ω_στ=0. Thus, we may assume that V_ρ,∩σ∩τ≠∅; since σ∩τ⊆κ, this gives V_ρ,∩σ∩τ∩κ=V_ρ,∩σ∩τ≠∅. Since ω∈Ž^1(_ρ,,), we then have ω_στ=-ω_τκ+ω_σκ. By the induction hypothesis, ω_τκ and ω_σκ are already determined. Thus, the statement is true by induction.
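The closure can be computed by exactly the iteration used in this proof. The Python sketch below is illustrative: the predicate contained_in, which decides whether σ∩τ⊆κ, is assumed to be supplied from the explicit cone data, and the usage example runs the iteration on four simplicial cones arranged in a cycle around a common ray, as for the cones σ_1,…,σ_4 considered above.

def cocycle_closure(D, contained_in):
    """Smallest set containing D with the cocycle property:
    {sigma,kappa}, {tau,kappa} in the set and sigma ∩ tau ⊆ kappa imply {sigma,tau} in it.
    contained_in(sigma, tau, kappa) must decide whether sigma ∩ tau ⊆ kappa."""
    closure = {frozenset(p) for p in D}
    changed = True
    while changed:
        changed = False
        for p in list(closure):
            for q in list(closure):
                common = p & q
                if len(common) != 1:
                    continue
                (kappa,), (sigma,), (tau,) = common, p - common, q - common
                new = frozenset({sigma, tau})
                if new not in closure and contained_in(sigma, tau, kappa):
                    closure.add(new)
                    changed = True
    return closure

# four simplicial cones sharing the ray labelled 5, arranged in a cycle
cones = {1: {1, 4, 5}, 2: {1, 2, 5}, 3: {2, 3, 5}, 4: {3, 4, 5}}   # cone -> set of ray labels
contained = lambda s, t, k: (cones[s] & cones[t]) <= cones[k]
D = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(sorted(tuple(sorted(p)) for p in cocycle_closure(D, contained)))   # all six pairs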
In situations where Σ has reasonable geometry, there is a canonical choice of . For a cone τ∈Σ, define
(τ,Σ)={σ∈Σ | τ⊆σ}.
We say that (τ,Σ) is connected in codimension one if for any two maximal cones σ,σ'∈(τ,Σ), there is a sequence of maximal cones τ_1=σ,τ_2,…,τ_k=σ' such that τ_i∈(τ,Σ) and τ_i,τ_i+1 intersect in a common facet.
prop:codimone
Let Σ be a simplicial fan and let ⊆⋀^2Σ_max consist of those pairs of cones that intersect in a common facet. Suppose that for every τ∈Σ, (τ,Σ) is connected in codimension one. Then = ⋀^2 Σ_max. In particular, a cocycle ω∈Ž^1(_ρ,,) is determined by {ω_στ | {σ,τ}∈}.
We will show that = ⋀^2 Σ_max; the second claim then follows from prop:closure. More specifically, we will show that for {σ,σ'}∈⋀^2 Σ_max, {σ,σ'}∈. We will induct on the dimension of τ=σ∩σ'. If σ,σ' intersect in a common facet, then we are done by definition of .
Otherwise, suppose that we have shown that all pairs intersecting in a face of dimension larger than that of τ belong to .
Fixing τ, we now show that any pair of maximal cones σ,σ' from (τ,Σ) belongs to . For this, we induct on the length of a sequence τ_1=σ,τ_2,…,τ_k=σ' in (τ,Σ) connecting σ,σ' in codimension one. If k=2, then again we are done by the definition of . For k>2, by induction we have that {σ,τ_2} and {τ_2,σ'} belong to . Since σ∩σ'=τ⊆τ_2, it follows that {σ,σ'}∈. The claim now follows by induction.
It is straightforward to verify that if a simplicial fan Σ is combinatorially equivalent to a fan with convex support, it satisfies the hypotheses of prop:codimone.
It follows from thm:samehull and prop:closure that in order to compute the hull of _X_Σ, we can use the functor _Σ',Γ for some Σ' covering Γ, and, when computing obstructions, choose ⊆⋀^2 Σ'_max such that =⋀^2 Σ'_max. We will apply these constructions in several examples in <ref>.
Let Σ' be the fan from Example <ref>.
Define the set
={{σ_1,σ_2}, {σ_2,σ_3}, {σ_3,σ_4}, {σ_4,σ_1}}.
We have {σ_i,σ_i+1}, {σ_i+1,σ_i+2}∈ with
σ_i∩σ_i+2⊆σ_i+1 for i=1,2. Thus, {σ_1,σ_3} and {σ_2,σ_4} are in .
It follows that = ⋀^2 Σ'_max.
In fact, is the set of all pairs of maximal cones intersecting in a common facet. Since
Σ' has the property that (σ,Σ') is connected in codimension one for all σ, prop:codimone guarantees that
= ⋀^2 Σ'_max.
§.§ Unobstructedness
In this section, we will
provide a sufficient criterion for a toric variety to have unobstructed deformations.
thm:unobstructed
Let X=X_Σ be a complete toric variety that is smooth in codimension 2 and -factorial in codimension 3. Let
={(ρ,)∈Σ(1)× M | H^0(V_ρ,,)≠0}
and let Γ be as in Definition <ref>.
If H^1(V_ρ,,)=0 for all pairs (ρ,)∈Γ satisfying ρ()=-1, then X_Σ is unobstructed.
In particular, setting
:= { (ρ,+)∈Σ(1)× M |
(ρ,)∈𝒜; ∈∑_(ρ',')∈𝒜_≥0·'
},
if H^1(V_ρ,,)=0 for all pairs (ρ,)∈ satisfying ρ()=-1, then X_Σ is unobstructed.
By mildsing, after possibly replacing Σ by any simplicial subfan of Σ containing all three-dimensional cones, we may assume that
_X≅_Σ.
By thm:samehull, the functors _Σ and _Γ,Σ have the same hulls. But an obstruction space for _Γ,Σ is given by
⊕_(ρ,)∈Γ H^1(V_ρ,,),
see showdeffun2. Moreover, by cor:vanishdegree, we know that for any (ρ,)∈Γ, ρ()≠-1 implies H^1(V_ρ,u,)=0 and the first claim of the theorem follows.
To show the second claim, observe that contains Γ.
To illustrate the significance of the above theorem, we provide an example of an unobstructed toric threefold whose unobstructedness does not follow by degree reasons alone.
We consider the smooth toric threefold from Example <ref>.
The set from thm:unobstructed consists of exactly
(ρ_3,(1,0,0)),(ρ_11,(0,0,1)),(ρ_10,(-1,0,-1)),(ρ_10,(-1,0,0)).
As noted earlier, we also have H^1(V_ρ,,)=0 except for (ρ,)=(ρ_0,(0,0,-1)).
There are infinitely many positive integer combinations of degrees
(1,0,0),(0,0,1),(-1,0,-1),(-1,0,0)
that sum to (0,0,-1). For example,
(0,0,-1)=(-1,0,-1)+(1,0,0).
Hence, we cannot conclude by degree reasons alone that X_Σ is unobstructed. However, since ρ_0 does not appear in any element of , (ρ_0,(0,0,-1)) is not in and we conclude that X_Σ is in fact unobstructed.
§ EXAMPLES
§.§ Primitive collections and rigidity
Throughout <ref>, we will assume that Σ is a smooth complete fan. The data of a fan can be provided by specifying the ray generators and listing the maximal cones. Instead of specifying the maximal cones we can describe the fan using the notion of primitive collections:
A subset ⊆Σ(1) is a primitive collection if the elements of do not belong to a common cone in Σ, but the elements of every proper subset of do.
Let ={ρ_1,…,ρ_k} be a primitive collection and let σ∈Σ be the unique cone such that n_ρ_1+⋯+n_ρ_k lies in the relative interior of σ. Then there is a relation
n_ρ_1+⋯+n_ρ_k= ∑_ρ∈σ∩Σ(1) c_ρn_ρ
where c_ρ>0 for all ρ∈σ∩Σ(1). This is the primitive relation associated to , see <cit.>. The degree of is the integer
()= k-∑ c_ρ.
lemma:rigid
Let Σ be a smooth complete fan. If ()>0 for every primitive collection of cardinality 2, then X_Σ is rigid.
We will show that H^1(X_Σ,_X_Σ)=0 in this case. According to Proposition <ref>, we have
H^1(X_Σ,_X_Σ)≅⊕_ρ∈Σ(1), ∈ M, ρ()=-1 H^0(V_ρ,,).
For any ρ∈Σ(1) and ∈ M, H^0(V_ρ,,)≠0 only when the simplicial complex V_ρ, is disconnected. Hence, it is enough to show that V_ρ, is connected for all ρ∈Σ(1) and ∈ M.
Assuming that V_ρ,≠∅, let ρ_min∈Σ(1) be any ray not equal to ρ such that ρ_min() is minimal. Then for any ρ'≠ρ with ρ'()<0, we claim that ρ' shares a cone of Σ with ρ_min. Indeed, if not, ={ρ_min,ρ'} forms a primitive collection, and since
()>0 we have only two possibilities:
n_ρ_min+n_ρ'=0; n_ρ_min+n_ρ'= n_ρ”.
The first case is impossible since both ρ_min() and ρ'() are less than zero. The second case is impossible since it would follow that ρ”()<ρ_min(), contradicting the choice of ρ_min. This implies the claim, and the connectedness of V_ρ, follows.
A smooth toric variety X_Σ is Fano (respectively weak Fano), if and only if ()>0 (respectively ()≥0) for every primitive collection (see <cit.>). As a corollary, we obtain the well-known result that every smooth toric Fano variety is rigid (see <cit.>).
We also obtain the rigidity for smooth toric weak Fano varieties with no degree zero primitive collections of cardinality 2 (cf. <cit.> for a similar result).
It is well-known that ^n is the only smooth complete toric variety with Picard rank 1. Since it is Fano, it is rigid by the previous remark. Thus we may focus on smooth complete toric varieties with higher Picard rank.
§.§ Picard rank two
In this section, we will prove that every smooth complete toric variety with Picard rank 2 is unobstructed. Any n-dimensional toric variety X of this type can be expressed as
X ≅(_^s⊕_^s(a_1)⊕⋯⊕_^s(a_r)),
for r,s≥ 1, r+s=n and 0≤ a_1≤⋯≤ a_r, see <cit.>, <cit.>.
A fan Σ with X=X_Σ may be described as follows, see also <cit.>. Fix the lattice N=^n, and consider the ray generators given by the columns of the following matrix:
A= [ I_s 0 0 -1; 0 I_r -1 a ],
where the columns of A are, from left to right, the ray generators n_ρ_1,…,n_ρ_s, n_ρ_s+1,…,n_ρ_n, n_ρ_n+1, n_ρ_n+2; here 0 and -1 denote blocks of zeros and columns with every entry equal to -1, respectively, and a denotes the column vector (a_1,…,a_r)^T.
The primitive collections for Σ are {ρ_1,…,ρ_s,ρ_n+2} and {ρ_s+1,…,ρ_n,ρ_n+1}.
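The following Python sketch builds the columns of A as described above (the explicit column description in the docstring is our reading of the matrix, and the parameter values in the usage are an arbitrary sample) and checks the linear relations satisfied by the rays in each of the two primitive collections.

def ray_matrix(s, a):
    """Columns rho_1,...,rho_{n+2}, where n = s + len(a): rho_1,...,rho_n are the standard
    basis vectors, rho_{n+1} = -(e_{s+1}+...+e_n), and
    rho_{n+2} = -(e_1+...+e_s) + a_1 e_{s+1} + ... + a_r e_n."""
    r = len(a)
    n = r + s
    cols = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
    cols.append(tuple(0 if j < s else -1 for j in range(n)))            # rho_{n+1}
    cols.append(tuple(-1 if j < s else a[j - s] for j in range(n)))     # rho_{n+2}
    return cols

def add(*vs):
    return tuple(sum(c) for c in zip(*vs))

s, a = 1, [0, 2, 3]                      # P(O + O(0) + O(2) + O(3)) over P^1
cols = ray_matrix(s, a)
n = s + len(a)
# the rays of {rho_1,...,rho_s,rho_{n+2}} sum to a_1 n_{rho_{s+1}} + ... + a_r n_{rho_n}
lhs = add(*[cols[i] for i in range(s)], cols[n + 1])
rhs = add(*[tuple(a[j] * x for x in cols[s + j]) for j in range(len(a))])
assert lhs == rhs
# the rays of {rho_{s+1},...,rho_n,rho_{n+1}} sum to zero
assert add(*[cols[i] for i in range(s, n)], cols[n]) == tuple([0] * n)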
thm:ranktwo
Let
X= (_^s⊕_^s(a_1)⊕⋯⊕_^s(a_r)).
Then:
* X is rigid if and only if s>1, or s=1 and a_r≤1. If s=1, then
_ H^1(X,_X)= ∑_j=1^rmax{a_j-1,0};
* X is unobstructed.
We will make use of the isomorphism
H^k(X,_X)≅⊕_ρ∈Σ(1), ∈ M, ρ()=-1 H^k-1(V_ρ,,),
for k=1,2 (see Proposition <ref>).
We will first show claim <ref>.
Suppose that H^1(X,_X)≠0. In this case, there must exist a ray ρ∈Σ(1) with ρ(𝐮)=-1 and a primitive collection {ρ',ρ”} with exactly two rays which are distinct from ρ and satisfy ρ'()<0, ρ”()<0. Consequently, either r or s must equal one. However, if r=1, then ρ'()=-ρ”(). Hence, it must be the case that s=1 and the rigidity for s>1 follows.
Assume that s=1. From the primitive collections, it follows that the simplicial complex V_ρ, is disconnected only when it consists of the two vertices n_ρ_1 and n_ρ_n+2. This occurs if and only if the following conditions on the ray-degree pairings are satisfied: there exists j ∈{1,…,r+1} such that ρ_s+j()=-1 and
ρ_1()<0, ρ_n+2()<0, ρ_s+i()≥ 0 for all i ∈{1,…,r+1}∖{j}.
If a_r≥ 2, then for ρ=ρ_s+r=ρ_n and =(-1,0,…,0,-1), the above set of conditions is satisfied, and thus we obtain V_ρ, is disconnected. In fact, all degree-ray pairs (ρ,) satisfying the above conditions are given by choosing ρ=ρ_s+j for j∈{1,…,r} and setting
=d· e_1-e_s+j
for d satisfying 1-a_j≤ d ≤ -1.
Thus, we obtain
⊕_∈ MH^0(V_ρ_s+j,,)=max{a_j-1,0},
for j∈{1,…,r}, and claim <ref> follows.
To prove claim <ref>, we show that if X is not rigid, then H^2(X,_X)=0. Hence, we may again assume that s=1. It suffices to show that for all ray-degree pairs (ρ,), every connected component of V_ρ, is contractible to a point.
Given that
n_ρ_s+1+…+n_ρ_n+1=0,
for every ∈ M there exists at least one j∈{1,…,r+1} such that ρ_s+j()≥ 0. Let σ be the cone in Σ generated by {ρ_s+1, …, ρ_n+1}∖{ρ_s+j}. Then V_ρ, is the join of the simplicial complexes
( V_ρ,∩{n_ρ_1,n_ρ_n+2} ) and (V_ρ,∩σ).
Since V_ρ,∩σ is either empty or a simplex, it follows that every connected component of V_ρ, is contractible to a point. This completes the proof of claim <ref>.
§.§ Split P1-bundles over Hirzebruch surfaces
As shown in <cit.>, there exists a ^1-bundle over the second Hirzebruch surface _2 that exhibits quadratic obstructions. In fact, Picard rank three toric threefolds represent the minimal cases in terms of both dimension and Picard rank where obstructions can occur. This is because smooth complete toric varieties of dimension at most 2 (<cit.>) and those with Picard rank at most 2 (thm:ranktwo) are unobstructed.
In this section and the next, we examine toric threefolds that are ^1-bundles over the Hirzebruch surface
_e=(_^1⊕_^1(e)).
Every toric threefold of Picard rank 3 is either of this form, or the blowup of a toric threefold of Picard rank 2 in a point or a ^1 (see <cit.>, <cit.>).
Any toric ^1-bundle over _e can be expressed as
X≅(𝒪_𝔽_e⊕𝒪_𝔽_e(aF+bH)),
where _e=(_^1⊕_^1(e)) is the eth Hirzebruch surface, and F and H respectively represent the classes in (_e) of the fiber and _𝔽_e(1) in the ^1-bundle fibration of _e over ^1.
In other words,
(𝔽_e)= ℤF⊕ℤH with F^2=0, F· H=1, H^2=e.
Since we are considering X up to isomorphism, we can take e,b≥0, see e.g. <cit.>. Under this assumption, the fan Σ from Example <ref> describes X, that is, X=X_Σ. The primitive collections are given by {ρ_1,ρ_3}, {ρ_2,ρ_4}, and {ρ_5,ρ_6}. From this, we obtain the primitive relations:
n_ρ_5+ n_ρ_6 =0;
n_ρ_2+ n_ρ_4 = b· n_ρ_5;
n_ρ_1+n_ρ_3 =
e · n_ρ_2 + a· n_ρ_5 if a≥ 0
e · n_ρ_2 + (-a)· n_ρ_6 if a< 0.
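These relations can be verified mechanically from the ray matrix of Example <ref>; the following Python sketch (illustrative only, run on an arbitrary sample of parameter values) does so.

def rays(e, a, b):
    """Columns rho_1,...,rho_6 of the ray matrix for this fan."""
    return {1: (1, 0, 0), 2: (0, 1, 0), 3: (-1, e, a),
            4: (0, -1, b), 5: (0, 0, 1), 6: (0, 0, -1)}

def add(*vs):
    return tuple(sum(c) for c in zip(*vs))

def scale(c, v):
    return tuple(c * x for x in v)

for (e, a, b) in [(1, -2, 5), (2, -3, 4), (3, -4, 3), (2, 1, 0)]:
    n = rays(e, a, b)
    assert add(n[5], n[6]) == (0, 0, 0)
    assert add(n[2], n[4]) == scale(b, n[5])
    if a >= 0:
        assert add(n[1], n[3]) == add(scale(e, n[2]), scale(a, n[5]))
    else:
        assert add(n[1], n[3]) == add(scale(e, n[2]), scale(-a, n[6]))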
As a first step for computing the hull for several examples of ^1-bundles over _e, we will describe H^1(X,_X) and H^2(X,_X) in the following two lemmas.
lemma:H2deg
Let X= (𝒪_𝔽_e⊕𝒪_𝔽_e(aF+bH)) with e,b≥0. Then
H^2(X,_X)≅⊕_ρ∈Σ(1), ∈ M, ρ()=-1 H^1(V_ρ,,) is non-zero
if and only if b≥2 and (b-1)e+a≥ 2. Moreover, H^1(V_ρ,,)≠0 if and only if ρ=ρ_5 and =(x,y,-1) satisfies
-b+1≤ y ≤ -1, ey-a+1≤ x ≤ -1.
Clearly, when V_ρ, is the simple cycle formed by the vertices ρ_1,ρ_2,ρ_3,ρ_4, we have H^1(V_ρ,,)≠0.
We claim that for any other simplicial complex of the form V_ρ,, its connected components are contractible to a point. Indeed, any such simplicial complex V_ρ, can be expressed as the join of the simplicial complexes:
V_1= V_ρ,∩{n_ρ_1,n_ρ_3}, V_2=V_ρ,∩{n_ρ_2,n_ρ_4}, V_3=V_ρ,∩{n_ρ_5,n_ρ_6}.
From (<ref>), V_3 is either empty or one of the vertices. In the latter case, V_ρ, is contractible to that vertex. Therefore, we may restrict to the case V_ρ, is the join of the simplicial complexes V_1 and V_2. In that case, the connected components of V_ρ, are contractible unless V_1 and V_2 each consist of two vertices.
This scenario occurs if and only if the following conditions on ray-degree pairing are satisfied: either ρ_5()=-1 or ρ_6()=-1, and
ρ_i()<0 for all i=1,2,3,4.
There is no satisfying the second case (this is immediate from primitive relations (<ref>)). Since ρ_5()=-1, we can represent =(x,y,-1). Under this representation, the inequalities can be expressed as:
ey-a+1 ≤ x ≤ -1
-b+1 ≤ y ≤ -1.
This system of inequalities has an integral solution (namely (-1,-b+1,-1)) if and only if b≥ 2 and (b-1)e+a≥ 2.
lemma:H1deg
Let X≅(𝒪_𝔽_e⊕𝒪_𝔽_e(aF+bH)) with e,b≥0. Then
H^0(V_ρ,,) ≠0
in exactly the following cases:
* The pair (ρ,) takes the form (ρ_2,(x,-1,0)), where x satisfies
-e+1≤ x ≤ -1.
Type 1 occurs if and only if e≥ 2;
* The pair (ρ,) takes the form (ρ_6,(x,y,1)), where x,y satisfies
0≤ y≤ b, ey+a+1 ≤ x ≤ -1.
Type 2 occurs if and only if a≤ -2;
* The pair (ρ,) takes the form (ρ_5,(x,y,-1)), where x,y satisfies
-b+1≤ y≤ -1, 0 ≤ x ≤ ey-a.
Type 3 occurs if and only if e+a≤0 and b≥ 2;
* The pair (ρ,) takes the form (ρ_5,(x,0,-1)), where x satisfies
b=0, -a+1 ≤ x ≤ -1.
Type 4 occurs if and only if
a≥ 2 and b=0.
Any simplicial complex V_ρ, can be expressed as the join of the simplicial complexes:
V_1= V_ρ,∩{n_ρ_1,n_ρ_3}, V_2=V_ρ,∩{n_ρ_2,n_ρ_4}, V_3=V_ρ,∩{n_ρ_5,n_ρ_6}.
If at least two of V_1, V_2, and V_3 are non-empty, then V_ρ, is connected. According to (<ref>), V_3 is either empty or consists of a single vertex. Hence, V_ρ, can only have more than one connected component when it consists of exactly two vertices, either {n_ρ_1, n_ρ_3} or {n_ρ_2, n_ρ_4}. There are four distinct scenarios in which this occurs:
Type 1:
ρ_2()=-1, ρ_1()<0, ρ_3()<0, ρ_4()≥ 0, ρ_5()≥0 , ρ_6()≥ 0.
Thus, we can represent =(x,-1,0), and the above
inequalities lead to
-e+1≤ x ≤ -1.
It is immediate to see that Type 1 occurs if and only if e≥ 2.
Type 2:
ρ_6()=-1, ρ_1()<0, ρ_3()<0, ρ_2()≥ 0, ρ_4()≥0 .
Thus, we can represent =(x,y,1), and the above inequalities lead to
0≤ y≤ b, ey+a+1 ≤ x ≤ -1.
If Type 2 occurs, then =(-1,0,1) is always included, which requires a≤ -2.
Type 3:
ρ_5()=-1, ρ_2()<0, ρ_4()<0, ρ_1()≥ 0, ρ_3()≥0 .
Thus, we can represent =(x,y,-1), and the above inequalities lead to
-b+1≤ y≤ -1, 0 ≤ x ≤ ey-a.
If Type 3 occurs, then =(0,-1,-1) is always included, which requires e+a≤0.
Type 4:
ρ_5()=-1, ρ_1()<0, ρ_3()<0, ρ_2()≥ 0, ρ_4()≥0 .
Thus, we can represent =(x,0,-1), and the above inequalities lead to
b=0, -a+1 ≤ x ≤ -1.
It is immediate to see that Type 4 occurs if and only if a≥ 2 and b=0.
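The inequalities in lemma:H1deg and lemma:H2deg translate directly into an enumeration of ray-degree pairs. The Python sketch below is written from these inequalities (the function names and output format are ours, not taken from the ancillary files); it lists the pairs of each type together with the degrees where H^1(V_ρ,,)≠0, and reproduces, for instance, the degrees used for (e,a,b)=(1,-2,5) later on.

def h1_degrees(e, a, b):
    """Ray-degree pairs (rho_i, u) of Types 1-4, i.e. with H^0(V_{rho,u}) nonzero."""
    degs = []
    degs += [(2, (x, -1, 0)) for x in range(-e + 1, 0)]                       # Type 1
    degs += [(6, (x, y, 1)) for y in range(0, b + 1)
                            for x in range(e * y + a + 1, 0)]                 # Type 2
    degs += [(5, (x, y, -1)) for y in range(-b + 1, 0)
                             for x in range(0, e * y - a + 1)]                # Type 3
    if b == 0:
        degs += [(5, (x, 0, -1)) for x in range(-a + 1, 0)]                   # Type 4
    return degs

def h2_degrees(e, a, b):
    """Degrees u with H^1(V_{rho_5,u}) nonzero, per lemma:H2deg."""
    return [(5, (x, y, -1)) for y in range(-b + 1, 0)
                            for x in range(e * y - a + 1, 0)]

print(h1_degrees(1, -2, 5))   # the four degrees u_1,...,u_4 used for (e,a,b)=(1,-2,5)
print(h2_degrees(1, -2, 5))   # the single degree (-1,-4,-1)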
Using lemma:H2deg and lemma:H1deg, we will apply the unobstructedness result (thm:unobstructed) based on the ray-degree pairs.
lemma:possibleobs
Let X≅(𝒪_𝔽_e⊕𝒪_𝔽_e(aF+bH)) with e, b ≥ 0. Then X is unobstructed, except possibly in the following cases:
* e=1, a≤ -2 and b≥ 3-a;
* e≥ 2, a≤ -e and b ≥ 1+(2-a)/e.
Suppose first that e=0. By lemma:H2deg H^2(X,_X)≠0 if and only if a≥ 2 and b≥ 2. In this case, lemma:H1deg implies that H^1(X,_X)=0. Therefore, when e=0, X is always unobstructed.
Now assume that e=1. Then by lemma:H2deg H^2(X,_X)≠0 if and only if
b ≥max{2, 3-a }.
In this case, Type 1 and Type 4 of lemma:H1deg do not occur.
By thm:unobstructed, to have obstructions, we must have Type 2 and Type 3 elements of lemma:H1deg. Hence, we have a≤-2 and combining this with (<ref>) yields <ref>.
Now assume that e≥ 2. By lemma:H2deg H^2(X,_X)≠0 if and only if
b ≥ max{2, 1+(2-a)/e}.
In this case, Type 4 of lemma:H1deg does not occur and Type 1 does occur. By thm:unobstructed, Type 3 must occur.
However, the occurrence of both Type 1 and Type 3 together implies Type 2 also occurs. Hence, we have a≤-e; combining this with (<ref>) yields <ref>.
For the cases outlined in lemma:possibleobs, we will review the inequalities that define the sets of with H^1(X,_X)_≠0 and H^2(X,_X)_≠0, as some of the inequalities are redundant.
Case: e=1, a≤ -2, and b≥ 3-a.
In Figure fig:type1, we depict the projections onto the xy-coordinates of the degrees where H^1(X,_X)_≠0 (split into types 2 and 3) and H^2(X,_X)_≠0.
In this situation, the degrees with H^2(X,_X)_≠0 satisfy the inequalities
y≥ -b+1, x≤ -1, x≥ y-a+1.
This defines a triangular region, which degenerates to a single point if b=3-a.
Type 1 and Type 4 of H^1(X,_X)_ do not occur. The region for Type 2 is defined by the inequalities
y≥ 0, x≤ -1, x≥ y+a+1
resulting in a triangular region, which degenerates to a single point if a=-2. The region for Type 3 is defined by the inequalities
y≤ -1, x≥ 0, x≤ y-a
resulting in a triangular region.
We always have the following relation among the degrees:
(-1,a-2,-1)=(0,a,-1)+(0,-2,-1)+(-1,0,1).
For the degree on the left hand side, H^2(X,_X) is non-zero, while H^1(X,_X) is non-vanishing for each of the degrees on the right hand side.
Case: e≥2, a≤ -e, and b≥ 1+(2-a)/e.
In Figure fig:type2, we depict the projections onto the xy-coordinates of the degrees where H^1(X,_X)_≠0 (split into types 1, 2, and 3) and H^2(X,_X)_≠0.
In this situation, the degrees with H^2(X,_X)_≠0 satisfy the inequalities
y≥ -b+1, x≤ -1, x≥ ey-a+1.
If (a-2)/e is an integer, the convex hull of these degrees is a triangle, which degenerates to a single point when b= 1+(2-a)/e. If (a-2)/e is not an integer, we define
η=⌊(a-2)/e⌋,
and obtain a trapezoidal region (which degenerates to a line when η=-b+1) by eliminating the top portion of the triangle where there are no lattice points.
Type 4 of H^1(X,_X)_ does not occur.
The region for Type 1 is defined by the (in)equalities
y=-1, x≤ -1, x≥ -e+1,
and forms a line. The region for Type 2 is defined by the inequalities
y≥ 0, x≤ -1, x≥ ey+a+1.
If (-a-2)/e is an integer, we obtain a triangle, which degenerates to a single point when a=-2. If (-a-2)/e is not an integer, we define
μ= ⌊(-a-2)/e⌋
and obtain a trapezoidal region (which degenerates to a line when μ=0) by eliminating the top portion of the triangle where there are no lattice points.
The region for Type 3 is defined by the inequalities
y≤ -1, x≥ 0, x≤ ey-a.
If a/e is an integer, we obtain a triangle, which degenerates to a single point when a=-e. If a/e is not an integer, we define
ξ=⌈a/e⌉
and obtain a trapezoidal region (which degenerates to a line when ξ=-1) by eliminating the bottom portion of the triangle where there are no lattice points.
If a≢ 1 (mod e), then ξ-η=1, and we have the following relation:
(-1,η,-1)=(0,ξ,-1) + (-1,-1,0).
If a≡ 1 (mod e), then ξ-η=2, and we have the following relation:
(-1,η,-1)= (1,ξ,-1) + 2(-1,-1,0).
In both cases, for the degree on the left hand side, H^2(X,_X) is non-zero, while H^1(X,_X) is non-vanishing for each of the degrees on the right hand side.
§.§ Split P1-bundles: obstruction computations
Our next goal is to compute the hull for several examples. Instead of directly working with _Σ, we will apply the results from <ref>. Specifically, we will identify Σ' and Γ such that _Σ',Γ has the same hull as _Σ. Additionally, we will determine a minimal such that = ⋀^2 Σ'.
lemma:SigmaD
Let X_Σ≅(𝒪_𝔽_e⊕𝒪_𝔽_e(aF+bH)) with e,b≥0. Then the fan Σ' with the maximal cones
σ_1 = (ρ_1,ρ_4,ρ_5), σ_2= (ρ_1,ρ_2,ρ_5),
σ_3 = (ρ_2,ρ_3,ρ_5), σ_4= (ρ_3,ρ_4,ρ_5)
covers Γ. Moreover, the set
={{σ_1,σ_2}, {σ_2,σ_3}, {σ_3,σ_4}, {σ_4,σ_1}}
satisfies = ⋀^2 Σ'_max.
From lemma:H1deg, we observe that (ρ,)∈ must be of the form
( ρ_2, (*,*,0) ) or (ρ_5, (*,*,-1) ) or (ρ_6, (*,*,1) ).
The proof then follows from Example <ref> and Example <ref>.
For notational simplicity, we will index cochains by the numbers 1,2,3,4 instead of the cones σ_1,…,σ_4, e.g. for 1≤ i,j≤ 4 we write α_i instead of α_σ_i and ω_ij instead of ω_σ_iσ_j.
Recall that in <ref>, we needed to make several choices while computing the hull of _Σ (or similarly _Σ',Γ).
In the examples below, we will make these choices as follows:
* We always use the graded lexicographic local monomial order.
* For constructing the map ψ, we choose the ordering of cones in Σ' as σ_1<σ_2<σ_3<σ_4.
* For Type 1 and Type 2 first order deformations, V_ρ, is given by two vertices n_ρ_1 and n_ρ_3. When choosing a basis of ^1 _Σ',Γ, we will always take the connected component n_ρ_3.
* For Type 3 first order deformations, V_ρ, is given by two vertices n_ρ_2 and n_ρ_4.
When choosing a basis of ^1 _Σ',Γ, we will always take the connected component n_ρ_2.
* For ray-pairs such that H^1(V_ρ,,)≠0, we know V_ρ, is the simple cycle with vertices n_ρ_1,n_ρ_2, n_ρ_3,n_ρ_4 ordered cyclically.
In these cases, we choose the cocycle
ω∈Ž^1('_ρ,,) by setting
ω_34=-1, ω_12=ω_23=ω_41=0.
Then ψ(ω)_i=0 for i=1,…,4, and thus
d(ψ(ω))=0 as required.
With the above choices, it follows that α_1^(r) is always zero.
[Case (e,a,b)=(1,-2,5)]
Here, we explain how to use the setup in <ref> and <ref> to compute the hull of _X for
X=(𝒪_𝔽_1⊕𝒪_𝔽_1(-2F+5H)).
This example possesses the minimal ^1 dimension among those ^1-bundles in lemma:possibleobs that have no quadratic obstructions, but may have third order obstructions. We will compute its hull, showing that it indeed has a third order obstruction.
By lemma:H1deg, H^1(X_Σ,T_X_Σ) is non-zero only in the following degrees:
_1=(0,-1,-1), _2=(1,-1,-1), _3=(0,-2,-1), _4=(-1,0,1).
By lemma:H2deg, H^2(X_Σ,T_X_Σ) is non-zero only in the degree
=(-1,-4,-1).
See Figure fig:1,-2,5 for an illustration.
The only non-negative integer combinations of the _i giving are of the form
= 2_3+_4=_1+_2+_3+2_4=2_1+2_2+3_4.
Therefore, the hull must have the form
[[t_1,…,t_4]]/J; J=⟨ a_1t_3^2t_4 +a_2· t_1t_2t_3t_4^2+a_3 t_1^2t_2^2t_4^3 ⟩
for some a_1,a_2,a_3∈. Here, t_i is the deformation parameter with degree _i.
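That these are the only non-negative integer combinations can be confirmed by a brute-force search; in the Python sketch below (illustrative only) the search box range(6) is large enough, since the coordinates of the target degree bound every coefficient by 5.

from itertools import product

u = [(0, -1, -1), (1, -1, -1), (0, -2, -1), (-1, 0, 1)]   # degrees u_1,...,u_4
target = (-1, -4, -1)                                     # the H^2 degree

hits = []
for c in product(range(6), repeat=4):
    s = tuple(sum(ci * ui[j] for ci, ui in zip(c, u)) for j in range(3))
    if s == target:
        hits.append(c)
print(hits)   # [(0, 0, 2, 1), (1, 1, 1, 2), (2, 2, 0, 3)]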
In Table table:1,-2,5 we summarize the deformation equation computations. See the ancillary files <cit.> for code carrying out these computations in Macaulay2 <cit.>. The monomials highlighted in blue are those for which obstructions are possible. The deformation data is interpreted as follows. For each α_i^(r), the coefficient of a monomial t^w is obtained by multiplying the entries in the σ_i row by the coefficient of t^w in the first column of the table. The expression for α_i^(r) is then the sum of these products, considering only monomials of total degree less than or equal to r. As explained in Example <ref>, every element of Γ must be of the form
( ρ_5, (*,*,-1) ) or (ρ_6, (*,*,1) ) or (ρ_5, (*,*,0) ) or (ρ_6, (*,*,0) ).
This implies that only monomials of the form t_1^w_1t_2^w_2t_3^w_3t_4^w_4 with
|w_1+w_2+w_3-w_4|≤ 1
will appear. Additionally, we restrict our attention to relevant monomials, see Remark <ref>:
we only need to consider monomials dividing t_3^2t_4, t_1t_2t_3t_4^2, or t_1^2t_2^2t_4^3.
The obstruction data should be interpreted as follows. For the normal form of ((α^(r))-∑_ℓ=1^q g_ℓ^(r)·ω_ℓ)_ij with respect to · J_r, the coefficient of t^w∈_r+1 is the product of the entries in the row labeled σ_iσ_j with the coefficient of t^w found in the first column of the table. We also list the coefficients of the γ_ℓ^(r+1) in the rightmost column of the table.
After computing α^(7), we obtain the deformation equation
(α^(7)) ≡ 0 ^8+ ⟨ t_3^2t_4 -2· t_1t_2t_3t_4^2+ t_1^2t_2^2t_4^3 ⟩.
Consequently, we conclude that a_1=a_3=1 and a_2=-2.
After applying the change of variables t_3'=t_3-t_1t_2t_4, we observe that
t_3'^2t_4= t_3^2t_4 -2· t_1t_2t_3t_4^2+ t_1^2t_2^2t_4^3.
We obtain that the spectrum of the hull has two irreducible components, both of dimension three. One is smooth and the other generically non-reduced of multiplicity two.
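The displayed identity is a purely polynomial statement and can be double-checked symbolically; the following sketch uses sympy (illustrative only, independent of the ancillary Macaulay2 files).

from sympy import symbols, expand

t1, t2, t3, t4 = symbols("t1 t2 t3 t4")
t3p = t3 - t1 * t2 * t4                      # the substituted variable t_3'
lhs = expand(t3p**2 * t4)
rhs = t3**2 * t4 - 2 * t1 * t2 * t3 * t4**2 + t1**2 * t2**2 * t4**3
assert expand(lhs - rhs) == 0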
[Case (e,a,b)=(2,-3,4)]
Here, we compute the hull of _X for
X=(𝒪_𝔽_2⊕𝒪_𝔽_2(-3F+4H)).
We will show that the hull does not have any quadratic obstructions, but does have third order obstructions. We will utilize this example in the proof of thm:obstructed.
By lemma:H1deg, H^1(X_Σ,T_X_Σ) is non-zero only in the following degrees:
_1 =(0,-1,-1), _2 =(1,-1,-1), _3 =(-1,-1,0),
_4 =(-1,0,1), _5 =(-2,0,1).
By lemma:H2deg, H^2(X_Σ,T_X_Σ) is non-zero only in the degrees
_1=(-1,-3,-1), _2=(-2,-3,-1).
See Figure fig:2,-3,4 for an illustration.
The only non-negative integer combinations are of the form
_1 = _2+2_3=_1+_2+_3+_4=2_2+_3+_5
= 3_2+ 2_5= _2+ 2_1+2_4= _1+2_2+_4+_5
_2 = _1+2_3=_1+_2+_3+_5=2_1+_3+_4
=3_1+ 2_4= _1+2_2+2_5 =_2+ 2_1+_4+_5
Therefore the hull must have the form [[t_1,…,t_5]]/ ⟨ g_1,g_2 ⟩, where
g_1 =a_1t_2t_3^2+a_2t_1t_2t_3t_4+ a_3t_2^2t_3t_5+a_4t_2^3t_5^2+a_5t_1^2t_2t_4^2+a_6t_1t_2^2t_4t_5;
g_2 =b_1t_1t_3^2+b_2t_1t_2t_3t_5+ b_3t_1^2t_3t_4+b_4t_1^3t_4^2+b_5t_2^2t_1t_5^2+b_6t_2t_1^2t_4t_5.
for some a_1,…,a_6,b_1,…,b_6∈.
Doing a computation similar to Example <ref>, we obtain
a_1=-b_1=a_4=-b_4=a_5=-b_5=1;
a_2=-b_2=a_3=-b_3=a_6=-b_6=2,
see <cit.> for details.
After applying the change of variables t_3'=t_3+t_1t_4+t_2t_5 we observe that
t_2t_3'^2=g_1 t_1t_3'^2=-g_2.
As in Example <ref>, the spectrum of the hull has two irreducible components. One is smooth with dimension 3, and the other is a generically non-reduced component of multiplicity 2 and dimension 4.
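As before, the two identities can be verified symbolically once the coefficient values above are substituted; the following sympy sketch (illustrative only) does this.

from sympy import symbols, expand

t1, t2, t3, t4, t5 = symbols("t1 t2 t3 t4 t5")
t3p = t3 + t1 * t4 + t2 * t5
g1 = (t2 * t3**2 + 2 * t1 * t2 * t3 * t4 + 2 * t2**2 * t3 * t5
      + t2**3 * t5**2 + t1**2 * t2 * t4**2 + 2 * t1 * t2**2 * t4 * t5)
g2 = -(t1 * t3**2 + 2 * t1 * t2 * t3 * t5 + 2 * t1**2 * t3 * t4
       + t1**3 * t4**2 + t1 * t2**2 * t5**2 + 2 * t1**2 * t2 * t4 * t5)
assert expand(t2 * t3p**2 - g1) == 0
assert expand(t1 * t3p**2 + g2) == 0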
[Case (e,a,b)=(3,-4,3)]
Here we compute the hull of _X for
X=(𝒪_𝔽_3⊕𝒪_𝔽_3(-4F+3H)).
We will see that the hull is irreducible and singular at the origin, and already determined by the quadratic obstructions. We note that these quadratic obstructions could have been computed using the methods of <cit.>.
By lemma:H1deg, H^1(X_Σ,T_X_Σ) is non-zero only in the following degrees:
_1 =(0,-1,-1), _2 =(1,-1,-1), _3 =(-1,0,1), _4 =(-1,-1,0),
_5 =(-2,0,1), _6 =(-2,-1,0), _7 =(-3,0,1).
By lemma:H2deg, H^2(X_Σ,T_X_Σ) is non-zero only in the degree
=(-1,-2,-1).
See Figure fig:3,-4,3 for an illustration. The only non-negative integer combinations are of the form
= _1+_4=_2+_6=2_1+_3=2_2+_7=_1+_2+_5.
Therefore, the hull must have the form
[[t_1,…,t_7]]/J; J=⟨ a_1t_1t_4+ a_2t_2t_6+ a_3t_1^2t_3+a_4t_2^2t_7+a_5t_1t_2t_5 ⟩.
By computing the quadratic obstructions, we obtain that a_1=a_2=1, see Table table:3,4,-3 or <cit.>. After applying the change of variables t_4'=t_4+a_3t_1t_3+a_5t_2t_5 and t_6'=t_6+a_4t_2t_7, we obtain
t_1t_4+ t_2t_6+ a_3t_1^2t_3+a_4t_2^2t_7+a_5t_1t_2t_5= t_1t_4'+t_2t_6'.
Thus, the spectrum of the hull is irreducible and has dimension 6; it is the formal completion of the product of ^3 with the affine cone over a smooth quadric surface.
[Case e≥2, a=-e, b=3]
Here, we compute the hull of _X for
X=(𝒪_𝔽_e⊕𝒪_𝔽_e(-eF+3H)),
where e≥ 2. We will show that
the spectrum of the hull consists of two irreducible components, with an arbitrarily large difference in their dimensions. The case e=2 was first analyzed in <cit.> and has minimal ^1 dimension among obstructed ^1-bundles.
Similar to Example <ref>, the hull is already determined by the quadratic obstructions, which could have been computed using the methods of <cit.>.
By lemma:H1deg, H^1(X_Σ,T_X_Σ) has dimension 2e-1 and is non-zero in the degrees
_1=(0,-1,-1), _2k=(-k,-1,0), _2k+1=(-k,0,1)
for k=1,…,e-1. By lemma:H2deg, H^2(X_Σ,T_X_Σ) has dimension e-1 and is non-zero in the degrees
_k=(-k,-2,-1)
for k=1,…,e-1. See Figure fig:e,-e,b. The only non-negative integer combinations are of the form
_k= _1+_2k=2_1+_2k+1,
for k=1,…,e-1.
Therefore, the hull must have the form
[[t_1,…,t_2e-1]]/J; J=⟨ a_1k· t_1t_2k+ a_2k· t_1^2t_2k+1:k=1,…,e-1 ⟩,
for some a_1k,a_2k∈ where k=1,…,e-1.
The deformation data restricted to the deformation parameters t_1,t_2k is shown in Table table:case: e,-e,3. Therefore, we obtain that a_1k=1 for k=1,…,e-1. After applying the change of variables
t_2k'= t_2k+a_2k· t_1 t_2k+1,
we can express the hull as
[[t_1,t_2',t_3, …, t'_2e-1 ]] /⟨ t_1 ⟩·⟨ t_2', t_4',…, t'_2e-2⟩.
The spectrum of the hull has two irreducible components, both smooth: one with dimension 2e-2, and the other with dimension e.
We have carefully chosen the previous four examples so that degree constraints imply that the obstruction equations g_ℓ are polynomials instead of power series. Such constraints do not hold in general, for example in the case of (e,a,b)=(2,-4,4).
Using computations similar to those from the examples above, we now confirm that the cases in lemma:possibleobs do indeed yield obstructions, proving thm:obstructed.
By lemma:possibleobs, the cases not listed in the theorem are unobstructed.
It remains to show that the cases of the theorem are indeed obstructed, with minimal degree of obstruction as claimed.
Suppose first that e=1, a≤ -2 and b≥ 3-a. By Figure fig:type1 and the discussion following the proof of lemma:possibleobs, there is no degree of Type 1, so it is not possible to have a relation among degrees that could provide a quadratic obstruction. However, we have
(-1,a-2,-1)=(0,a,-1)+(0,-2,-1)+(-1,0,1),
see (<ref>). If we restrict the computations to the deformation parameters associated with the ray-degree pairs
(ρ_5,(0,a,-1)), (ρ_5,(0,-2,-1)), (ρ_6,(-1,0,1))
we obtain results identical to the deformation parameters associated with the
ray-degree pairs
(ρ_5,(0,-2,-1)), (ρ_5,(0,-2,-1)), (ρ_6,(-1,0,1))
in Example <ref>. Indeed, the simplicial complexes V_ρ, and the rays involved are identical in these two situations. Thus, we must have a third order obstruction, proving claim <ref>.
Consider instead the case e≥ 2, a≤ -e and b ≥ 1+(2-a)/e. We now use Figure fig:type2 and the discussion following the proof of lemma:possibleobs. If a≢ 1 (mod e) we have the relation
(-1,η,-1)=(0,ξ,-1)+(-1,-1,0)
see (<ref>). Restricting the computations to the deformation parameters associated with the ray-degree pairs
(ρ_5,(0,ξ,-1)), (ρ_2,(-1,-1,0))
we obtain results identical to the deformation parameters associated with the
ray-degree pairs
(ρ_5,(0,-1,-1)), (ρ_2,(-k,-1,0))
in Example <ref>. Thus, we must have a second order obstruction.
If instead a≡ 1 (mod e), it is not possible to have relations among degrees that could provide a quadratic obstruction since ξ-η=2. However, we have
(-1,η,-1)=(1,ξ,-1) + 2· (-1,-1,0),
see (<ref>). Restricting the computations to the deformation parameters associated with the ray-degree pairs
(ρ_5,(1,ξ,-1)), (ρ_2,(-1,-1,0))
we obtain results identical to the deformation parameters associated with the
ray-degree pairs
(ρ_5,(1,-1,-1)), (ρ_2,(-1,-1,0))
in Example <ref>. Thus, we must have a third order obstruction. This completes the proof of claim <ref>.
§ ISOMORPHISM OF DEFORMATION FUNCTORS
In this section, we will prove compa. Similar results to those in compa are known in some other situations; see <cit.> and <cit.>. Our proof of compa is an adaptation of these established results. We first establish a necessary lemma.
For x,y ∈Č^1(,)⊗_A, we define
_,(x,y)= {a ∈Č^0(,)⊗_A: a ⊙ x =y},
where ⊙ is the action defined in action.
lifting lemma
Let
0 → I → A' A → 0
be a small extension, and let x',y'∈Č^1(,)⊗_A' be representatives of elements in _,(A').
Then:
* if a ∈_,(π(x'),π(y')), the obstruction to lifting a to _,(x',y') is in Ȟ^1(,)⊗ I and a lifting exists if and only if the obstruction vanishes;
* given two lifts a',b'∈_,(x',y') of a∈_,(π(x'),π(y')), we have
b' -a' ∈Ȟ^0(,)⊗ I.
Let a'∈Č^0(,)⊗_A' be a lift of a. We set
v' = (a'⊙ x') -y' ∈Č^1(,)⊗_A'.
It is immediate that v' ∈Č^1(,)⊗ I, since its image in Č^1(,)⊗_A is zero. We claim that v' ∈Ž^1(,)⊗ I.
Firstly, we observe the following facts:
* Since v' ∈Č^1(,)⊗ I, by rem:obs<ref> we have (v')= d(v').
* By applying comm<ref>, we obtain
v'_jk:= a'_j x'_jk -a'_k -y'_jk= y'_ij ( a'_j x'_jk -a'_k -y'_jk) -y'_ij.
* Since (x')=0=(y'), we have x'_ij x'_jk = x'_ik and -y'_jk -y'_ij = -y'_ik.
By the first observation, it is enough to show that
v'_ij v'_jk= v'_ik.
Using the second observation we obtain
v'_ij v'_jk = (a'_i x'_ij -a'_j -y'_ij) (y'_ij a'_j x'_jk -a'_k -y'_jk -y'_ij)
= a'_i x'_ij x'_jk -a'_k -y'_jk -y'_ij
= a'_i x'_ik -a'_k -y'_ik
= v'_ik
Here, the penultimate equation follows from the third observation. Hence we have d(v')=0.
We next show that the class of v' is independent of the choice of lift a'. Let b'∈Č^0(,)⊗_A' be another lift of a and set w' = (b'⊙ x') -y' similar to above.
Repeatedly using comm along with the fact that a' -b'∈Č^0(,)⊗ I and rem:obs<ref>, one obtains that
v' -w'=(a' -b')=d(a' -b').
Since v',w'∈Č^1(,)⊗ I, comm<ref> implies that v' and w' differ by the coboundary d(a' -b') so they give the same cohomology class.
If a'∈_,(x',y'), it is immediate that v'=0 hence its cohomology class is zero. Conversely,
if the class of v' is zero in cohomology, then there exists c'∈Č^0(,)⊗ I such that
-v'= d(c')= (c').
Since c'∈Č^0(,)⊗ I, we know c' a' is also a lift of a. We claim that
(c' a' ⊙ x') -y'=0.
This can easily be verified by noticing
0 = d(c')_ij+v'_ij
= c'_i v'_ij -c'_j
=c'_i (a'_i x'_ij -a'_j -y'_ij) -c'_j
= c'_i (a'_i x'_ij -a'_j -y'_ij) (y'_ij -c'_j -y'_ij)
= (c'_i a'_i) x'_ij -(c'_j a'_j) -y'_ij.
Here the second and fourth equations follow from an application of comm<ref>. This completes the proof of claim <ref>.
Now we will show claim <ref>. Let a',b' be two lifts of a such that a'⊙ x'=y' and b'⊙ x'=y'. It is immediate that b' -a' ∈Č^0(,)⊗ I, since its image in Č^0(,)⊗_A is zero. We observe that
y' = b'⊙(-a'⊙ y')
= (b' -a')⊙ y'
= d(b' -a')+ y'.
Here, the second equation is the multiplicative property of a group action, and the last equation follows from an application of comm<ref> with the observation b' -a' ∈Č^0(,)⊗ I.
Hence, we obtain d(b' -a')=0 and it follows that b' -a' ∈ H^0(,)⊗ I, proving claim <ref>.
By thm:deffunctor1, we have that Ȟ^1(,) represents the tangent space of _,, and Ȟ^2(,) is an obstruction space. Moreover, it is immediate to see that the map of cohomology groups Ȟ^1(,) →Ȟ^1(,) agrees with ^1_→^1_, and the map of cohomology groups Ȟ^2(,) →Ȟ^2(,) is an obstruction map for the morphism of functors f: _→_. Hence, using the standard smoothness criterion (standardsmooth), we obtain that f:_,→_, is smooth (cf. <cit.>). In fact, f:_,→_, is surjective.
Now we proceed to show the injectivity of f:_,→_,. It is sufficient to show the following: for any A∈ and representatives x,y∈Č^1(,)⊗_A of elements in _,(A), if there exists γ∈_,(f(x),f(y)), then there also exists a ∈_,(x,y).
By induction on _(A), we will prove the stronger fact that there exists a∈_,(x,y) such that f(a)=γ.
During our proof of this claim the reader may refer to Figure figure:1, which illustrates the relationships between the elements under consideration.
Clearly, the claim is true for A=. Let
0 → I → A' A → 0
be a small extension and assume the claim is true for A. Let x',y'∈Č^1(,)⊗_A' be representatives of two elements in _,(A') such that there exists γ'∈_,(f(x'),f(y')).
We denote the images of x',y' in Č^1(,)⊗_A by x,y. Since
π(γ') ⊙ f(x)= f(y)
by the induction hypothesis there exists a ∈_,(x,y) such that f(a)=π(γ').
According to lifting lemma<ref>, the obstruction to lifting a to _,(x',y') lies in Ȟ^1(,)⊗ I. Moreover, this obstruction element maps to zero in Ȟ^1(,)⊗ I due to the existence of a lift for
f(a)=π(γ') in _,(f(x'),f(y')), provided by γ'. Therefore, by the injectivity of the map Ȟ^1(,)⊗ I →Ȟ^1(,)⊗ I, the obstruction element in Ȟ^1(,)⊗ I must vanish, ensuring the existence of a lifting b' ∈_,(x',y') of a. However, f(b') may not be equal to γ'. Nonetheless, according to lifting lemma<ref>, γ' -f(b') defines an element in Ȟ^0(,)⊗ I. Additionally, by the surjection Ȟ^0(,)→Ȟ^0(,) we may choose an element c'∈Ȟ^0(,)⊗ I that maps to γ' -f(b'). We set a'=c' b'. Clearly, a'∈_,(x',y') and π(a')=a. Moreover, we have
f(a') = f(c') f(b')
=γ' -f(b') f(b')
=γ'.
This completes the proof of the inductive step, thus establishing the injectivity of the map f:_,→_,.
§ COMPARISON THEOREM FOR OPEN SUBSCHEMES
Let X be a scheme over and U⊆ X be an open subscheme.
There is a natural map of functors
_X →_U obtained by restriction.
We make use of the following folklore result that was first brought to our attention by A. Petracci:
CMiso
Suppose that X is a noetherian separated scheme over and U⊂ X an open subscheme with complement Z=X∖ U.
Then the restriction map
_X→_U
is injective if
the depth of _X at every point of Z is at least two, and an isomorphism if the depth of _X at every point of Z is at least three. In particular, it is an isomorphism if Z has codimension at least three in X and X is Cohen-Macaulay.
The above theorem is stated and proved in the affine case (with slightly different hypotheses) by M. Artin in <cit.>. We now show how to globalize the argument of loc. cit.
In the following, X will be a noetherian separated scheme over and U an open subscheme with complement Z.
Let k∈ and assume that the depth of _X at every point of Z is at least k. Then
H^i(X,_X)=H^i(U,_X)
for any i<k-1.
By the depth condition, one obtains that ℋ^i_Z(_X)=0 for i<k, see <cit.>. Here, ℋ_Z^i is the sheaf of local cohomology with support in Z. By <cit.>, it follows that H^i(X,_X)=H^i(U,_U) for i<k-1.
Let X be affine and assume that the depth of _X at every point of Z is at least 2.
Let X' be any deformation of X over some A∈. Then
H^0(X,_X')=H^0(U,_X').
This is claim (*) from the proof of <cit.>, and follows from straightforward induction on the length of A. The base case A= follows from Lemma <ref>.
Let X be affine and assume that the depth of _X at every point of Z is at least 3. Let U' be any deformation of U over some A∈. Then H^0(U,_U') is flat over A.
This is the contents of <cit.>, see the final sentence of the proof. For the reader's convenience, we summarize the argument here.
The proof is by induction on the length of A; the case A= is trivial.
By Lemma <ref> we have that H^i(X,_X)=H^i(U,_U) for i=0,1, in particular this vanishes for i=1 since X is affine.
Realizing A as a small extension
0→→ A → A_0→ 0
the flatness of U' implies the exactness of
0→_U →_U'→_U'⊗_A A_0→ 0.
By the isomorphisms of the previous paragraph, the exactness of
0→ H^0(X,_X) → H^0(U,_U') → H^0(U,_U'⊗_A A_0)→ 0
follows.
The ring H^0(U,_U'⊗_A A_0) is flat over A_0 by the induction hypothesis and the A-flatness of H^0(U,_U') follows by a version of the local criterion of flatness, see <cit.> or <cit.>.
We first show the injectivity of the map _X→_U when the depth of _X along Z is at least two.
Fix an affine open cover ={U_i}_i∈ I of X.
Consider two deformations X' and X” of X over A∈, and denote their restrictions to U by U' and U”. Assume that there is an isomorphism of deformations ϕ:U'→ U”. We thus have isomorphisms
ϕ_i^#:H^0(U_i∩ U, _U”)→ H^0(U_i∩ U, _U')
satisfying the obvious cocycle condition.
By Lemma <ref>
H^0(U_i, _X')=H^0(U_i∩ U, _U'),
H^0(U_i, _X”)=H^0(U_i∩ U, _U”)
so we obtain isomorphisms ϕ_i:X'_|U_i→ X_|U_i”. By the cocycle condition, these glue to give an isomorphism X'→ X”. This shows that
_X→_U is injective.
For the surjectivity when the depth is at least three, consider any deformation U' of U over A∈.
Let ι:U→ X denote the inclusion of U in X, and set _X':=ι_*(_U').
By Lemma <ref>, _X' is flat over A, hence defines a deformation X'. By construction, this restricts to the deformation U', hence _X→_U is surjective.
§ SOLVING THE DEFORMATION EQUATION
In this appendix, we will prove prop:defeqsolving and state and prove a lemma we used in proving hull.
We use notation as established in <ref> and <ref>.
To solve (<ref>) we consider the small extension
0→ J_r/(· J_r) → S/(· J_r) → S/J_r→ 0.
It follows from (<ref>) (modulo · J_r-1) that
λ((s(α^(r)))) ≡λ((s(α^(r))))-∑_ℓ=1^q g_ℓ^(r)·ω_ℓ≡
0 J_r.
From Proposition <ref> we obtain that the image of
ξ=λ((s(α^(r))))-∑_ℓ=1^q g_ℓ^(r)·ω_ℓ
in Č^1(,/)⊗ S/(· J_r) is a cocycle.
Because of this, the normal form of ξ with respect to · J_r is also a cocycle. In fact, the normal form belongs to
Ž^1(,/)⊗_r+1
since
ξ≡ 0 · J_r-1
and · J_r-1= J_r+^r+1. This latter equality follows from the assumption that J_r≡ J_r-1^r. Since the images of the ω_ℓ span H^1(X,/), there exists
β^(r+1)∈Č^0(,/)⊗_r+1, γ_1^(r+1),…,γ_q^(r+1)∈_r+1
satisfying (<ref>).
This implies claim <ref>.
We now prove claim <ref>.
By rem:obs<ref>,
λ((s(α^(r+1))))≡λ((s(α^(r))))-d(β^(r+1)) ^r+2.
Equation (<ref>) then follows directly from (<ref>). Likewise, J_r+1≡ J_r^r+1 follows from the fact that γ_ℓ^(r+1) belongs to _r+1.
Finally, with notation as in <ref>, the following lemma is used in proving hull:
lemma:obfinj
Consider the map of functors f: (R,-)→_↪.
There exists a natural injective obstruction map ob_f defined by
ob_f: (J/ J)^* → H^1(,/)
φ ↦∑_ℓ=1^qφ(g_ℓ)·ω_ℓ,
where g_ℓ denotes the image of g_ℓ in J/· J.
By construction, J⊆^2.
It is well-known that (J/ J)^* is an obstruction space for (R,-) (see <cit.>). An obstruction space for _↪ is given by Ȟ^1(,/) (see showdeffun2<ref>, prop:isosecfunctor). It is immediate that ob_f is injective since the ω_ℓ form a basis for H^1(,/) and the g_ℓ generate J. We claim that ob_f is an obstruction map for f.
To prove that ob_f is an obstruction map for f, we first claim that we can choose n≫ 0 such that
(J+ ^n)/(· J+ ^n )≅ J/· J.
We will show this using the ideas from the proof of <cit.>.
According to the Artin-Rees lemma <cit.>, we have J∩^n ⊆· J for n≫0.
This implies that
(· J+^n)∩ J=· J,
which leads to the isomorphism
(J+ ^n)/(· J+ ^n ) ≅ J/(· J +^n)∩ J = J/· J
as desired.
Now we will show that ob_f is an obstruction map for f. Let ζ∈(R,A) and consider a small extension
0→ I → A' → A → 0.
We will show that
ob_f(ϕ(ζ, A'))= ϕ(f(ζ),A')
where we use ϕ to denote the map taking a small extension to its obstruction class for both functors (R,-) and _↪.
We define a local morphism of -algebras η: S→ A' by mapping each variable t_ℓ to any lifting of its image under the map S→ R→ A. Because A,A' are Artin rings and by the discussion above, there exists r≫0 such that η,ζ factor respectively through S/^r+1 and R_r, and
(J+ ^r+1)/(· J+ ^r+1 ) ≅ J/· J.
The image of · J under η is zero, because _A'· I=0. Consequently, η factors through S/· J.
Furthermore, since g_ℓ-g_ℓ^(r)∈^r+1, it follows that J+^r+1= J_r and · J +^r+1= · J_r +^r+1.
From the above discussion, we thus have the following commutative diagram:
0→ J→ S→ R→ 0
0→ J/ J→ S/(· J)→ R→ 0
0→ J_r/(· J_r+^r+1)→ S/(· J_r+^r+1)→ R_r→ 0
0→ I→ A'→ A→ 0,
where the vertical maps in the first column are the quotient map J→ J/ J, the isomorphism J/ J≅ J_r/(· J_r+^r+1), and η_r; those in the second column are the natural quotient maps followed by η_r; and those in the third column are the identity on R, then π_r, then ζ_r.
Since J⊆^2, it follows that
η: J/· J J_r/(· J_r+^r+1) I
remains unaffected by the choice of η.
Applying obstruction theory to ζ∈(R,A) (see for example, <cit.>), we obtain
ϕ(ζ, A')= η.
By (<ref>)
we have
ϕ(α^(r),S/(· J_r+^r+1)) = ∑_ℓ=1^qg_ℓ^(r)·ω_ℓ
where g_ℓ^(r) is the image of g_ℓ^(r) in J_r/(· J_r+^r+1).
By the functoriality of obstruction theory, we have
ϕ(f(ζ),A') = ∑_ℓ=1^qη_r(g_ℓ^(r)) ·ω_ℓ
= ∑_ℓ=1^qη(g_ℓ) ·ω_ℓ
= ob_f(η).
Thus, we have shown that ob_f is an injective obstruction map for f.
|
http://arxiv.org/abs/2409.02788v1 | 20240904150342 | Enhancing 5G Performance: Reducing Service Time and Research Directions for 6G Standards | [
"Laura Landon",
"Vipindev Adat Vasudevan",
"Jaeweon Kim",
"Junmo Sung",
"Jeffery Tony Masters",
"Muriel Médard"
] | cs.NI | [
"cs.NI",
"cs.ET"
] |
Enhancing 5G Performance: Reducing Service Time and Research Directions for 6G Standards
This material is based upon a collaborative work between MIT and JMA Wireless. It is accepted for presentation at the 3rd edition of the International Conference on 6G Networking (6GNet 2024).
Laura Landon^†, Vipindev Adat Vasudevan^†, Jaeweon Kim^*, Junmo Sung^*, Jeffery Tony Masters^*, and Muriel Médard^†
^†Massachusetts Institute of Technology (MIT), Cambridge, USA, Emails: {llandon9, vipindev, medard}@mit.edu
^*JMA Wireless, Syracuse, USA, Emails: {jkim, jsung, tmasters}@jmawireless.com
Accepted Sep 3 2024 to ApJ Letters
=======================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
This paper presents several methods for minimizing packet service time in networks using 5G and beyond. We propose leveraging network coding alongside Hybrid Automatic Repeat reQuest (HARQ) to reduce service time as well as optimizing Modulation and Coding Scheme (MCS) selection based on the service time. Our network coding approach includes a method to increase the number of packets in flight, adhering to the current standard of the 16 HARQ process limit, demonstrating that these strategies can enhance throughput and reduce latency. Experimental results show that network coding reduces service times by up to 7% in low SNR regimes, with greater reduction across all SNR as the number of packets in flight increases, suggesting that future 6G standards should consider increasing the number of HARQ processes for better performance.
5G NR MAC layer, HARQ processes, network coding, modulation and coding scheme (MCS), service time optimization
§ INTRODUCTION
The development of 3GPP standards and research for each new generation is always heavily informed by lessons learned and innovations developed during the rollout of the previous generation. The 5G era introduced several innovations over LTE regarding reliability management.
The turbo codes of LTE were largely replaced with Low-Density Parity-Check (LDPC) and polar codes.
The hybrid ARQ systems of both LTE and 5G enhance reliability and throughput by combining error correction coding with retransmissions, but 5G adds a number of innovations to improve flexibility in scheduling, such as asynchronous downlink as well as uplink, and more dynamic scheduling of HARQ processes at the MAC layer <cit.>.
A key element of ensuring reliability and quality of service in 5G is the appropriate selection of modulation and coding scheme (MCS), which determines how many bits can be sent per symbol and how much redundancy is introduced to compensate for errors. Currently, MCS selection in 5G NR systems predominantly uses channel quality metrics such as signal-to-noise ratio (SNR) and predefined lookup tables to choose an MCS which will optimize data throughput <cit.>. However, throughput is not always the primary consideration of a system, such as in the case of ultra-reliable low latency communications (URLLC), and maximizing the throughput does not necessarily minimize the delay.
Network coding is an approach, widely discussed outside the current 3GPP standard, for ensuring reliability of packets in a network within the stringent latency requirements of 5G use cases <cit.>. In network coding, multiple packets are combined algebraically in groups of K before being sent and additional N-K packet combinations are added as forward erasure correction (FEC), with the result that the receiver can decode the necessary information with any combination of K packets <cit.>. If the number of FEC redundant packets is sufficient to compensate for the lost packets, retransmissions are unnecessary. Thus network coding can provide a lower packet service time on average than HARQ where retransmissions are performed based on feedback after a round trip time (RTT).
As standardization bodies actively discuss the standards and innovations for 6G, significant improvements are needed to meet the URLLC requirements of future applications. Through a comprehensive analysis of current HARQ-based reliability mechanisms, we propose several approaches incorporating network coding that achieve high reliability with low latency for next-generation reliability mechanisms. By building on the current 3GPP standards <cit.> and addressing their challenges, we ensure compatibility with 5G while paving the way for the evolution to 6G systems.
Particularly, this work discusses multiple innovations to reduce service time and a method for choosing the MCS to minimize service time. We demonstrate that replacing HARQ with network coding reduces the service time. We also showcase that the relative benefit of network coding over HARQ with respect to packet service time increases with the number of packets that can be transmitted within a single round trip time, and propose novel methods of using network coding to increase the number of packets in flight at a given time within the bounds of the 16 HARQ process restriction in the current 3GPP standards. This method reduces the wait time for all SNR values while simultaneously maintaining a similar throughput.
§ RELATED WORK
Low latency is a key concern in 5G networks, as evidenced by the introduction of URLLC, with its 99.999% reliability and 1 ms latency requirements <cit.>. Reliability and latency are often competing requirements, since improving reliability often introduces redundancies which increase the time required to complete transmission. Transmission time and its relation to the current reliability methods in the 3GPP standard, ARQ and HARQ, has been discussed in <cit.>. Performance of ARQ and HARQ generally improve with incorporation of techniques which adapt to the channel conditions <cit.>. One of these techniques is adaptive modulation and coding.
Adaptive modulation and coding (AMC) is a technique used to dynamically adjust the MCS based on the current conditions of the wireless communication channel. Generally, the optimum MCS is determined according to previously defined SNR thresholds. These thresholds can be optimized for a variety of metrics. Throughput is commonly used, either with or without a maximum block error rate (BLER) constraint <cit.>. Some studies have looked into using AMC to reduce delay, such as <cit.> which found that more aggressive AMC tables reduce delay with HARQ in WiMAX. Others generated new tables for AMC by minimizing delay metrics defined in a variety of ways, such as slots needed to transmit normalized to the highest MCS in ideal conditions <cit.>, overlap of consecutive transmissions in GPRS <cit.>, or time needed to transmit a collection of blocks in EGPRS <cit.>.
Network coding is a technique for ensuring reliability which has been shown to have significant benefit in delay compared with other reliability techniques <cit.>. The techniques for AMC described above explore the selection of MCS for ARQ or HARQ, generally in earlier generations of the 3GPP standard. This paper adds to this area by exploring a way to minimize service time of transport blocks in 5G, and compares the results for HARQ vs network coding.
§ METHODOLOGY
In order to examine the improvement in packet service times, it is necessary to establish how we will calculate these service times. Service time is the interval between the first transmission of a packet and the reception of a positive acknowledgment for it (see Fig. <ref>). For different communication techniques, such as ARQ, HARQ, and network coding, it varies depending on how lost packets are compensated for. In selective repeat ARQ (SR-ARQ), the lost packets are retransmitted when a negative acknowledgment is received by the sender, until the packet is successfully received <cit.>. HARQ follows the same approach but with additional redundancies such that a packet can be recreated intact by combining partially corrupted packets using the LDPC codes. Both these techniques require at least an additional RTT to cover for the lost packet. Network coding uses a forward erasure correction technique, sending repair packets a priori to compensate for lost packets. In the case of network coding, any new reception provides a new degree of freedom, equivalent to a new packet in the ARQ scenario. This reduces the expected wait time. Further details of the service time calculations are explained below. Throughput is defined as the number of original packets sent per second. It can also be estimated as the reciprocal of the average service time in a steady state. In the case of a completed data transmission, it can also be computed as the total number of packets sent divided by the time taken to complete the transmission, a method used in most practical simulations.
§.§.§ ARQ and HARQ
The expected service time of a queue using a straightforward ARQ-only system would be given by
E[X] = (1-p) · RTT
+ p (1-p) · 2× RTT
+ p^2(1-p) · 3× RTT ...
where X is the service time of a packet, p is the probability of erasure of that packet, and RTT is the round trip time between sender and receiver.
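For reference (this step is not spelled out in the text), the geometric series above has a simple closed form, which follows from the standard identity ∑_{n≥1} n p^{n-1} = 1/(1-p)^2:
E[X] = ∑_{n=1}^{∞} p^{n-1}(1-p) · n · RTT = (1-p) · RTT · 1/(1-p)^2 = RTT/(1-p).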
Note that we estimate the time from transmission to reception and error checking of a packet as RTT. At a bitrate of 1 Gbps or greater and an RTT of 10 ms, the propagation time of a transport block at maximum size (approximately 1 million bits) is on the order of a millisecond or less, an order of magnitude smaller than the RTT. The propagation time of a transport block only begins to approach the RTT when both the bitrate is lower than 1 Gbps and the transport block is at its maximum size. Since we are interested in the higher bitrates targeted by 6G <cit.>, we use the assumption that the propagation time can be neglected relative to the RTT to simplify our equations.
5G NR systems use a combination of hybrid ARQ (HARQ) and ARQ to ensure reliable delivery of packets. The estimation of service time for 5G HARQ is complex because it uses a strategy of packet recombining to improve the likelihood that each subsequent transmission will result in a correctly received packet. This means that the probability of successful packet arrival increases with each retransmission, rather than remaining constant.
The probability of erasure in an ARQ-only system, p, is estimated by the block error rate (BLER) which can be found experimentally or through simulations. This block error rate varies with SNR and MCS, meaning that p can be expressed as a function of SNR and MCS, p( MCS, SNR).
Estimating the improved probability of each HARQ retransmission is complicated. In HARQ, the probability of failure on the first transmission is unchanged from that of an ARQ-only system, i.e., p(SNR, MCS) = BLER(SNR, MCS). The probability of successful reception on the second transmission (first retransmission), however, is the sum of the probabilities of two different scenarios: first, that the second packet arrives perfectly intact, and second, that both the first and second packets arrive corrupted, but with sufficient information that they can be combined in HARQ to construct the correct packet. The first scenario is identical to the ARQ-only case and has probability p·(1-p). However, in the second scenario, the probability of successful recovery of a packet depends on the possibility of recovering the bits lost in each transmission through the different redundancy information, which can only be found by an empirical study. In this discussion, we use a simplified estimate of the expected value of the service time in a system using HARQ. The estimate draws on a result from maximum ratio combining: a system with N receive antennas and a given linear SNR value s has the same effective signal-to-noise ratio as a system with only one receive antenna and a linear SNR of N·s. Now, in ARQ, each transmission of a packet is discarded if it is not received intact. In contrast, in HARQ, past transmissions of packets are saved and combined with new transmissions to reconstruct the correct packet. This process of combining transmissions received successively at a single antenna, although time-delayed, is effectively the same process as combining transmissions received simultaneously at multiple antennas.
To arrive at the derivation for our estimate of the probability of being able to decode a packet after N HARQ transmissions, we draw on concepts from estimation theory. It can be shown (see appendix) that for an AWGN channel, the variance of error on the estimate of a single transmission is
σ^2 = E[X^2]E[N^2] / (E[X^2] + E[N^2])
and the variance of this error for N transmissions combined using HARQ is
σ^2 = E[X^2]E[N^2] / (N·E[X^2] + E[N^2])
This is the same variance that would result from sending a single transmission with N times the power[This result is similar to the concept of maximum ratio combining (MRC).].
We can use this result to guide our estimate of the probability of packet error at each successive retransmission using HARQ. If we assume that HARQ extracts the maximum mutual information from combining the packets received, then the probability of packet failure (which using our method depends on both MCS μ and SNR s) is p(μ, s) at the first transmission, p(μ, 2s) at the second transmission, and p(μ, Ns) at the Nth transmission, where s is in linear units before being multiplied by N. [One limitation of this method is its assumption that the channel conditions (affecting both MCS and SNR) remain the same across retransmissions. Refining our estimate to account for changing channel conditions or varying MCS schemes will be considered in future work; here we assume channel conditions remain the same for brevity.]
Applying this to our equation of the expected service time, we get
E[X] = (1-p(μ , s)) · RTT
+ p(μ, s) (1-p(μ , 2s)) · 2× RTT
+ p(μ , s) p (μ , 2s) (1-p(μ , 3s)) · 3× RTT ...
Note that this represents only one possible method for estimating the probability of packet failure at each retransmission using HARQ, and our proposed methods for minimizing service time in this document are not dependent on what method is used.
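As an illustration (this is a sketch, not the authors' simulator), the estimate above can be evaluated with a few lines of Python; here bler() is a hypothetical placeholder for a simulated BLER lookup table, the ARQ baseline uses the closed form RTT/(1-p), and max_tx truncates the HARQ series:
import math

def bler(mcs: int, snr_linear: float) -> float:
    # Hypothetical stand-in for a simulated BLER lookup table p(mcs, snr);
    # only its shape (decreasing in SNR, increasing in MCS) matters here.
    return 1.0 / (1.0 + math.exp(3.0 * (snr_linear - (1.0 + 0.5 * mcs))))

def expected_service_time_harq(mcs: int, snr_linear: float, rtt: float,
                               max_tx: int = 200) -> float:
    # E[X] per the equation above: the n-th attempt is evaluated at an
    # effective SNR of n*s, i.e. p(mu, n*s), per the combining approximation.
    e_x, p_prev_failed = 0.0, 1.0
    for n in range(1, max_tx + 1):
        p_fail_n = bler(mcs, n * snr_linear)
        e_x += p_prev_failed * (1.0 - p_fail_n) * n * rtt
        p_prev_failed *= p_fail_n
    return e_x

def expected_service_time_arq(p: float, rtt: float) -> float:
    # ARQ-only baseline: closed form of the geometric series, RTT / (1 - p).
    return rtt / (1.0 - p)

rtt, s = 0.010, 1.5  # 10 ms RTT, linear SNR
for mcs in (1, 2, 3):
    print(mcs,
          round(expected_service_time_harq(mcs, s, rtt), 4),
          round(expected_service_time_arq(bler(mcs, s), rtt), 4))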
§.§.§ Network Coding
Network coding, as applied in this paper, is the use of erasure coding by nodes in the network to improve throughput and reliability. It is a technique wherein the information contained in packets is combined together before being transmitted, with additional redundant packet combinations sent at set intervals to compensate for any lost packets. The number of such repair packets can be defined based on the predicted probability of erasure in the communication channel. In contrast with ARQ or hybrid ARQ methods, network coding relies on this use of forward erasure correction to compensate for imperfections in the channel and seeks to avoid retransmission, although it is also possible to send additional repair packets based on feedback if required <cit.>. Regardless of whether HARQ or network coding is used, similar levels of redundancy will be required to compensate for packet erasure, but by anticipating this redundancy with forward erasure correction, network coding reduces in-order packet delay by avoiding retransmissions and reduces the load on the network.
For those less familiar with the principles of network coding implementation, a node using the block network coding approach applied in this paper begins with a block of K data packets to transmit. Rather than transmitting them directly, this node multiplies these K packets by a K × N matrix of coefficients chosen from a finite field. This results in N coded packets, each a linear combination of the original K, which are then transmitted by the node. The receiving node inverts the coefficient submatrix corresponding to any K received packets to recover the original packets. The key advantage of network coding here is that the receiver no longer needs to receive specific packets - any K of the N transmitted packets will suffice to decode the block, because they are all linear combinations of the original packets. Thus the redundant N-K packets can make up for the loss of any of the preceding K packets. In order to optimize the level of redundancy, the value of N is chosen such that (N-K)/N approximates the block error rate. A more thorough discussion of network coding can be found in <cit.>.
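To make the encode/decode step concrete, the following self-contained Python sketch works over GF(2) with a small systematic code of our own choosing (K = 4 originals, 2 repair packets); real deployments typically use a larger field such as GF(2^8), so this is only illustrative:
import numpy as np

def gf2_rank(M: np.ndarray) -> int:
    # Rank over GF(2) via Gaussian elimination.
    M = (M.copy() % 2).astype(np.uint8)
    rows, cols = M.shape
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def gf2_solve(A: np.ndarray, Y: np.ndarray) -> np.ndarray:
    # Gauss-Jordan solve of A X = Y over GF(2); assumes A is square and full rank.
    A = (A.copy() % 2).astype(np.uint8)
    X = (Y.copy() % 2).astype(np.uint8)
    n = A.shape[0]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r, col])
        A[[col, pivot]] = A[[pivot, col]]
        X[[col, pivot]] = X[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                X[r] ^= X[col]
    return X

K, PKT_LEN = 4, 8
rng = np.random.default_rng(7)
original = rng.integers(0, 2, size=(K, PKT_LEN), dtype=np.uint8)

# Systematic code with N = 6: the first K coded packets are the originals,
# the last N - K = 2 are GF(2) parity combinations (an arbitrary choice here).
parity = np.array([[1, 1, 1, 1],
                   [1, 0, 1, 0]], dtype=np.uint8)
coeffs = np.vstack([np.eye(K, dtype=np.uint8), parity])          # N x K
coded = (coeffs.astype(np.uint16) @ original.astype(np.uint16)) % 2

# The channel erases coded packets 1 and 5; any K linearly independent
# received combinations suffice to recover the originals.
received = [0, 2, 3, 4]
A, Y = coeffs[received], coded[received]
assert gf2_rank(A) == K
recovered = gf2_solve(A, Y)
assert np.array_equal(recovered, original)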
In a system using network coding, a packet loss is not necessarily a problem unless there are more losses in a block than there are redundant FEC packets. This means that in the worst case scenario, the system must wait until all packets in a block have been transmitted before it can determine whether retransmission is required. Now we are concerned not merely with the propagation time of a single packet, but of a whole block of packets, and this needs to be factored in while calculating the E[X].
We will represent the propagation time of a single packet as τ (τ can be estimated as n_b / R_b, i.e., the number of bits in a packet divided by the bitrate). Then the expected service time of packets in a system using network coding with K original packets and N-K redundant packets (so N total) can be calculated as follows:
E[X] ≈ (1-p) (RTT + τ)
 + ∑_{i=1}^{N-K} \binom{N}{i} p^i (1-p)^{N-i} ( RTT + iτ + ((K+1)/2)·τ )
 + ∑_{i=N-K+1}^{N} \binom{N}{i} p^i (1-p)^{N-i} ( RTT + ((2N-K+1)/2)·τ + RTT + (i-N+K)·τ )
The first line of the equation represents the case that a packet arrives intact. The second line represents the case that a packet is lost, but there are sufficient redundant packets from network coding to make up for its loss. The last line represents the case that a packet is lost and that network coding is not sufficient to make up for it, so the lost packets must be retransmitted.
Note that equation <ref> represents a case where the maximum number of retransmissions is 1. However, with network coding, additional redundancy can also be provided by increasing the number of extra coded packets included in the block, avoiding retransmissions completely. This is particularly useful if the round trip time is large.
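The expected service time above can be evaluated directly; the short Python sketch below simply transcribes the three cases of the equation (intact arrival, losses covered by FEC, and a single retransmission round), with illustrative parameter values:
from math import comb

def expected_service_time_nc(p: float, rtt: float, tau: float,
                             K: int, N: int) -> float:
    # Case 1: the packet arrives intact.
    e_x = (1.0 - p) * (rtt + tau)
    # Case 2: i packets of the block are lost but the N-K repair packets cover them.
    for i in range(1, N - K + 1):
        prob = comb(N, i) * p**i * (1.0 - p)**(N - i)
        e_x += prob * (rtt + i * tau + (K + 1) / 2.0 * tau)
    # Case 3: more losses than repair packets; the missing packets are retransmitted.
    for i in range(N - K + 1, N + 1):
        prob = comb(N, i) * p**i * (1.0 - p)**(N - i)
        e_x += prob * (rtt + (2 * N - K + 1) / 2.0 * tau + rtt + (i - N + K) * tau)
    return e_x

# e.g. a block of K = 8 originals with 2 repair packets, tau << RTT
print(expected_service_time_nc(p=0.1, rtt=0.010, tau=0.0005, K=8, N=10))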
§ METHODS TO MINIMIZE SERVICE TIME
This paper explores several methods to minimize service time. These include a new metric for MCS optimization and two ways of network coding MAC layer transport blocks.
§.§ MCS Optimization
The widely used approach of optimizing the MCS for throughput does not necessarily provide the lowest service time per packet. Service time per packet is an important metric to consider in mission-critical applications and in resource-constrained devices where the number of packets being kept in the service queue needs to be minimized. We propose using the average service time for a given round trip time as the criterion for selecting the MCS that minimizes service time.
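A minimal sketch of the proposed selection rule: given any per-MCS estimator of expected service time (for instance, the HARQ or network-coding expressions sketched earlier), simply pick the MCS index that minimizes it rather than the one that maximizes throughput. The table of E[X] values below is made up for illustration:
from typing import Callable, Iterable

def select_mcs_min_service_time(mcs_indices: Iterable[int],
                                service_time: Callable[[int], float]) -> int:
    # Return the MCS index with the smallest estimated expected service time.
    return min(mcs_indices, key=service_time)

# Illustrative (made-up) table of E[X] values in seconds, indexed by MCS.
example_ex = {10: 0.021, 11: 0.017, 12: 0.019, 13: 0.034}
print(select_mcs_min_service_time(example_ex, example_ex.get))   # -> 11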
§.§ Increasing Packets in Flight
As stated previously, the relative benefit of network coding over HARQ with respect to service times increases with the number of packets per round trip time. In 5G NR, the number of packets in flight is limited by the number of HARQ processes, capped at 16. Each HARQ entity will wait for feedback on the frame it sent, as shown in Fig.<ref>.
However, network coding enables increasing the number of packets in flight, without disturbing the number of HARQ processes. With network coding, because any network-coded packet in a block can be used to decode any other and only the total number of received packets is necessary to decode an entire block, feedback is only required for a block of packets rather than individual packets (in the form of the number of missing degrees of freedom). If each block of linear combinations of packets as described in section <ref> is considered to be served using a single HARQ process, it allows each process to send multiple packets before getting any feedback; this eventually allows K × 16 packets in flight across the 16 HARQ processes, where K is the number of original packets in a coded block. An example with a code where 3 original packets are included in each code block is shown in Fig. <ref>. Redundant packets are not explicitly shown in this figure because the key feature of this method is the use of block ACK/NACKs. Theoretically, it is also possible to implement network coding with no redundancy and it would still give the efficiency benefits of block acknowledgments. In this case the necessary number of repair packets can be sent based on the acknowledgment.
It is relevant to note that with this sending structure, the transmission time of the coded block would be longer than the transmission time of a single packet, but this time will be negligible for the same reasons explained in section <ref>: propagation time vs RTT. This is because, under the new model, a block of size n takes n τ sec to transmit but only one RTT must be waited before receiving feedback and moving on to the next block, whereas in the existing design, each packet takes τ sec to transmit and an RTT before the process can move on, meaning that those same n packets would take n(τ + RTT) to process. Since we already established that RTT is significantly larger than τ, this means sending blocks is significantly more efficient than sending individual packets. The only concern here would be if n is high enough that n τ begins to approach RTT, but even then, n τ + RTT ≪ n (τ + RTT), so sending blocks would still be more efficient than sending individual packets.
Another approach to increase the number of packets is by initiating multiple streams, by introducing an intermediate network coding layer between the RLC and MAC layers in the protocol stack. This network coding layer can take packets from the RLC layer, make them into multiple streams, and then code inside each stream. Each stream can have its own HARQ processes. This provides another layer of parallelization and further increases the number of packets in flight. This approach can work along with the block-level acknowledgment, providing a cascading effect in the number of packets in flight.
As an aside on implementation methods, integrating network coding into a 5G system at the HARQ level requires MAC-level changes to the code on both the base station and the user equipment (UE), in order to both encode and decode packets. Commercially available base stations and UEs generally do not expose these lower levels of code for modification, but open source testbeds such as Eurecom's Open Air Interface are designed to be a publicly available implementation of 5G and can be used to access these lower layers. While we used Matlab to simulate the delay and throughput effects of network coding vs. HARQ, an implementation in such a 5G testbed would be the next logical step in establishing confidence in these results.
§ RESULTS AND DISCUSSION
We collected simulated BLER values for each combination of MCS index and SNR value across a range of SNRs in a system with transport block retransmission disabled. We used MCS table 5.1.3.1-2 found in standard <cit.> and SNRs ranging from -6 to 27 dB with increments of 0.1 dB. Then using these BLER values as the probability of error p, we simulated the different reliability approaches (HARQ and network coding) in MATLAB to estimate the average service time for different approaches mentioned in section <ref>. This simulation was performed on a slot by slot basis, where the transport block in any given slot was randomly determined to be in error or not according to a Bernoulli distribution with p = BLER. The service time is calculated as the time between the first transmission of a transport block and its eventual correct reception after retransmissions and/or coding depending on the reliability approach. Optimization of MCS based on service time was also performed. We further tested the performance of both approaches for the 99^th percentile of service times. This is particularly interesting as it helps provide service-level performance guarantees.
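A condensed sketch of this kind of slot-level Monte Carlo (not the authors' MATLAB code): each attempt of a transport block fails according to a Bernoulli draw whose probability is supplied by a caller-provided p_fail(n), and both the mean and the 99th-percentile service times are then read off the samples. The example failure model at the bottom is hypothetical:
import numpy as np

def simulate_service_times(p_fail, rtt: float,
                           n_blocks: int = 100_000, seed: int = 0) -> np.ndarray:
    # p_fail(n) is the failure probability of the n-th (re)transmission attempt.
    rng = np.random.default_rng(seed)
    times = np.empty(n_blocks)
    for b in range(n_blocks):
        n = 1
        while rng.random() < p_fail(n):
            n += 1
        times[b] = n * rtt
    return times

# Hypothetical model: a first-attempt BLER of 0.3 that halves with each retransmission.
t = simulate_service_times(lambda n: 0.3 / 2 ** (n - 1), rtt=0.010)
print("mean service time:", t.mean())
print("99th percentile  :", np.percentile(t, 99))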
§.§ Optimizing MCS for service time: HARQ
When there are multiple MCS options available for a particular Eb/N0, the optimal MCS for minimum service time can be different from the optimal MCS for maximum throughput. This can be seen in Fig. <ref>, where the optimal MCS index for service time differs from the optimal MCS index for throughput. These different MCS indices result in different service times as well. With the current HARQ approach, optimizing for service time does not provide significant performance gains. However, this may be important with a different coding scheme.
§.§ Optimizing MCS for service time: Network coding reduces delay compared to HARQ
When using service time-based optimization, it is particularly relevant to consider network coding. With sufficient and appropriate code rates, network coding provides a lower time in the queue than the current HARQ approach. The lower service time also corresponds to a higher number of packets being served, for a given arrival rate.
The effects of network coding on service time optimization can be seen in Fig. <ref>, in which it can be seen that network coding always results in a service time equal to or less than that of HARQ. With the limitation in 5G of 16 TBs maximum in flight over an RTT, this is particularly significant in low SNR regimes, where the erasure probability is higher.
§.§ Limitation of 16 HARQ Processes
The effect of transport blocks per round trip time on the disparity between service times for network coding vs. HARQ is shown below in Fig. <ref>, in which the 16-process cap on HARQ is not taken into account in subfigure <ref>.
These figures demonstrate that network coding has lower wait times than HARQ for both 16 and 160 packets per RTT, but the difference is more significant in the case of 160 packets per RTT. The current 16 packets per RTT limitation imposed by 5G standards means that this improvement cannot be directly applied to the current protocol stack, but future systems may find it worth exploring the benefit of increasing the number of packets that can be in flight during an RTT using the network coding layer.
§.§ Increasing Packets in Flight via Network Coding
While changes to the number of HARQ processes may be considered in future work, the method we proposed in subsection <ref> to commandeer the HARQ process to send a complete network-coded block instead of a packet enables network coding to further improve the service time in a system while still working within the current 5G standard. With this approach, the improvement is consistent even in a high SNR range, where the probability of erasures is very low, as we can now send more packets. This also improves throughput, which becomes higher with network coding than with HARQ. This approach is possible only with network coding, thanks to the possibility of working with block-level feedback. Fig.<ref> shows the benefit of our approach with a code rate of 3/4.
§.§ Improved Service Level Agreement
The improvement in service time provided by network coding has implications for service level agreements. For the same channel conditions, network coding has a lower 99^th percentile service time than HARQ. We define the 99^th percentile service time as the maximum of the lowest 99% of service times for all transport blocks in a system. In other words, 99% of transport blocks will have a service time less than or equal to the 99^th percentile service time. These 99^th percentile service times can be used as a service level guarantee.
Fig. <ref> compares the service level guarantees of network coding and HARQ for different channel conditions, represented by the probability of erasure.
§ CONCLUSION
In this paper, we have investigated a method for optimizing Modulation and Coding Scheme (MCS) selection in 5G networks to minimize packet service time. Our analysis demonstrated that the packet service time can be reduced up to 7% in low SNR regimes by integrating network coding with or in place of traditional HARQ mechanisms. Additionally, we proposed an innovative approach using network coding to increase the number of packets in flight within the current constraint of 16 HARQ processes, demonstrating that packet service times can be reduced for all SNRs without reducing the throughput. Future work could explore applying network coding to TDD. As network coding requires a feedback only per block, not per frame, the number of feedback slots in a TDD pattern can be reduced. This allows more packets to be sent within a time frame, another advantage of network coding that can result in increased throughput. While our methods show considerable improvements within the existing 5G framework, our findings also suggest that future wireless standards, such as 6G, could benefit from increasing the number of HARQ processes beyond the current limit. This enhancement would enable more flexible and efficient data transmission strategies, further reducing packet service times and improving overall network performance.
To arrive at the derivation for our estimate of the probability of being able to decode a packet after N HARQ transmissions, we consider tools from estimation theory. Rather than asking whether a packet was intact or corrupted, we will treat any received packet as a measured value Y of the sent packet, X. We have Y = X + N where N represents the effects of noise.
We will represent the channel as AWGN. This means that the minimum mean-squared error (MMSE) is also the linear least square error (LLSE). Then we are seeking to minimize the squared error,
E[(X̂ - X)^2] = E[(α Y_1 - X)^2]
= E[(α X + α N - X)^2]
= E[(α - 1)^2 X^2 + 2 α (α - 1) X N + α^2 N^2]
The middle term 2 α (α - 1)XN has an expectation of zero because the signal and noise are independent and zero mean. Since the expectation term is a linear operator, this gives
E[(X̂ - X)^2] =
(α - 1)^2 E[X^2] + α^2 E[N^2].
We can take the derivative of the α term from here to find that this error expectation is minimized at
α = E[X^2] / (E[X^2] + E[N^2]).
Now, consider the case of a single HARQ retransmission, so that we have received the same packet transmitted twice. In this case, we would like to combine the information we have received from these two measurements and form one single estimate. This can be represented as X̂ = α_1 Y_1 + α_2 Y_2, where Y_1 and Y_2 are the two received transmissions and α_i gives the corresponding weighting. Note that Y_i = X + N_i, with X the same in each case because it is the same packet being transmitted. In this case, we can repeat the process shown above to see that
E[(X̂ - X)^2] = E[(α_1 Y_1 + α_2 Y_2 - X)^2]
= E[(α_1 X + α_1 N_1 + α_2 X + α_2 N_2 - X)^2]
= (α_1 + α_2 - 1)^2 E[X^2] +
α_1^2 E[N_1^2] + α_2^2 E[N_2^2].
We can take two partial derivatives (one with respect to α_1 and one with respect to α_2) to find that this error expectation is minimized at
α_1 = (1 - α_2)·E[X^2] / (E[X^2] + E[N^2])
and
α_2 = (1 - α_1)·E[X^2] / (E[X^2] + E[N^2]).
Solving for α_1 and α_2 will show that α_1 = α_2, so we drop the subscripts and obtain
α = ( E[X^2] / (E[X^2] + E[N^2]) ) / ( 1 + E[X^2] / (E[X^2] + E[N^2]) ) = E[X^2] / (2E[X^2] + E[N^2]).
Now, the variance in the case of a single transmission is
σ_1^2 = E[X^2]E[N^2] / (E[X^2] + E[N^2])
and the variance in the case of two transmissions is
σ_2^2 = E[X^2]E[N^2] / (2E[X^2] + E[N^2]).
It can similarly be shown that this generalizes for K transmissions, such that
σ_K^2 = E[X^2]E[N^2] / (K·E[X^2] + E[N^2]),
which is the same variance which is obtained through maximal ratio combining <cit.>, a method for diversity combining which adds the signals from each channel together.
Now, the mutual information is related to the entropy by the equation I(X; (Y_1, ... , Y_K)) = H(X) - H(X | Y_1, ... , Y_K). Note that variance can be lower bounded as follows:
σ_K^2 = E[(X̂ - X)^2] ≥ (1/(2π e)) · 2^{2h(X | Y_1, ... , Y_K)}
and equality holds if X is Gaussian <cit.>. In this case we have h minimized as h(X | Y_1, ... , Y_K) = 1/2 + 1/2ln (2 πσ_K^2 ),
which means that I(X; (Y_1, ... , Y_K)) is maximized when we set X as Gaussian. Then the maximum mutual information of the transmissions depends solely on the variance in the error estimate. Since the variance we obtain by combining the information content of successive transmissions is the same as the variance obtained using maximal ratio combining <cit.>, we see that the maximal mutual information of the transmissions must also be the same as that obtained using maximal ratio combining.
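As an optional numerical check of the combined-variance formula above (not part of the original derivation), a short Monte Carlo run with Gaussian X and noise reproduces σ_K^2 = E[X^2]E[N^2] / (K·E[X^2] + E[N^2]); the variances and K below are arbitrary choices:
import numpy as np

rng = np.random.default_rng(1)
var_x, var_n, K, trials = 2.0, 0.5, 4, 200_000

x = rng.normal(0.0, np.sqrt(var_x), trials)
noise = rng.normal(0.0, np.sqrt(var_n), (K, trials))
y = x + noise                                   # K repeated transmissions of X

alpha = var_x / (K * var_x + var_n)             # optimal common weight per observation
x_hat = alpha * y.sum(axis=0)
empirical = np.mean((x_hat - x) ** 2)
predicted = var_x * var_n / (K * var_x + var_n)
print(empirical, predicted)                     # the two values agree closely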
|
http://arxiv.org/abs/2409.02650v1 | 20240904122633 | SoK: Bitcoin Layer Two (L2) | [
"Minfeng Qi",
"Qin Wang",
"Zhipeng Wang",
"Manvir Schneider",
"Tianqing Zhu",
"Shiping Chen",
"William Knottenbelt",
"Thomas Hardjono"
] | cs.CR | [
"cs.CR",
"cs.ET"
] |
SoK: Bitcoin Layer Two (L2)
Minfeng Qi^1,These authors contributed equally to the work. , Qin Wang^2,*, Zhipeng Wang^3, Manvir Schneider^4,
Tianqing Zhu^1, Shiping Chen^2, William Knottenbelt^3, Thomas Hardjono^5
^1City University of Macau, China |
^2CSIRO Data61, Australia |
^3Imperial College London, UK
^4Cardano Foundation, Switzerland |
^5Massachusetts Institute of Technology, US
September 9, 2024
============================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
We present the first Systematization of Knowledge (SoK) on constructing Layer Two (L2) solutions for Bitcoin.
We carefully examine a representative subset of ongoing Bitcoin L2 solutions (40 out of 335 extensively investigated cases) and provide a concise yet impactful identification of six classic design patterns through two approaches (i.e., modifying transactions & creating proofs). Notably, we are the first to incorporate the inscription technology (emerged in mid-2023), along with a series of related innovations. We further establish a reference framework that serves as a baseline criterion ideally suited for evaluating the security aspects of Bitcoin L2 solutions, and which can also be extended to broader L2 applications. We apply this framework to evaluate each of the projects we investigated.
We find that the inscription-based approaches introduce new functionality (i.e., programability) to Bitcoin systems, whereas existing proof-based solutions primarily address scalability challenges. Our security analysis reveals new attack vectors targeting data/state (availability, verification), assets (withdrawal, recovery), and users (disputes, censorship).
§ INTRODUCTION
Bitcoin <cit.> has surged in the crypto market since mid-2023 <cit.>. Several indicators demonstrate the growth of its ecosystem <cit.>: (i) The Bitcoin price (and market cap) reached a record high of US$73,079 (Mar. 14, 2024, #𝖢𝗈𝗂𝗇𝖬𝖺𝗋𝗄𝖾𝗍𝖢𝖺𝗉); (ii) The average block size of Bitcoin has increased significantly, from 1.2MB to over 2MB; (iii) The transaction volume in the memory pool has consistently risen, reaching nearly 24,000 transactions, compared to the stable level of around 5,000 transactions in 2022; (iv) Statistical platforms such as Dune Analytics <cit.><cit.> and UniSat <cit.> confirm the upward trend in Bitcoin transactions; (v) Numerous new Bitcoin projects <cit.> have secured investment and been actively launched in the market; (vi) Governments have softened policies toward Bitcoin assets, as evidenced by the approval of Bitcoin ETFs by the SEC <cit.> and HK authorities <cit.>.
In this paper, we focus on the technical underpinnings, specifically the new developments designed and implemented on Bitcoin, collectively referred to as Bitcoin Layer Two (L2).
L2 protocols. Layer two in general is a mirrored concept to layer one (L1) protocols. While L1 protocols are designed to be self-sufficient and operate independently, typically referring to the blockchain base that consists of a series of core protocols (consensus mechanism, data structure, transaction/block processing) and a network layer (P2P networks), L2 protocols often refer to secondary frameworks or technologies that are built on top of L1. The concept of L2 at the early stage (which we call “old-fashioned" to salute its classiness) mainly covers off-chain computing techniques such as sidechains and channels (e.g., payment & lightning channels). Those solutions primarily improve the system's scalability while partially (i.e., as a byproduct) enhancing its interoperability.
* Scalability[For clarity and quick reference, we use “∙” for general itemized classifications, “∘” for workflow-related steps, and “▹” for metric-related items.] refers to the ability to efficiently (also implies performance) manage an increasing volume of transactions, commonly measured by transaction speed (quicker confirmation) and throughput (transactions per second, or TPS).
L2 protocols are one of the main solutions for improving scalability compared to concurrent L1 techniques (e.g., block size <cit.>, consensus mechanisms <cit.>, sharding <cit.>, and chain structure <cit.>).
* Interoperability refers to the capability of heterogeneous blockchain systems to exchange data efficiently. Numerous interoperability solutions <cit.> leverage similar L2 underlying techniques (e.g., sidechains <cit.>, hash-locks <cit.>) to facilitate cross-chain communications.
Concept refinement: Bitcoin L2. When exploring the specific concept of Bitcoin L2, we find that the intrinsic nature of L2 introduces new dimensions, particularly concerning Bitcoin's functionality (i.e., programmability).
* Functionality refers to the capacity of a blockchain to support and execute customized code, enabling state transitions directly on-chain. This term is predominantly associated with the Ethereum ecosystem (and other EVM-compatible platforms), where state-of-the-art solutions mostly utilize smart contracts to facilitate the creation of decentralized applications (dApps <cit.>), automated workflows <cit.>, and complex financial instruments (e.g., DeFi <cit.>).
However, due to the structural differences between the UTXO model (e.g., Bitcoin) <cit.> and the account model (e.g., EVM-compatible blockchains), discussions of Bitcoin's functionality were historically limited. This is largely because of the inherent constraints <cit.> in UTXO which is primarily designed for simple and stateless transactions. Unlike EVM-compatible platforms, Bitcoin and similar UTXO designs are not well-suited for hosting complex functional applications beyond their original purpose. To date, Bitcoin's primary role remains as a store of value (peaking at US$1.44T, Mar. 2024), serving as the anchor of crypto markets (occupying 69.2%[The global crypto market cap was US$2.08T(#𝖢𝗈𝗂𝗇𝖬𝖺𝗋𝗄𝖾𝗍𝖢𝖺𝗉).]).
Gaps. Despite the growing importance of Bitcoin L2 solutions, awareness of this concept remains surprisingly low. Our preliminary surveys (i.e., random sampling of small groups) indicate that even experienced blockchain researchers and developers (31 out of 39) are not familiar with it. Therefore, we pose the following research questions:
* RQ1: What is the current status of existing projects?
* RQ2: What are the common patterns and architectural differences among these solutions?
Moreover, introducing an additional layer to the mainchain may expose the system to unexpected attack vectors, posing significant security risks. For instance, a challenge-response pattern between the mainchain and sidechains could face risks such as falsified proofs or witnesses <cit.>; unreliable bridging services between two chains could lead to asset corruption <cit.>; and direct inscription on a transaction field might lead to transaction malleability issues <cit.>. Existing L2 studies center around classifying operational mechanisms, while security-focused research tends to address specific technical issues, often overlooking the general applicability of various constructions. This leads us to ask:
* RQ3: What distinct security threats (either new or existing) arise from those different design patterns?
Our approach. We show how we address these gaps.
172 Collecting projects (Sec.<ref>). We start by conducting a comprehensive investigation of the projects in the wild. We then carefully examine the selected projects (40 out of 335) that can represent the most state-of-the-art developments.
173 Identifying & assessing design patterns. Our classification is based on the modification areas within Bitcoin (cf. Fig.<ref>) where we identified two major approaches (Sec.<ref>): embedding inscribed scripts into transactions and incorporating contract management for off-chain proof verification.
We then analyze architectural differences among various solutions (Sec.<ref>) (i.e., inscription, BitVM, rollups, sidechains, state channels, client-side verification) and examine how they integrate with the Bitcoin mainchain. To our knowledge, this is the first work to include and discuss inscription-based solutions that have not been explored previously.
174 Establishing new security frame (Sec.<ref>). We also, for the first time, identify a series of security threats aligned with the transaction execution workflow, specifically tailored to Bitcoin L2. We consider the security of native assets (e.g., withdrawal), the integrity of data during processes (e.g., proof/state verification, data availability), and incorporate the user’s perspective (e.g., dispute resolution).
Key findings. Bitcoin L2 introduces new dimensions (i.e., functionality) compared to earlier L2 conceptions (which focused mainly on scalability) and new methods (inscription, BitVM), while, on the downside, also introducing more complexity and attack vectors.
* Embedding scripts directly on-chain allows for implementing NFTs by uniquely numbering each satoshi and enabling state transitions via logic gates through commitments.
* Proof-based solutions such as rollups and sidechains enable trustless validation of off-chain transactions, ensuring that only valid state transitions are recorded on the mainchain.
* Complexity increases due to complicated commitment gates in BitVM, and cryptographic computations (e.g., generation, verification) in proof-based solutions.
* Our identified attack vectors include potential vulnerabilities in the cryptographic primitives used for transaction verification, the risk of data manipulation or censorship in the secondary chain, and the exploitation of weaknesses in withdrawal mechanisms and dispute resolution processes.
Our credits. We highlight several SoKs that are partially related to ours and credit (shaping our deep understanding) their discussions on Bitcoin <cit.>, “old-fashioned” L2 (off-chain) protocols <cit.>, SNARKs <cit.>, rollups <cit.>, interoperability <cit.>, and light client <cit.>.
Many recent parallel studies have made great contributions to exploring new techniques within Bitcoin <cit.> and enhancing the scalability and robustness of Bitcoin systems <cit.>. Despite these topics being slightly beyond ours, we also acknowledge these important advancements.
§ OUR METHODOLOGY
§.§ Project Collection
Data sources. We primarily collect Bitcoin L2 projects from four statistical websites: Rootdata <cit.>, BTCEden <cit.>, L2.watch <cit.>, and EdgeIn.io <cit.> (more in Appendix <ref>). These platforms have already amassed large-scale datasets and specialize in blockchain projects. Following this, we dive into each project's official website, GitHub repository, and open forums to gather additional information.
Selection strategy. Based on gathering initial lists of L2 projects, we apply a filtering process: (i) we conducted a deduplication process to eliminate redundancies; (ii) we primarily selected projects that had either achieved the mainnet stage or were on the cusp of doing so; (iii) we included a few projects in the testnet and pre-testnet phases with notable developmental advancements. For some borderline projects, we manually and subjectively evaluated their standing within the community, considering factors including the project team's background, media coverage, and industry recognition.
Statistical results. Our initial results report 335 projects (detailed results in Table <ref> and Fig.<ref> in Appendix <ref>). By applying the filtering process, we screened out 40 eligible Bitcoin L2 projects (cf. the second column in Table <ref>).
Project details. We further provide a summary of those 40 projects, which is included in Appendix <ref>.
(RQ1) Finding 1: Bitcoin L2 projects have experienced significant growth, with 335 new projects emerging within a short period (15 months). Of these, 40 have reached a stage where they are ready for examination.
§.§ Design Spaces
Skeleton protocol. The Bitcoin mainchain operates through a series of fundamental protocols that ensure the secure generation of addresses and keys. Address generation involves creating a public-private key pair, where the private key k is a randomly generated 256-bit number, and the public key K is derived using the elliptic curve multiplication K = k · G, with G being the generator point on the secp256k1 curve. The Bitcoin address A is then generated by hashing the public key using SHA-256 and RIPEMD-160: A = 𝖱𝖨𝖯𝖤𝖬𝖣-160(𝖲𝖧𝖠-256(K)).
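The derivation above can be sketched in a few lines of Python; this follows the simplified formula in the text (a bare hash of the public key, omitting the version byte and Base58Check encoding of real Bitcoin addresses) and assumes the third-party ecdsa package plus an OpenSSL build of hashlib that exposes ripemd160:
import hashlib
from ecdsa import SigningKey, SECP256k1

sk = SigningKey.generate(curve=SECP256k1)          # private key k (random 256-bit)
vk = sk.get_verifying_key()                        # public key K = k * G
pubkey_bytes = b"\x04" + vk.to_string()            # uncompressed SEC encoding of K

sha = hashlib.sha256(pubkey_bytes).digest()
addr_hash = hashlib.new("ripemd160", sha).hexdigest()   # RIPEMD-160(SHA-256(K))
print(addr_hash)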
Transaction details. In Bitcoin, a transaction T is the fundamental unit for transferring assets between addresses. Each transaction is composed of several key fields that define its structure. Specifically, a transaction T can be represented as:
T = {𝗂𝗇𝗉𝗎𝗍𝗌, 𝗈𝗎𝗍𝗉𝗎𝗍𝗌, 𝗅𝗈𝖼𝗄𝗍𝗂𝗆𝖾, 𝗏𝖾𝗋𝗌𝗂𝗈𝗇, 𝗐𝗂𝗍𝗇𝖾𝗌𝗌},
where,
* 𝗂𝗇𝗉𝗎𝗍𝗌 are references to previous unspent transaction outputs (UTXOs) that the current transaction is spending. Each input contains a reference to a previous output, an unlocking script (𝖲𝖼𝗋𝗂𝗉𝗍𝖲𝗂𝗀), and a sequence number.
* 𝗈𝗎𝗍𝗉𝗎𝗍𝗌 define the destination of the transferred Bitcoin, specifying the amount and a locking script (𝖲𝖼𝗋𝗂𝗉𝗍𝖯𝗎𝖻𝖪𝖾𝗒) that controls who can spend the output in the future.
* 𝗅𝗈𝖼𝗄𝗍𝗂𝗆𝖾 is an optional field that specifies the earliest time or block height at which the transaction can be included.
* 𝗏𝖾𝗋𝗌𝗂𝗈𝗇 indicates the transaction format version and can signal the use of new transaction types or features.
* 𝗐𝗂𝗍𝗇𝖾𝗌𝗌 is used in SegWit transactions that contain the Taproot public key, witness program, and other witness data required to validate the transaction.
Supported techniques. BIP141 <cit.> introduced SegWit which separates signature data from the transaction data. BIP341 <cit.> allows complex scripts to be represented as single keys through the Taproot public key and Taptree structures, revealing only the executed path during transaction validation. We defer more technical details in Appendix <ref>.
Category by modification fields. Fields in Bitcoin transactions serve as the foundation for several L2 improvements. We categorized these improvements into two primary categories.
172 Modifying transaction data (Sec.<ref>). Techniques like inscriptions introduce new data types by modifying specific fields in transactions (i.e., embedding metadata within 𝗈𝗎𝗍𝗉𝗎𝗍𝗌 or 𝗐𝗂𝗍𝗇𝖾𝗌𝗌 fields), allowing for additional functionalities like digital assets or complex scripts.
* Inscriptions: The inscription technique, enabled by the Ordinals protocol <cit.>, allows embedding arbitrary data into Bitcoin transactions. This process involves encoding data D and constructing a witness script S_w = 𝖮𝖯_𝖱𝖤𝖳𝖴𝖱𝖭 D. A transaction T is then created with an output script (𝖲𝖼𝗋𝗂𝗉𝗍𝖯𝗎𝖻𝖪𝖾𝗒) that references S_w as part of the witness data. The transaction is then broadcast and included in a block, embedding the data on-chain.
* BitVM:
BitVM <cit.> leverages the 𝖳𝖺𝗉𝗋𝗈𝗈𝗍 and 𝖳𝖺𝗉𝗍𝗋𝖾𝖾 structures to embed multiple conditions within the witness data, specifically within the Taproot public key 𝖪_𝖳𝖺𝗉𝗋𝗈𝗈𝗍. This setup allows transactions to represent more sophisticated logic by utilizing the 𝖳𝖺𝗉𝗍𝗋𝖾𝖾 structure to organize different possible execution paths. Each leaf (Tapleaf) in the 𝖳𝖺𝗉𝗍𝗋𝖾𝖾 can represent a specific logic condition or script. The root of this 𝖳𝖺𝗉𝗍𝗋𝖾𝖾 is included in the witness field as part of 𝖪_𝖳𝖺𝗉𝗋𝗈𝗈𝗍 and referenced in 𝖲𝖼𝗋𝗂𝗉𝗍𝖯𝗎𝖻𝖪𝖾𝗒.
173 Verifying proofs. Solutions that rely on cryptographic proofs typically include these proofs in the 𝗐𝗂𝗍𝗇𝖾𝗌𝗌 field or as part of 𝗂𝗇𝗉𝗎𝗍𝗌 or 𝗈𝗎𝗍𝗉𝗎𝗍𝗌, enabling the validation of off-chain computations or state transitions. Customized smart contracts often act as an automated verifier that checks the validity of the proof and manages the subsequent state changes.
* Rollups (Sec.<ref>): Rollups utilize proofs to validate transactions off-chain and post summarized data on-chain. The validity of a batch of transactions is proved by a proof π:
π = 𝖯𝗋𝗈𝗈𝖿(T_1, T_2, …, T_n),
where T_i represents individual transactions. The rollup contract on the mainchain verifies π, ensuring that all transactions in the batch are valid.
* Sidechains (Sec.<ref>): Sidechains use Simplified Payment Verification (SPV) proofs <cit.> to provide a cryptographic link between a transaction on the Bitcoin mainchain and the corresponding operation on the sidechain. This proof includes the Merkle path 𝖯_𝗆𝖺𝗂𝗇 from the transaction T_lock to the block's Merkle root 𝖱_𝗆𝖺𝗂𝗇, as well as the block header 𝖧_𝗆𝖺𝗂𝗇 (a verification sketch for this proof follows this list):
π_𝖲𝖯𝖵 = {𝖯_𝗆𝖺𝗂𝗇, 𝖧_𝗆𝖺𝗂𝗇, 𝖱_𝗆𝖺𝗂𝗇, T_lock}.
* Client-side verification (Sec.<ref>): It leverages client-side validation <cit.> and the UTXO set to ensure data integrity. The provided proof π(T_x) for verification includes the following components:
π(T_x) = { C(T_x), H(U_i) },
where C(T_x) is a commitment for a transaction T_x, U_i is the UTXO associated with T_x, and H(U_i) is its hash. Verification is performed by comparing π(T_x) against the known state of UTXOs to ensure the commitment is valid.
* State channels (Sec.<ref>): State channels enable off-chain transactions between participants by exchanging signed transactions T_i = SignedTx(S_i). These transactions result in state updates S_i, with the signatures serving as cryptographic proof of each participant’s consent to the new state. Participants can update the state without broadcasting them to the Bitcoin network. After n transactions, the updated state is:
S_n = {(𝖠, 𝗏_𝖠'), (𝖡, 𝗏_𝖡')},
where 𝗏_𝖠' and 𝗏_𝖡' are new balances of 𝖠 and 𝖡 after the n-th transaction.
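As referenced in the sidechain item above, the core of checking π_SPV is recomputing the Merkle root from T_lock's transaction id and the path 𝖯_𝗆𝖺𝗂𝗇, then comparing it with 𝖱_𝗆𝖺𝗂𝗇 from the header 𝖧_𝗆𝖺𝗂𝗇. A minimal Python sketch (using Bitcoin-style double SHA-256 and ignoring header byte-order details) is:
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_path(txid: bytes, path, root: bytes) -> bool:
    # path is a list of (sibling_hash, sibling_is_right) pairs, leaf to root.
    node = txid
    for sibling, sibling_is_right in path:
        pair = node + sibling if sibling_is_right else sibling + node
        node = dsha256(pair)
    return node == root

# Toy tree with four leaves; the proof for leaf 2 carries its sibling and l01.
leaves = [dsha256(bytes([i])) for i in range(4)]
l01, l23 = dsha256(leaves[0] + leaves[1]), dsha256(leaves[2] + leaves[3])
root = dsha256(l01 + l23)
proof_for_leaf2 = [(leaves[3], True), (l01, False)]
assert verify_merkle_path(leaves[2], proof_for_leaf2, root)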
§.§ Security Reference Frame
For the first time, we establish a comprehensive security reference framework specifically designed to evaluate a wide range of L2 solutions, including those previously overlooked in early-stage studies <cit.>. Our framework is rooted in an analysis of asset flow throughout its entire lifecycle, systematically categorized into three distinct phases:
172 Pre-execution. This phase focuses on security measures that need to be in place before transactions are initiated, ensuring that the system is prepared to handle transactions securely.
* Data availability: Ensures that all necessary transaction data is fully available and accessible for verification prior to the finalization of the transaction.
173 Transaction execution.
This phase focuses on the integrity of transactions as they are being processed, including the transfer of assets and the recording of transactions.
* State verification: Ensures that the L2 solution employs robust validation techniques to accurately mirror off-chain transactions on the mainchain.
* Withdrawal mechanism: Safeguards the process of withdrawing assets, ensuring that funds can be securely transferred back to the mainchain without unauthorized access.
* Anti-censorship measures: Prevents any manipulation or censorship during the transaction process, ensuring that all transactions are processed fairly and without interference.
174 Post-transaction.
This phase addresses the measures that ensure the continued security and integrity of the system after transactions have been completed, including conflict resolution and recovery mechanisms.
* Dispute resolution: Provide mechanisms for resolving conflicts that arise after transactions, such as fraudulent activities or disputes, to maintain network stability.
* Emergency asset recovery: Ensures that users can reclaim assets in the case of emergencies (e.g., network failures), protecting their deposits after the transaction completion.
We defer the detailed definitions to Sec.<ref>.
§ EVALUATING BITCOIN L2 PROTOCOLS
§.§ Bitcoin's Inscription
Bitcoin’s scripting capabilities are inherently limited due to the constraints of its scripting language <cit.>, which supports only a small set of operations to ensure network security.
Bitcoin inscription extends the functionality of Bitcoin by allowing string data to be embedded directly within transactions, enabling a limited form of customized operations.
Working mechanism. The inscription process involves encoding the data (e.g., text, image, video) into a byte string or JSON format, which is then embedded in a Bitcoin transaction. The data are encapsulated within a 𝖳𝖺𝗉𝗋𝗈𝗈𝗍 script using opcodes such as {𝖮𝖯_𝖥𝖠𝖫𝖲𝖤, 𝖮𝖯_𝖨𝖥, …, 𝖮𝖯_𝖤𝖭𝖣𝖨𝖥}, creating a structure known as an “envelope”. This envelope is embedded in a 𝖳𝖺𝗉𝗋𝗈𝗈𝗍 output, leveraging the witness discount provided by SegWit <cit.> to reduce storage costs. The process is executed in two phases:
* Commit Transaction: A 𝖳𝖺𝗉𝗋𝗈𝗈𝗍 output T_com is created, committing to a script 𝖲_𝖼𝗈𝗆 that references the inscription metadata 𝖣 without revealing it. This can be mathematically represented as:
T_com = 𝖳𝖺𝗉𝗋𝗈𝗈𝗍( 𝖲_𝖼𝗈𝗆(H(𝖣)) )
where H(𝖣) is the cryptographic hash of the inscription data. This transaction is broadcast and included in a block 𝖡_𝖼𝗈𝗆.
* Reveal Transaction: The output from T_com is spent in a subsequent transaction T_rev, which includes the actual inscription data 𝖣 within its script 𝖲_𝗋𝖾𝗏. The reveal transaction is mathematically represented as:
T_rev = 𝖳𝖺𝗉𝗋𝗈𝗈𝗍( 𝖲_𝗋𝖾𝗏(𝖣) )
When T_rev is confirmed, the inscription metadata 𝖣 is permanently recorded on the blockchain.
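A minimal sketch of the two-phase pattern above: the commit transaction binds to H(D) without revealing D, and anyone can later check that the data carried by the reveal transaction matches the earlier commitment. Transaction construction and Taproot script details are omitted, and the example payload is the BRC-20 mint from Listing <ref>:
import hashlib, json

inscription = {"p": "brc-20", "op": "mint", "tick": "ordi", "amt": "1000"}
data = json.dumps(inscription, sort_keys=True).encode()     # the inscription data D

commitment = hashlib.sha256(data).hexdigest()   # H(D), referenced by T_com
# ... later, the reveal transaction T_rev carries `data` itself ...
assert hashlib.sha256(data).hexdigest() == commitment       # reveal matches commit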
The inscription technique can be used to create NFTs on Bitcoin (e.g., BRC20 <cit.>) or inscription tokens on Ethereum (e.g., Ethscriptions <cit.>). For the case of BRC20,
each satoshi during the minting process (cf. Listing <ref>) is inscribed with four pieces of metadata to represent the token's attributes. The ordinal/sequential numbers (𝖮𝗋𝖽 <cit.>) associated with these satoshis are tracked from the initial commitment to the final inscription <cit.>, ensuring that metadata is permanently linked to those specific satoshis on-chain.
[caption=Operations (𝖬𝗂𝗇𝗍) for Bitcoin Inscription, label=list-mint,basicstyle=]
# On-chain Inscription
"p" : "brc-20", # protocol name
"op": "mint", # operation
"tick": "ordi", # token name
"amt": "1000" # the amount of token being minted
# Off-chain update
if state[tick] NOT exists OR
"amt" > "lim" OR sum("amt") > "max":
raise errors
else
account_state[tick]["balance"][minter] += amt
§.§ BitVM & BitVM2
∙ BitVM. BitVM extends the design principles of inscriptions by enabling Turing-complete computation through Discreet Log Contracts (DLCs <cit.>) and cryptographic proofs.
NAND gate. The NAND gate is an elementary component in BitVM. It outputs a "0" only when both inputs are "1". For all other combinations of inputs, the result is "1". Within the Bitcoin scripting environment, the opcodes 𝖮𝖯_𝖡𝖮𝖮𝖫𝖠𝖭𝖣 and 𝖮𝖯_𝖭𝖮𝖳 can be strategically combined to emulate the behavior of a NAND gate. This composite operation, essential for constructing more complex logic, is termed "𝖮𝖯_𝖭𝖠𝖭𝖣" in BitVM. For example, Listing <ref> illustrates (also see Fig.<ref>) how a prover constructs the logic gate NAND by pushing three bit values (E, B, and A) onto the stack, corresponding to the preimages of 𝗁𝖺𝗌𝗁𝖤, 𝗁𝖺𝗌𝗁𝖡, and 𝗁𝖺𝗌𝗁𝖠 using 𝖮𝖯_𝖡𝖨𝖳_𝖢𝖮𝖬𝖬𝖨𝖳𝖬𝖤𝖭𝖳. The verifier then checks the correctness of NAND by applying the 𝖮𝖯_𝖤𝖰𝖴𝖠𝖫𝖵𝖤𝖱𝖨𝖥𝖸 operation to compare the consistency of outputs.
[caption=Hash Commitments and Verification in NAND Gate, label=lst:hash-commitment-verification, basicstyle=, breaklines=true]
# Hash Commitments
<hashE0/1> # Commitment to hashE
OP_BIT_COMMITMENT
OP_TOALTSTACK
<hashB0/1> # Commitment to hashB
OP_BIT_COMMITMENT
OP_TOALTSTACK
<hashA0/1> # Commitment to hashA
OP_BIT_COMMITMENT
OP_TOALTSTACK
# Verification
OP_FROMALTSTACK
OP_NAND
OP_EQUALVERIFY # Verify A NAND B == E
Challenge-response protocol.
The challenge-response mechanism <cit.> is crucial for ensuring the accuracy of BitVM computations. Similar to Optimistic rollups (discussed later), it provides a method for one party (i.e., the verifier) to test the veracity of another party's (i.e., the prover's) claims through a series of challenges. Both the prover and the verifier stake a certain amount of Bitcoin assets as collateral and pre-sign a series of transactions to prepare for potential disputes. The verifier, acting as a skeptic, randomly selects a logical gate from the circuit and issues a challenge to the prover. This challenge requires the prover to disclose the inputs and outputs associated with the selected gate. In response to the challenge, the prover reveals the specified gate's inputs and outputs. Notably, the process of challenging and disclosing is repeated multiple times. The verifier may challenge different gates to ensure the prover's claims are consistent across the circuit. If at any point the prover's claims about the same gate contradict each other, the verifier detects this inconsistency. Then the verifier uses this inconsistency as proof of fraud and can take punitive action, such as confiscating the prover's funds.
Working mechanism. BitVM utilizes 𝖳𝖺𝗉𝗋𝗈𝗈𝗍 addresses to create a matrix, which functions similarly to a binary circuit. Each instruction within a Script corresponds to the smallest unit of a program, producing binary outcomes (true or false). These are organized into a matrix, resulting in a sequence of binary outputs (e.g., 01100101). The complexity of the program is directly proportional to the number of 𝖳𝖺𝗉𝗋𝗈𝗈𝗍 addresses combined, with traversal and execution time estimated at 𝒪(n) for a 𝖳𝖺𝗉𝗍𝗋𝖾𝖾 with n nodes.
* Key pair generation: Participants generate their respective key pairs (k_A, K_A) and (k_B, K_B).
* Funding transaction: A funding transaction is created with the inputs from k_A and k_B. The output script of this transaction utilizes a Taproot public key K_Taproot, which is a combination of the participants' keys and possibly other conditions:
T_f = {𝗂𝗇𝗉𝗎𝗍𝗌(k_A, k_B), 𝗈𝗎𝗍𝗉𝗎𝗍𝗌(𝖳𝖺𝗉𝗋𝗈𝗈𝗍(K_Taproot))}.
* Oracle setup: An oracle 𝒪 publishes a public key K_𝒪 and a signed message σ_𝒪 indicating the outcome of the event.
* Contract execution transactions (CETs): Pre-signed CETs are created for all possible outcomes. These CETs incorporate the Taproot public key and are executed based on the oracle's signature, distributing funds accordingly.
* State channels and off-chain state transitions: The BitVM execution process involves setting up state channels and performing off-chain state transitions s_i+1 = δ(s_i), where these transitions can be optionally validated using zero-knowledge proofs (ZKP):
π_i = 𝗓𝗄𝗉(s_i, s_i+1).
* On-chain verification: The final state s_n and proof π_n, and Taproot public key, are submitted on-chain for verification.
Limitations. While BitVM presents an approach to expanding the capabilities of the Bitcoin network for smart contract functionality, it also faces significant challenges. These include its current limitation to two-party interactions, the costs associated with scripting, and a narrow scope of ideal use cases <cit.>. Specifically, BitVM's architecture is limited to interactions between two pre-defined entities. Extending this to support N-to-N interactions, where multiple parties are involved in a single contract, would require a more complex logical design. In addition, the execution of preset unlock conditions for 𝖳𝖺𝗉𝗋𝗈𝗈𝗍 addresses in BitVM incurs miner fees, and as the number of addresses involved increases, so does the cost. Furthermore, BitVM is best suited for applications that rely on heavy off-chain computations. However, for applications that require frequent on-chain interactions, BitVM may not be the most efficient solution.
∙ BitVM2. BitVM2 is an advanced iteration of the original BitVM, designed to enable permissionless verification <cit.>. This iteration overcomes the limitations of the initial design, which was confined to predefined two-party setups, by allowing any participant to act as a verifier during runtime.
SNARK verifier. In BitVM2, the system employs a one-time setup with a 1-of-n honesty assumption, where anyone can challenge an invalid assertion without being part of the initial verifier group. The verification process in BitVM2 utilizes the Groth16 proof system <cit.> to verify assertions efficiently. A key technical enhancement is the division of the SNARK verifier program into smaller, manageable sub-programs, each verified through a sequence of steps using Lamport signatures <cit.>. Specifically, denote the initial state as s_0 and the final state as s_n. The state transitions are managed by an optimized state transition function δ:
s_i+1 = δ(s_i).
Each transition generates a proof π_i that verifies the transition without revealing any confidential information:
π_i = 𝖦𝗋𝗈𝗍𝗁16(s_i, s_i+1).
To handle the large size of the SNARK verifier programs, the computation is split into multiple steps. Each step j is signed using Lamport signatures σ_j:
σ_j = 𝖫𝖺𝗆𝗉𝗈𝗋𝗍𝖲𝗂𝗀𝗇(H_j, K_j),
where H_j represents the hash of the computation step, and K_j is the corresponding key.
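For reference, the following Python sketch shows the textbook Lamport one-time signature construction referred to above (not BitVM2's exact encoding; the message is shortened to 8 bits for readability, whereas real schemes sign 256-bit digests):

import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

def keygen(bits=8):
    # One pair of random preimages per message bit; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(msg_bits, sk):
    # Reveal one preimage per message bit; each key must be used only once.
    return [sk[i][bit] for i, bit in enumerate(msg_bits)]

def verify(msg_bits, sig, pk):
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(msg_bits))

sk, pk = keygen()
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert verify(bits, sign(bits, sk), pk)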
Limitations. One challenge is the need to emulate covenants due to their absence in Bitcoin, which requires the involvement of a signer committee, thereby increasing off-chain coordination efforts. This adds complexity to the system and heightens the risk of operational delays or inefficiencies. Moreover, the design depends on the presence of at least one honest operator to prevent funds from becoming unspendable, creating the risk of ransom attacks, where malicious operators could freeze assets and hold them hostage <cit.>. Additionally, the system generates large, non-standard transactions that may not be relayed by default Bitcoin nodes, necessitating alternative propagation methods and increasing reliance on custom network configurations <cit.>. Lastly, the use of fixed deposit amounts restricts flexibility, which may force users to engage with professional services for peg-ins and peg-outs. This reliance on external services could reduce accessibility, particularly for smaller or casual users, and may create entry barriers for a broader range of participants.
Discussion. The surveyed projects <cit.>, each building on BitVM in different ways, collectively advance Bitcoin’s capabilities of off-chain computation. For instance, BitLayer builds on BitVM by integrating a more sophisticated state management system. ZKByte focuses on incorporating ZKP into Bitcoin’s L2, utilizing a ZKP validator while managing cross-chain state transitions through the UTXO model. Bitstake leverages the BitVM framework to implement a PoS mechanism, enabling dynamic participation in transaction management. On the other hand, Citrea implements a ZK Rollup architecture, introducing the concept of execution slices that batch process thousands of transactions. SatoshiVM simplifies circuit complexity by adopting the Bristol format for logical gate circuit structures and introduces single-round non-interactive on-chain verification.
§.§ Rollups
Data availability requires that all data needed for verification be readily accessible to all participants <cit.>. However, as the Bitcoin network grows in size, ensuring that every participant has access to all transaction data becomes increasingly challenging.
Rollups move most computational work off-chain, transmitting only small amounts of data (e.g., transaction outputs, state updates, proof data) to the mainchain <cit.>. By processing transactions off-chain and periodically posting summarized data on-chain, rollups can improve transaction throughput.
Components. The mechanism relies on three components to ensure the off-chain transaction processing and validation.
* Sequencer: A sequencer plays a crucial role in collecting and ordering transactions before they are aggregated and processed. Essentially, the sequencer's primary job is to receive transactions from users, order them, and then batch them into a single rollup block <cit.>. The way transactions are ordered can affect the state of the rollup and potentially the fees associated with those transactions.
* Operator: An operator executes transactions in an off-chain environment and manages the resulting state transitions <cit.>. This off-chain execution and management facilitate reduced load and less computational effort on the mainnet. Operators are responsible for submitting proof of the rollup blocks to the blockchain mainnet.
* Challenger: The role of challengers is to act as external verifiers who check the validity of the operator's commitments to the mainchain. They continuously monitor the state transitions and proofs. If challengers detect inconsistencies or fraudulent activity in the submitted data, they can submit a fraud-proof to the mainchain.
Working mechanism. Bitcoin rollups work by processing a large volume of transactions off-chain on an L2 network and then summarizing the results to submit to the main Bitcoin blockchain. The process involves five main steps; a minimal code sketch follows the list:
* Transaction initialization on L2: Users initiate transactions on the L2 Rollup. The sequencer and operator are responsible for sorting these transactions, aggregating them into batches, executing the transaction logic, and computing the post-transaction state.
* State representation: The state of the blockchain, including account balances and state variables, is represented in a specific data structure such as a Merkle tree. The Pre-state Root R_pre represents the state before the transactions are executed, while the Post-state Root R_post represents the state after the transactions are executed:
R_pre = 𝖬𝖾𝗋𝗄𝗅𝖾𝖱𝗈𝗈𝗍(S_pre) and R_post = 𝖬𝖾𝗋𝗄𝗅𝖾𝖱𝗈𝗈𝗍(S_post).
* Transaction proof generation: The Operator generates a transaction proof π along with the Post-state Root R_post. For zk-rollups, this proof is a succinct zk-proof:
π = 𝖦𝗋𝗈𝗍𝗁16(R_pre, R_post, T).
For Optimistic rollups, the proof involves a fraud-proof mechanism where participants can challenge incorrect state transitions:
π_fraud = 𝖥𝗋𝖺𝗎𝖽𝖯𝗋𝗈𝗈𝖿(R_pre, R_post, T).
* Submission to the mainchain: The Operator submits the Post-state Root R_post and the transaction proof π to the Rollup smart contract on the mainchain.
* Verification on the mainchain: The Rollup contract verifies whether the state transition from R_pre to R_post is correct using the submitted proof:
𝖵𝖾𝗋𝗂𝖿𝗒(R_pre, R_post, π) →𝖳𝗋𝗎𝖾.
If the verification passes, the Rollup contract updates the state on the mainchain, confirming the transactions that took place on L2. If an incorrect R_post is submitted, other participants can challenge it using R_pre and the transaction batch data:
𝖵𝖾𝗋𝗂𝖿𝗒(R_pre, R_post, π_fraud) →𝖥𝖺𝗅𝗌𝖾.
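To make the batch/prove/verify flow concrete, the following Python sketch is a deliberate simplification of the five steps above: the state is a flat balance map, the "state root" is a plain hash over that map rather than a Merkle root, and the proof is a hash commitment standing in for a Groth16 proof or fraud proof; all names are illustrative.

import hashlib, json

def state_root(balances: dict) -> str:
    # Stand-in for a Merkle root over the account state.
    return hashlib.sha256(json.dumps(sorted(balances.items())).encode()).hexdigest()

def apply_batch(balances, txs):
    post = dict(balances)
    for sender, receiver, amount in txs:
        assert post.get(sender, 0) >= amount, "insufficient funds"
        post[sender] -= amount
        post[receiver] = post.get(receiver, 0) + amount
    return post

pre = {"alice": 50, "bob": 10}
batch = [("alice", "bob", 20), ("bob", "alice", 5)]
post = apply_batch(pre, batch)

r_pre, r_post = state_root(pre), state_root(post)
# Placeholder "proof": in a zk-rollup this would be a succinct SNARK.
proof = hashlib.sha256((r_pre + r_post + json.dumps(batch)).encode()).hexdigest()

def verify(r_pre, r_post, proof, batch, pre_state):
    # Re-execute the batch and check both roots; a real verifier checks the proof instead.
    return (state_root(pre_state) == r_pre
            and state_root(apply_batch(pre_state, batch)) == r_post)

assert verify(r_pre, r_post, proof, batch, pre)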
Types of rollups. There are two primary types of rollups: Zero-knowledge Rollups (zk-rollups) that utilize ZKPs to validate transaction efficacy, and Optimistic rollups that employ fraud-proof mechanisms to ascertain transaction validity.
* Zk-rollups: Operators in zk-rollups generate and submit a ZKP, specifically zk-SNARKs and zk-STARKs <cit.>, to the mainchain. This validity proof confirms the correctness of all transactions in the batch and the resultant new state. zk-SNARKs, while fast due to their small proof sizes and quick verification times, require a trusted setup that can be a security concern if compromised <cit.>. Conversely, zk-STARKs avoid this issue by not needing a common reference string, offering better resistance against quantum computing, albeit at the cost of larger proof sizes <cit.>.
* Optimistic rollups: The fundamental assumption in Optimistic rollups is that the transactions and state changes are valid by default. Rather than using a ZKP, Optimistic rollups use a fraud-proof mechanism <cit.>. When an illegal transaction is identified, challengers can challenge it by submitting a fraud proof to the network. The person who submitted the fraudulent transaction faces penalties, typically losing their bond. A portion of this bond is awarded to the challenger as a reward. While both types of rollups increase throughput, implementing zk-rollups is generally more complex than Optimistic rollups because of the intricacies involved in generating ZKPs. Conversely, Optimistic rollups typically have a longer withdrawal time due to the challenge period necessary for potential fraud proofs.
Limitations.
Implementing rollups on Bitcoin presents a unique set of challenges due to Bitcoin's underlying design, which is quite different from platforms like Ethereum that support on-chain programmability. Rollups require a rich set of functionalities to manage state transitions, process fraud or validity proofs, and handle complex dispute resolutions, which are beyond the capabilities of Bitcoin's current scripting language. In addition, to implement rollups efficiently, Bitcoin would likely need adjustments at the base-layer protocol to support essential features, such as covenants. Such changes require broad consensus within the community, which has been slow to progress given the Bitcoin community's wariness of protocol changes. Furthermore, the absence of a native smart contract layer in Bitcoin complicates the deployment of rollups. Technologies like zk-rollups depend on complex smart contracts for generating and verifying ZKPs. Similarly, Optimistic rollups need smart contracts for managing the challenge-response mechanisms essential for their security model. The deployment of such systems would require a foundational shift in how Bitcoin operates.
Discussion. The rollup-based projects <cit.> on Bitcoin demonstrate a range of strategies to enhance scalability. In the zk-rollups category, projects like B² Network and Bison leverage ZKP for secure transaction validation, with B² Network incorporating a hybrid zkEVM and fraud-proof challenge mechanism, while Bison emphasizes computational efficiency with zk-STARKs. Tuna Chain and BL2 use zk-rollups for batch processing and integrating Taproot and decentralized data availability layers, respectively, with BL2 introducing a dual-layer architecture for distributed verification. Sovryn and Merlin Chain further extend this category by combining zk-rollups with decentralized oracle networks and bi-directional bridges to enhance cross-chain interactions. In the Optimistic rollups category, projects like Biop and Rollux utilize the Optimistic Rollup protocol to batch transactions off-chain, with Biop integrating PoS consensus and fault-proof mechanisms, and Rollux leveraging the Optimism Bedrock for EVM equivalence. BeL2 adopts Elastos SmartWeb and a relayer mechanism for fraud prevention, while Hacash.com introduces a multi-layered architecture that supports high transaction throughput and utilizes rollup for real-time settlement.
§.§ Sidechains
Sidechains <cit.> allow assets to be transferred between Bitcoin (acting as the mainchain) and secondary blockchains (sidechains) through a two-way peg mechanism <cit.>. This enables Bitcoin to be transferred to a sidechain, where it can benefit from the sidechain's unique features and capabilities, and then back to the mainchain.
Simplified payment verification. SPV <cit.> is an essential component of the Bitcoin network that allows users to verify transactions without maintaining a full copy of the blockchain. The SPV node first calculates the hash H(T_x) of the transaction T_x that needs to be verified and then fetches all block headers H_b of the longest branch. It uses the Merkle path P(T_x) to calculate the Merkle root hash R_m and compares it with the Merkle root stored in the local block header to identify the block containing the transaction.
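As an illustration of the SPV check, the following Python sketch verifies a transaction's inclusion given its hash, a Merkle path, and the Merkle root taken from a block header. The path encoding is a simplification of ours; Bitcoin additionally serializes txids in little-endian order.

import hashlib

def dsha256(b: bytes) -> bytes:
    # Bitcoin's double SHA-256.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_spv(tx_hash: bytes, merkle_path, merkle_root: bytes) -> bool:
    # merkle_path: list of (sibling_hash, sibling_is_right) pairs from leaf to root.
    h = tx_hash
    for sibling, sibling_is_right in merkle_path:
        h = dsha256(h + sibling) if sibling_is_right else dsha256(sibling + h)
    return h == merkle_root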
Working mechanism. The core component of sidechain architecture is the two-way peg mechanism, enabling asset transfer between Bitcoin and a sidechain, typically through SPV proof. The working principle of the sidechain is as follows.
* Locking Bitcoin on the mainchain: The user initiates a transaction T_lock on the mainchain, sending 𝖠_𝖡𝖳𝖢 to a specific locking script, which is recorded with Merkle root R_main.
* Generating the SPV proof: An SPV proof π_𝖲𝖯𝖵 is generated, including the Merkle path 𝖯_𝗆𝖺𝗂𝗇 from T_lock to R_main, as well as the block header 𝖧_𝗆𝖺𝗂𝗇:
π_𝖲𝖯𝖵 = {𝖯_𝗆𝖺𝗂𝗇, 𝖧_𝗆𝖺𝗂𝗇, R_main, T_lock}.
* Verification on the sidechain: The sidechain verifies T_lock using π_𝖲𝖯𝖵 by ensuring that
𝖬𝖾𝗋𝗄𝗅𝖾𝖱𝗈𝗈𝗍( 𝖯_𝗆𝖺𝗂𝗇∪ T_lock ) = R_main.
Upon successful verification, the sidechain creates 𝖠_𝗌𝗂𝖽𝖾 tokens, ensuring 𝖠_𝗌𝗂𝖽𝖾 = 𝖠_𝖡𝖳𝖢.
* Returning Bitcoin to the mainchain: To transfer Bitcoin back to the mainchain, the user must destroy 𝖠_𝗌𝗂𝖽𝖾 tokens and provide proof π_𝗌𝗂𝖽𝖾. The mainchain verifies the proof and, if valid, releases the Bitcoin to the user.

Types of sidechains. Sidechains can be classified into three main categories based on their consensus mechanisms.
* Federated consensus sidechains: Federated sidechains rely on a group of trusted entities, known as a federation, to manage the locking and unlocking of assets between the mainchain and the sidechain <cit.>. This is achieved through federated multi-signature to lock the bitcoins released in the sidechain. The security of federated sidechains depends on the trustworthiness and honesty of the federation members.
* Merged-mined sidechains: Merged-mined sidechains utilize Bitcoin's miner network and the same PoW consensus to manage asset transfers <cit.>. Miners validate blocks on both the mainchain and sidechain simultaneously, using a two-way peg mechanism for asset movement. Similarly, Drivechain <cit.> employs Bitcoin miners as custodians to manage transfers. It uses miners' collective hash power to secure assets across different sidechains.
* PoX-based sidechains: In a PoX-based sidechain, PoX miners commit BTC to the Bitcoin mainchain through a consensus mechanism other than PoW. These BTC transactions serve as “anchor points” that create a link between Bitcoin and the sidechain, synchronizing both blockchains. The miners are rewarded in the sidechain’s native token, similar to how miners are rewarded in PoW-based systems.
Limitations.
Sidechains enhance the scalability of Bitcoin, but they come with several limitations. Security is a prime concern due to reduced decentralization in sidechains, which often rely on a smaller set of validators for consensus, potentially concentrating power. Interoperability is another challenge, as each sidechain may operate under different rules and protocols, complicating asset transfers between chains. The design of sidechains also involves trade-offs, particularly between decentralization and the desired performance characteristics such as transaction speed and cost.
Discussion. The sidechain-based projects <cit.> investigated in this paper can be grouped into the three categories above. Federated consensus sidechains, such as the Liquid Network, implement a consortium model that employs blocksigners and watchmen, diverging from Bitcoin's PoW mechanism. Merged-mined sidechains such as Rootstock leverage the Powpeg mechanism, which involves multi-signature management to manage BTC transfers between the mainchain and the sidechain. In the realm of PoX-based sidechains, projects like MAP Protocol push the boundaries by using ZKP and light clients for cross-chain interoperability, while Stacks ties its security to Bitcoin via PoX, allowing miners to earn rewards for BTC commitments. Mintlayer and BounceBit extend this concept by employing PoS mechanisms, with Mintlayer using Bitcoin block hashes for randomness in block production and BounceBit integrating both CeFi and DeFi elements. Libre, operating on a Delegated Proof-of-Stake (DPoS) mechanism, integrates with Bitcoin via the Lightning Network. BitReXe introduces a multi-VM system using the PREDA programming model, and BEVM utilizes Musig2 for secure multi-signature schemes and native BTC as gas.
§.§ Client-side Validation Plus UTXO
Client-side validation (CSV) allows for an efficient method of transaction verification. Instead of every node handling every transaction, clients validate the parts of the transaction history relevant to them. However, by enabling users to authenticate data validity autonomously, this method bypasses the standard consensus model, potentially resulting in data discrepancies among diverse clients.
The UTXO set, recognized for its immutability consensus, serves as a foundational ledger within the Bitcoin network <cit.>. Establishing a dependable linkage between off-chain assets and the state of UTXOs is essential for extending consensus-driven validation. The CSV+UTXO approach combines off-chain asset issuance with a dynamic on-chain UTXO ledger. The UTXO set acts as a time witness that can verify the precise state of Bitcoin transactions at any given time. This feature allows users to synchronize changes in these transactions with changes in the state of other assets.
Single use seal.
A single-use seal is a cryptographic primitive that facilitates a two-tiered commitment <cit.>. It allows a committer to commit to a message at a future point in time, with the assurance that this commitment can only be made once. Bitcoin Transaction Output-based Single-use-Seals (TxO Seals <cit.>) apply this concept to the Bitcoin transaction graph. Parties in the protocol agree on a transaction output with special meaning, designated as a "seal." A future transaction, known as the "witness transaction," is required to contain a deterministic Bitcoin commitment to a specific message. There are two properties of TxO Seals:
* Hiding: Independent parties cannot detect the presence of the commitment in the transaction graph, even if the original message is known.
* Verifiability: Given information about the specific commitment protocol and access to deterministic Bitcoin proof data, any party can verify the commitment's validity for the intended message only.
Working mechanism. The CSV+UTXO technology leverages the hiding and verifiability of TxO seals to enhance the Bitcoin network's scalability. Below is a process overview; a minimal code sketch follows the list:
* Transaction initiation: A client C_i initiates a transaction T_x. This transaction is committed to the UTXO set, where each UTXO U_i represents an unspent transaction output.
* Client-side validation: Each client C_i validates the transaction T_x by verifying the relevant UTXOs U_i associated with the transaction. The client computes the transaction hash H(T_x) and validates it using the Merkle root R_m.
* Single-use seal commitment: A single-use seal S is applied to the transaction output T_x. Let M be the message to be committed, and C be the commitment. The commitment C is represented as:
C = 𝖢𝗈𝗆𝗆𝗂𝗍(H(T_x), M).
* Deterministic Bitcoin proof: The witness transaction includes a deterministic Bitcoin commitment to M. The proof π is verified using the UTXO set U_i, where H(U_i) represents the hash of the relevant UTXO:
π = 𝖯𝗋𝗈𝗈𝖿(C, H(U_i)).
* Verification: The commitment's validity is verified by any party with access to the proof data. The verifiability property ensures that the commitment C can only be valid for the intended message M and that it is linked to the specific UTXO U_i.
𝖵𝖾𝗋𝗂𝖿𝗒(π, H(U_i)) →𝖳𝗋𝗎𝖾.
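The commit/prove/verify flow above can be sketched in a few lines of Python, with a plain hash commitment standing in for a deterministic Bitcoin commitment; the seal, message, and UTXO identifiers are hypothetical.

import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def commit(tx_hash: bytes, message: bytes) -> bytes:
    # C = Commit(H(T_x), M): binds the message to the sealed output.
    return H(tx_hash + message)

def make_proof(commitment: bytes, utxo_id: bytes) -> bytes:
    # pi = Proof(C, H(U_i)): links the commitment to the witness UTXO.
    return H(commitment + H(utxo_id))

def verify(proof: bytes, tx_hash: bytes, message: bytes, utxo_id: bytes) -> bool:
    return proof == make_proof(commit(tx_hash, message), utxo_id)

txid = H(b"witness transaction")
utxo = b"deadbeef:0"                          # hypothetical sealed output
pi = make_proof(commit(txid, b"asset transfer #42"), utxo)
assert verify(pi, txid, b"asset transfer #42", utxo)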
Limitations. The implementation of this approach is highly complex, primarily due to Bitcoin’s inherent design, which does not natively support such intricate computations. Additionally, when a client receives a payment, it may need to verify a large amount of data all at once, which can be a significant burden. This issue typically arises when a client first acquires an asset with a lengthy transaction history.
Discussion. The reviewed projects <cit.> collectively demonstrate a focused attempt to broaden Bitcoin's functionalities through the uses of UTXO and CSV. RGB embeds cryptographic commitments in UTXOs for asset issuance and management. RGB++ expands on RGB by integrating other UTXO-based blockchains like CKB and Cardano as verification layers, trading off some privacy for increased global verifiability. BiHelix further builds on the RGB protocol, integrating with the Lightning Network to enhance Bitcoin’s interoperability. Bitlight Labs focuses on CSV to manage smart contracts, employing the Simplicity language to enable Turing-complete scripting. Bool Network uses Dynamic Hidden Committees and integrates CSV for cross-chain security. Lastly, Mercury Layer emphasizes statechain technology to facilitate rapid Bitcoin UTXO transfers, relying heavily on CSV for statechain verification.
§.§ State Channel
State channels are designed to facilitate off-chain transactions between parties <cit.>. These channels are particularly useful for frequent transactions, such as micropayments.
Working mechanism. The working mechanism of state channels can be broken down into the key steps involved in the opening, operation, and closure of a state channel; a minimal code sketch follows the list.
* Opening the channel: To establish a state channel, two or more participants, say Alice and Bob, create a multi-signature address A_multi, which requires signatures from both parties. Each participant deposits a certain amount of Bitcoin, B_Alice and B_Bob, into this address. The transaction T_open marking the channel’s opening is recorded on the blockchain:
T_open = 𝖳𝗑(A_multi, B_Alice + B_Bob).
Once the channel is open, participants can conduct numerous transactions off-chain. These transactions are reflected as state updates S_i, where each update represents a new distribution of the funds held in the channel.
* Conducting off-chain transactions: During the operation of the state channel, participants exchange signed transactions T_i that update the state of the channel without broadcasting these transactions to the Bitcoin network. These state updates are essentially promises of future transactions that both parties agree upon:
T_i = SignedTx(S_i),
with signatures from Alice and Bob. The most recent state update always supersedes previous ones, ensuring that the latest agreement on the fund distribution is maintained. For example, if Alice and Bob agree on a state update S_2 after initially agreeing on S_1, then:
S_2 ⇒(𝗇𝖾𝗐 𝖽𝗂𝗌𝗍𝗋𝗂𝖻𝗎𝗍𝗂𝗈𝗇: 𝖡_𝖠𝗅𝗂𝖼𝖾^', 𝖡_𝖡𝗈𝖻^').
* Closing the channel: When the participants decide to close the channel, they create a final transaction T_close reflecting the last agreed-upon state S_final and broadcast this transaction to the Bitcoin blockchain:
T_close = 𝖳𝗑(A_multi, S_final).
This final transaction closes the channel and redistributes the funds according to the last state update. By doing so, the blockchain only records two transactions: T_open to open the channel and T_close to close it, regardless of the number of off-chain transactions conducted within the channel.
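The following Python sketch (ours; real channels use Bitcoin signatures and revocation mechanisms, replaced here by a toy keyed hash so the example stays self-contained) captures the open/update/close lifecycle:

import hashlib

def sign(state, secret: bytes) -> str:
    # Toy stand-in for a participant's signature over the channel state.
    return hashlib.sha256(secret + repr(sorted(state.items())).encode()).hexdigest()

class Channel:
    def __init__(self, deposits):                 # T_open: fund the 2-of-2 address
        self.state = dict(deposits)               # e.g. {"alice": 6, "bob": 4}
        self.version = 0
        self.latest = None

    def update(self, new_state, sig_a, sig_b, key_a, key_b):
        # Both parties must sign; the newest version supersedes older ones.
        assert sum(new_state.values()) == sum(self.state.values())
        assert sig_a == sign(new_state, key_a) and sig_b == sign(new_state, key_b)
        self.version += 1
        self.state = dict(new_state)
        self.latest = (self.version, dict(new_state))

    def close(self):                              # T_close: broadcast the final state
        return self.latest or (0, dict(self.state))

ka, kb = b"alice-key", b"bob-key"
ch = Channel({"alice": 6, "bob": 4})
s1 = {"alice": 5, "bob": 5}
ch.update(s1, sign(s1, ka), sign(s1, kb), ka, kb)
print(ch.close())   # (1, {'alice': 5, 'bob': 5})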
Limitations. One challenge is the complexity of managing the multi-sig setup required to open a channel <cit.>. Each transaction within the channel needs to be signed by all parties involved, which introduces delays and coordination issues, particularly when involving more than two participants. Additionally, the mechanism for closing a channel is vulnerable to disputes <cit.>. If a party tries to broadcast an outdated transaction state, it requires timely intervention from the other party to prevent fraud, which necessitates constant monitoring.
Discussion. The projects <cit.> exemplify the technical advancements of state channels. Lightning Network utilizes mechanisms like RSMC for secure updates and HTLC for conditional transactions, enabling off-chain payments through a network of payment channels. OmniBOLT builds on the Lightning Network by facilitating the off-chain circulation of smart assets, adding capabilities such as cross-channel atomic swaps through its Golang-based protocol suite. Lnfi Network extends the state channel concept by integrating Taproot assets with Nostr’s key management. Ark Protocol innovates on the state channel design by eliminating the need for incoming liquidity, allowing for the seamless conversion of on-chain UTXOs to virtual UTXOs and back.
Others. In addition to the aforementioned solutions, this category of Bitcoin L2 solutions includes a variety of other methods (Appendix <ref>). This includes Tectum <cit.>, which introduces SoftNotes <cit.> for fast, off-chain transactions; Nubit <cit.>, which employs data availability sampling (DAS) for efficient block verification; and HyperAGI <cit.>, designed to support compute-intensive applications for AI. These projects represent the diversity in Bitcoin’s evolving L2 ecosystem.
(RQ2) Finding 2: Among the examined Bitcoin L2 projects, 12% embed data into transactions to extend functionality. Furthermore, 81% of the projects utilize off-chain computation to alleviate on-chain congestion, while 72% employ cryptographic proofs, including ZKPs and SPV, for secure state verification.
§ SECURITY EVALUATION
§.§ Security Framework
By examining various vulnerability sources <cit.>, we identified a series of threats across the entire lifecycle of asset flow, categorizing them into: unauthorized withdrawals, incorrect transaction verification, manipulation and censorship, fraudulent disputes, data unavailability, and the inability to recover assets during emergencies.
We accordingly present our structured security evaluation framework (answering RQ3), comprising six critical criteria (denoted by C) as follows. We provide a detailed analysis of each project’s strengths and vulnerabilities in Table <ref>.
C1. Withdrawal mechanism (WM) evaluates whether the L2 solution employs robust security measures to safeguard the withdrawal process, ensuring that funds can be securely and correctly transferred back to the Bitcoin mainchain. The primary security threat associated with this criterion is the risk of unauthorized withdrawals, where malicious actors could potentially gain access to funds without proper authorization. Insecure or flawed withdrawal mechanisms could lead to the loss of assets, as attackers might exploit vulnerabilities to initiate fraudulent withdrawals or intercept the process. To mitigate the threat, WM considers security features as below:
▹ Multi-signature authentication: This involves requiring multiple parties to sign off on a transaction before it can be executed. It adds an additional layer of security, ensuring that no single entity can unilaterally withdraw funds.
▹ Cryptographic proofs: These are mathematical techniques that allow one party to prove to another that a statement is true without revealing any information beyond the validity of the statement itself. In the context of withdrawals, the proofs are typically used to guarantee that the transaction data is valid without revealing sensitive information.
▹ Decentralized validators: Using a group of validators to approve transactions can distribute the power to authorize withdrawals across multiple independent entities, reducing the risk of collusion or single points of failure (SPoF).
▹ Time-locked contracts: They are, in most cases, a pair of smart contracts on both legs that delay the execution of a transaction until a specified amount of time has passed. The contracts can be used to create a buffer period during which any unauthorized withdrawal can be detected and canceled.
C2. State verification (SV) assesses the robustness of the L2 solution’s validation techniques to ensure the correctness of off-chain transactions and their accurate reflection on the mainchain. The primary security threat associated with this criterion is the risk of incorrect transaction verification, where invalid or manipulated off-chain transactions could be improperly validated and recorded on the mainchain. This could lead to incorrect balances, loss of funds, or unauthorized asset transfers. To mitigate this threat, it is crucial that the L2 solution employs stringent state verification mechanisms that can effectively detect and prevent invalid state transitions:
▹ Zero-knowledge proofs: These proofs allow validators to confirm the validity of transactions without needing to see the transaction details. This preserves privacy while ensuring that only legitimate transactions are processed.
▹ Fraud proofs for Optimistic rollups: These are mechanisms that allow any network participant to challenge and prove a fraudulent transaction. If a transaction is deemed fraudulent, it can be reverted, and the dishonest party penalized.
▹ Cryptographic commitments: This technique involves creating a cryptographic hash of transaction data, which can later be used to verify the integrity of the data. It ensures that the data has not been tampered with.
▹ Periodic state commitments to mainchain: Regularly committing the state of off-chain transactions to the mainchain creates a verifiable and immutable record. This helps maintain the integrity of the Bitcoin blockchain.
C3. Anti-censorship measures (ACM) evaluate the L2 solution’s ability to prevent any single entity from controlling or censoring transactions within the network. The primary security threat associated with this criterion is the potential for manipulation and censorship, where a powerful entity or group could selectively block or reorder transactions to their advantage, undermining the fairness and neutrality of the network. This threat can lead to scenarios where certain transactions are unfairly delayed or never processed. To mitigate the threat, ACM should consider the following aspects:
▹ Decentralized sequencers: Sequencers are responsible for ordering transactions in a decentralized manner, preventing any single party from manipulating the transaction order.
▹ Governance models: These models distribute decision-making power across a broad range of stakeholders, preventing any single entity from having undue influence.
▹ Distribution of power: Ensuring that power is distributed among multiple participants reduces the risk of censorship and promotes a more democratic system.
▹ Mitigation of manipulation risks: Measures to detect and prevent manipulation of the transaction process help ensure that all transactions are processed fairly.
C4. Dispute resolution (DR) examines the mechanisms that an L2 solution employs to resolve conflicts, particularly those involving fraudulent activities or disputes. If disputes are not resolved efficiently, it could lead to manipulation of the network, where malicious actors might take advantage of unresolved conflicts, further leading to network instability, loss of user funds, and reduced trust in the system.
▹ Fraud proofs: These proofs allow network participants to challenge and prove fraudulent transactions, ensuring that any dishonest activities are promptly addressed.
▹ Efficiency of dispute resolution process: The speed and effectiveness with which disputes are resolved are critical to maintaining trust in the system. Efficient processes ensure that issues are addressed quickly and fairly.
▹ Role of validators/arbitrators: Validators or arbitrators play a crucial role in resolving disputes. Their impartiality is essential for a fair dispute-resolution process.
▹ Incentives and penalties: A system of rewards and penalties encourages honest behavior and discourages fraud. This includes rewarding participants who detect and report fraud and penalizing those who attempt to commit fraud.
C5. Data availability (DA) assesses the strategies implemented by an L2 solution to ensure that all transaction data is available for verification at all times. The primary concern here is to prevent scenarios where crucial transaction data becomes inaccessible, leading to potential validation failures or network manipulation. Data unavailability poses a severe security threat, as it could allow malicious actors to hide or manipulate transaction details, undermining the integrity of the verification process. Without guaranteed data availability, the network risks losing the ability to detect fraudulent activities, making it vulnerable to exploitation.
▹ Decentralized data availability: The metric involves using distributed nodes within the network to serve transaction data, ensuring no SPoF can compromise data availability.
▹ Data availability sampling: This technique involves checking random samples of data to ensure that the entire dataset is available and has not been tampered with.
▹ Erasure coding: This is a data protection method that breaks data into fragments, encodes them, and distributes them across multiple locations. It ensures that data can be reconstructed even if some fragments are lost or corrupted.
▹ Off-chain storage with periodic on-chain commitments: Storing data off-chain reduces the burden on the mainchain, while periodic on-chain commitments provide a verifiable record that the data has not been tampered with.
C6. Emergency asset recovery (EAR) focuses on evaluating the effectiveness of the mechanisms to allow users to reclaim their assets during emergencies, such as network failures, breaches, or unexpected shutdowns. The corresponding security threat in this context is the potential loss of assets due to system failures or malicious attacks that prevent users from recovering assets. This criterion examines whether the L2 solution provides robust methods to initiate and complete asset recovery, ensuring that users have a fallback plan.
▹ Force-close transactions: These transactions allow users to close their positions and recover their assets even if the network is experiencing issues (e.g., asynchrony).
▹ Time-locked exit transactions: These transactions provide a delayed period during which users can exit the system securely. Assets can be protected against immediate threats.
▹ Exit windows: These are specified periods during which users can exit the system, providing regular opportunities for asset recovery.
▹ Monitoring by validators and watchtowers: Validators and watchtowers continuously monitor the network for issues and ensure that exit transactions are processed correctly.
§.§ Case Study
We select a representative project (Bitlayer <cit.>) as a case study. We examine each criterion on a scale of low, medium, and high to indicate its security performance.
Multi-sig authentication. The process begins with participants like Alice and Bob creating an off-chain fund transaction on Bitlayer. Each locks a certain amount of Bitcoin into a 2-of-2 multi-sig address, ensuring that funds can only be moved with both parties’ consent. This setup directly addresses the WM criterion by preventing unauthorized access to the locked funds through multi-sig authentication. The pre-signed nature of these transactions ensures that funds are securely committed without immediate blockchain involvement. Consequently, BitLayer’s multi-sig authentication is effective in securing funds, thus rating this aspect as high.
Creation of CETs. Next, Alice and Bob pre-sign various CETs representing all potential outcomes of their contract. These CETs include conditions to distribute the funds based on an oracle’s outcome announcement. This process aligns with state verification by employing cryptographic commitments to ensure data integrity and tamper-proof records of off-chain states. The CETs are designed to execute based on ZKPs, ensuring that the transaction details remain private while validating the outcome. Therefore, the robust creation and use of CETs in BitLayer provide strong state verification capabilities, making this criterion rate high.
Time-locked contracts. Oracles play a crucial role in determining the outcome by signing the corresponding CET based on real-world events. To ensure security, time-locked contracts are used so that if the oracle does not provide a signature within a specified period, the locked funds are returned to the participants. This ensures that funds are not indefinitely locked, satisfying the EAR criterion by providing a secure exit strategy. Consequently, we rate this aspect as high.
Broadcasting the fund transaction. After all CETs are pre-signed, the fund transaction is broadcasted, locking the funds in the multi-signature address. Upon receiving the oracle’s signed outcome, the appropriate CET is executed to redistribute the funds. If an incorrect CET is broadcasted, BitLayer’s fraud-proof mechanism enables participants to challenge the transaction, ensuring disputes are resolved fairly. This dispute resolution process effectively meets the DR criterion, resulting in a high rating for BitLayer.
Decentralized validators. BitLayer employs decentralized validators to validate transactions, preventing any single entity from manipulating the process. This measure fulfills the ACM criterion by distributing power and ensuring unbiased transaction sequencing. Furthermore, the CETs and fund transactions are stored off-chain, with only necessary data being committed on-chain when required. This hybrid storage approach ensures that data is always accessible for validation purposes while maintaining transparency. Thus, BitLayer’s use of decentralized validators and a hybrid storage approach provides strong ACM and DA, rating these two criteria high.
(RQ3) Finding 3: Most projects (85%) exhibit strong withdrawal mechanisms, particularly through MSA and TLC. State verification is robustly supported by cryptographic proofs like ZKPs among projects (72%). Anti-censorship measures (55%) vary, with decentralized sequencers and governance models ensuring fair transaction processing in some projects. Dispute resolution mechanisms (78%) are generally effective, with fraud-proof implementations and the role of validators playing key roles. However, data availability (42%) and emergency asset recovery (37%) are less consistently implemented, with only a subset of projects offering comprehensive solutions.
§ FURTHER DISCUSSION
More evaluated properties (last five columns in Table <ref>). We also assess the selected projects using a broader set of criteria (i.e., properties from existing studies) at three levels (high, medium, and low) to provide extended references.
Limitations of DLC. One issue is the contract’s funding guarantee. For example, Alice and Bob can use a DLC to hedge against Bitcoin price volatility by agreeing on a future price for Bitcoin, settled in USD. Since funds are pre-committed, the contract can only handle limited price fluctuations. If the price of Bitcoin falls below a certain threshold, the pre-committed funds may not cover the contract’s value, locking the funds inefficiently. Increasing the amount of committed funds could mitigate this, but it also means locking up more capital. Another challenge is the extensive number of signatures required for every possible contract outcome. This can become unmanageable in scenarios with many potential outcomes, like forward contracts with varied price points. A proposed solution <cit.> involves breaking down values into fractional parts, reducing the number of signatures needed for a wide range of values, but this solution has yet to be verified.
Centralization of indexers.
Bitcoin indexers are crucial for enabling off-chain computations. For example, B²nodes <cit.> are equipped with indexers to parse and index inscriptions. These indexers work by scanning the blockchain, extracting necessary data, and maintaining states. However, the reliance on centralized indexers (e.g., UniSat <cit.>) introduces several security risks. Centralized indexers can become a SPoF, susceptible to attacks that could manipulate data. For instance, a malicious indexer could provide incorrect transaction histories or balances. Additionally, the concentration of indexing power raises concerns about the potential for Sybil attacks. To mitigate these risks, some works <cit.> utilize decentralized indexing solutions, but they are still in their infancy.
Congested mempool. The popularity of Bitcoin inscriptions has introduced significant congestion to the mempool. Additionally, the proposal of BitVM facilitates more complex smart contracts on Bitcoin, which further increases network congestion. The mempool serves as a staging area for valid but unconfirmed transactions, and its size reflects the network’s congestion level <cit.>. A congested mempool indicates a high volume of transactions waiting to be processed, which inherently leads to longer confirmation times and increased transaction fees. For miners, this congestion can be beneficial as it increases the fees they earn per block mined. However, for regular users, this translates into higher costs and delays, making Bitcoin less efficient for everyday transactions. Therefore, optimizing Bitcoin’s network efficiency while preserving its programming functionality is a critical challenge. Researchers could investigate more efficient transaction validation methods or develop new smart contract frameworks that minimize on-chain data storage to manage mempool congestion.
§ CONCLUSION
We present the first systematic study of Bitcoin L2 solutions by collecting and evaluating existing projects. We further propose and apply a security reference framework to assess the identified design patterns within each selected project. We conclude that L2 solutions bring new functionality to Bitcoin but also introduce increased complexity and additional attack vectors.
§ TECHNICAL COMPONENTS
§.§ UTXO Model
An Unspent Transaction Output (UTXO) represents a specific amount of Bitcoin that has been received but not yet spent by the owner. Essentially, it is the cryptocurrency equivalent of cash in hand, where each UTXO can be thought of as an individual bill or coin.
A UTXO represents a discrete amount of Bitcoin that remains constant until it is spent. The association with a Bitcoin address denotes ownership and the right to spend, while the transaction ID provides a traceable history, linking the UTXO back to its origin in the blockchain.
Working mechanism. The process of creating and spending UTXOs is fundamental to how Bitcoin transactions are structured. The model allows miners to verify transactions without tracking account balances: instead, miners check whether the UTXOs being spent are valid and unspent. A minimal code sketch follows the list below.
* Creation of UTXOs: UTXOs are generated through Bitcoin transactions. When a sender A sends an amount x BTC to a recipient B, this amount x becomes a UTXO associated with B’s address. Mathematically, if the transaction T is made, then:
𝖴_𝖡 = T_𝖠 → 𝖡(x),
where 𝖴_𝖡 is the new UTXO for recipient B.
* Spending UTXOs: To create a new transaction, a sender's wallet selects (via certain coin selection algorithms <cit.>) one or more UTXOs as inputs to cover the transaction amount y. If the total value of the selected UTXOs (∑𝖴_𝗂) exceeds the amount y, the excess amount z = ∑𝖴_𝗂 - y is sent back to the sender as a new UTXO, often referred to as "change". This can be expressed as:
𝖭𝖾𝗐 𝖴_𝖼𝗁𝖺𝗇𝗀𝖾 = T_𝖡 → 𝖡(z),
where T_𝖡 → 𝖡 is the transaction from the sender back to themselves.
* Transaction inputs & outputs: A Bitcoin transaction consists of inputs and outputs, where inputs I are UTXOs being spent, and outputs O create new UTXOs for the recipients. If a transaction T consumes n inputs to produce m outputs, it can be represented as:
T(I_1, I_2, …, I_n) → (O_1, O_2, …, O_m),
where I_n are the UTXOs used as inputs and O_m are the new UTXOs created. This cycle of consuming old UTXOs to create new ones continues with each transaction.
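The spend-and-change logic can be sketched as follows in Python, using a greedy largest-first coin selection and ignoring fees; real wallets use more elaborate selection algorithms, and the addresses and txids below are placeholders.

def select_utxos(utxos, amount):
    # utxos: list of (txid, index, value); greedy largest-first selection.
    chosen, total = [], 0
    for u in sorted(utxos, key=lambda u: u[2], reverse=True):
        if total >= amount:
            break
        chosen.append(u)
        total += u[2]
    if total < amount:
        raise ValueError("insufficient funds")
    return chosen, total

def build_tx(utxos, amount, recipient, change_addr):
    inputs, total = select_utxos(utxos, amount)
    outputs = [(recipient, amount)]
    change = total - amount
    if change > 0:
        outputs.append((change_addr, change))   # new "change" UTXO back to the sender
    return {"inputs": inputs, "outputs": outputs}

wallet = [("fd9...b33a", 0, 30), ("ab1...c99d", 1, 15)]
print(build_tx(wallet, 40, "B-address", "A-change-address"))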
§.§ Merkle Tree
A Merkle tree, also known as a hash tree, is a cryptographic concept employed in Bitcoin to ensure data integrity. It organizes data into a binary tree structure, where each leaf node represents the hash of transaction data, and each non-leaf node is the hash of its immediate child nodes.
Cryptographic hash functions. SHA-256 (Secure Hash Algorithm 256-bit) <cit.> is the hash function used to generate transaction hashes and block hashes in Bitcoin. It produces a 256-bit (32-byte) hash value, typically rendered as a hexadecimal number, 64 digits long. In addition to SHA-256, Bitcoin uses the RIPEMD-160 <cit.> hash function as part of the process to create Bitcoin addresses. After a public key is generated from a private key, it is hashed using SHA-256, and then the result is hashed again using RIPEMD-160. This produces a shorter hash used, along with a network byte and a checksum, to form the Bitcoin address.
Binding and hiding. The binding property of a Merkle tree is rooted in the cryptographic hash functions used to generate the tree. Specifically, altering any single transaction T_i in the block will change its corresponding hash 𝖧(T_i), thereby altering the hashes of all parent nodes up to the Merkle root R. This property can be expressed as:
R = 𝖧(𝖧(… 𝖧(𝖧(T_1) ∥ 𝖧(T_2)) ∥ 𝖧(𝖧(T_3) ∥ 𝖧(T_4))) …),
where 𝖧(·) represents the cryptographic hash function, and ∥ denotes concatenation. This recursive hashing ensures that the Merkle root R is a binding commitment to the exact set and order of transactions in the block.
Although the Merkle root and the associated Merkle path can be used to verify the inclusion of a transaction T_i, they do not reveal the specifics of the transaction. For example, to prove the inclusion of a transaction T_1 in the block, we provide the hash of T_1 and the necessary hashes in the Merkle path, such as 𝖧(T_2), 𝖧(𝖧(T_3) ∥ 𝖧(T_4)), and 𝖧(𝖧(𝖧(T_5) ∥ 𝖧(T_6)) ∥ 𝖧(𝖧(T_7) ∥ 𝖧(T_8))), which allows us to reconstruct the Merkle root. If the reconstructed Merkle root matches the known Merkle root R, we can confirm that T_1 is part of the block without knowing all transactions.
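The recursive hashing above can be reproduced in a few lines of Python, using double SHA-256 as Bitcoin does and duplicating the last hash whenever a level has an odd number of nodes:

import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(tx_hashes):
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # Bitcoin duplicates the odd node
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Eight placeholder transaction hashes T_1 ... T_8.
txs = [dsha256(b"T%d" % i) for i in range(1, 9)]
root = merkle_root(txs)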
§.§ Segregated Witness (SegWit)
The Bitcoin network verifies a new block approximately every 10 minutes, with each block encompassing a specific number of transactions. Consequently, the size of these blocks directly influences the number of transactions that can be confirmed within each block. SegWit, representing one of the key protocol upgrades <cit.>, was proposed to address the scalability issue by changing the way transaction data is structured and stored in blocks.
[caption=Transaction Details, basicstyle=, breaklines=true]
# Input:
- Previous tx: fd9...b33a
- Index: 0
- scriptSig: 304...8932
# Output:
- Value: 5000000000
- scriptPubKey: ... OP_CHECKSIG
——-after Segwit——-
# Input:
- Previous tx: fd9...b33a
- Index: 0
- scriptSig: (empty)
# Output:
- Value: 5000000000
- scriptPubKey: ... OP_CHECKSIG
# Witness Data:
- Input 0
- ScriptSig: 304...8932
SegWit separates signatures from the transaction data. A SegWit transaction (above listing) consists of two main components: the original transaction structure without the signature (i.e., 𝖲𝖼𝗋𝗂𝗉𝗍𝖲𝗂𝗀) and a separate witness section containing the signatures and scripts. The witness information is still transmitted and stored in the blockchain but is no longer part of the transaction's txid calculation. Since the txid is now calculated without the witness data, it remains constant even if the signature data is altered (i.e., fixing transaction malleability <cit.>). In addition, SegWit introduces a new concept called block weight, a weighted combination of the block's size with and without the witness data. The maximum block weight is capped at 4 million weight units (commonly described as 4 MB), while the non-witness data remains effectively capped at 1 MB.
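As an illustration of the weight accounting, the sketch below applies the commonly cited formula weight = 3 × base size + total size; the byte counts are hypothetical.

def tx_weight(base_size: int, total_size: int) -> int:
    # base_size: serialized size without witness data (bytes)
    # total_size: serialized size including witness data (bytes)
    return 3 * base_size + total_size

def vsize(base_size: int, total_size: int) -> float:
    # "Virtual" size in vbytes, as used for fee estimation.
    return tx_weight(base_size, total_size) / 4

# A hypothetical SegWit spend: ~110 bytes without witness, ~218 with it.
print(tx_weight(110, 218), vsize(110, 218))   # 548 weight units, 137.0 vbytes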
§.§ Taproot Upgrade
The Taproot upgrade covers three BIPs: Schnorr signatures (BIP340), Taproot (BIP341), and Tapscript (BIP342). The core idea of the upgrade centers on BIP341 <cit.>, which combines the strengths of Merkelized abstract syntax trees (MAST) and Schnorr signatures by committing to a single Schnorr public key in the output that can represent both a single-public-key spend and a complex script spend. It introduces a new Taproot output (SegWit version 1) that includes a signature, a control block, and a script path. Moreover, it specifies the rules for spending the Taproot output, which can be either a key-path spend (using a single signature) or a script-path spend (using the scripts committed to in the MAST structure).
To further illustrate the working principle of Taproot, we consider a situation where a Bitcoin address is controlled by three parties: A, B, and C. We create a Taproot output address that encodes two spending conditions: (i) any two of the three parties (A, B, C) can jointly spend the funds; and (ii) if the funds are not moved for a year, party A can unilaterally spend them (a time-lock condition).
Initially, the parties aggregate their individual public keys (pk_A, pk_B, pk_C) to form a single Schnorr aggregated public key P. This aggregate key, together with the Merkle root derived from two hashed scripts, forms the Taproot output: Script1 counts the valid signatures (ensuring that at least two signatures are provided), while Script2 uses 𝖮𝖯_𝖢𝗁𝖾𝖼𝗄𝖲𝖾𝗊𝗎𝖾𝗇𝖼𝖾𝖵𝖾𝗋𝗂𝖿𝗒 to enforce the time-lock (guaranteeing that the script can only be executed after a year) and then checks the signature from A. When it is time to spend, the parties can opt for a key-path spend using P for a private transaction, or a script-path spend, revealing the chosen script and its corresponding Merkle proof to fulfill the specific condition.
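A simplified Python sketch of the script-tree commitment for this example is given below. The tagged hashes follow BIP341's pattern, but the script bytes are placeholders, the script-length prefix is omitted, and the key aggregation and final tweak are only indicated schematically rather than performed as elliptic-curve operations.

import hashlib

def tagged_hash(tag: str, data: bytes) -> bytes:
    # BIP340/341-style tagged hash: SHA256(SHA256(tag) || SHA256(tag) || data).
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + data).digest()

# Placeholder leaf scripts for the two spending conditions.
script1 = b"2-of-3 multisig over pk_A, pk_B, pk_C"
script2 = b"OP_CHECKSEQUENCEVERIFY <1 year> then checksig pk_A"

LEAF_VERSION = bytes([0xc0])
leaf1 = tagged_hash("TapLeaf", LEAF_VERSION + script1)
leaf2 = tagged_hash("TapLeaf", LEAF_VERSION + script2)

# The branch hash commits to both leaves (BIP341 sorts the pair lexicographically).
merkle_root = tagged_hash("TapBranch", b"".join(sorted([leaf1, leaf2])))

# The output key is the aggregate key P tweaked by the script-tree root
# (shown only as a hash here; a real implementation adds tweak*G to P).
P = b"\x02" + b"\x11" * 32                      # placeholder aggregated key
tweak = tagged_hash("TapTweak", P[1:] + merkle_root)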
§ COMPENDIUM OF PROJECTS
§.§ Statistical Websites and Our Results
We source the projects from four major statistical websites that extensively catalog a large number of registered projects.
* Rootdata <cit.> provides a comprehensive ecosystem map that categorizes various projects within the Bitcoin Layer 2 space. It offers detailed listings, including projects in different stages of development, such as testnet and upcoming.
* BTCEden <cit.> compiles active and upcoming Bitcoin L2 projects. It presents data in a tabular format, highlighting the project's name, the type of L2 solution (e.g., ZK Rollup, Side Chain), its purpose, and the total capitalization.
* L2.watch <cit.> offers a detailed overview of L2 projects, including those in pre-testnet, testnet, and mainnet stages.
* EdgeIn.io[To ensure a comprehensive search on EdgeIn.io, we employed a set of targeted keywords that are pivotal to our research. We utilized the search functionality with the keyword "Bitcoin" combined with various terms related to Layer 2 technologies, such as "layer2, l2, sidechain, bitvm, client-side, rollup, state channel". This strategic approach yielded 139 companies that align with our research focus, providing a broad spectrum of L2-related projects within the Bitcoin ecosystem.] <cit.> is a platform that provides business data and knowledge for the Web3 industry, connecting users with a vast network of Web3 organizations.
Following the data collection method, we obtained an initial list of 335 projects (the second column in Table <ref>). We then conducted a deduplication process, refining our dataset to a list of 126 unique Bitcoin L2 projects. This dataset was divided according to the projects’ developmental and fundraising stages (illustrated in Fig. <ref>).
In the Mainnet Stage, there were 33 projects, of which 18 received funding and 25 had complete documentation. The Testnet Stage included 25 projects, with 11 securing funding and 14 having comprehensive documentation. In the Pre-testnet Stage, 48 projects were identified, 8 of which obtained funding, while 10 had complete documentation. Lastly, the Initiated Stage consisted of 20 projects, with 2 receiving funding and only 1 having complete documentation.
While applying our second round of filtering process, we finally reported 40 projects by removing projects that lacked sufficient documentation, community engagement, or a clear path to Mainnet deployment.
(RQ1) Finding 1 (State-of-the-art projects): Mainnet Stage (33 projects): Among these, 18 projects received funding, and 25 had complete documentation. Testnet Stage (25 projects): Within this group, 11 projects were funded, and 14 had comprehensive documentation. Pre-testnet Stage (48 projects): 8 projects obtained funding, and 10 had complete documentation. Initiated Stage (20 projects): 2 projects obtained funding, and 1 project had complete documentation.
§.§ BitVM-based Solutions
(1) Bitlayer: Building on the foundation of BitVM (also using NAND gates, see Fig. <ref>), BitLayer <cit.> extends the Turing completeness of smart contracts on Bitcoin by introducing enhanced architectural components. Specifically, BitLayer incorporates a more sophisticated state management system that allows for the execution of complex logic without altering the core Bitcoin protocol. This is achieved through an implementation of Taproot and Schnorr signatures, which enables more efficient execution of multi-party contracts and off-chain computations.
(2) ZKByte <cit.> builds upon the principles of BitVM by integrating ZKPs into the Bitcoin L2. It introduces a ZKP-enabled validator that verifies the state transitions of the blockchain without exposing sensitive data. This validator leverages the UTXO model to track and manage states across the main network and L2 network. Additionally, ZKByte incorporates a trusted oracle system, which validates the correctness of UTXO inputs and outputs and ensures that script execution adheres to the L2 protocol.
(3) Bitstake <cit.> leverages the BitVM for implementing PoS protocols. Embedding Permissioned Optimistic Bridge, Bitstake enables dynamic participation through staking, where stakeholders can actively engage in verifying and managing transactions over the bridge. Bitstake also enhances BitVM's security model by implementing a challengeable operator environment. In this setup, validators can dispute potentially fraudulent transactions within predefined challenge periods.
(4) Citrea <cit.> extends the functionality of BitVM by implementing a ZK Rollup architecture on the Bitcoin network. Technically, Citrea has introduced the concept of execution slices, where thousands of transactions are batch-processed and validated on the Bitcoin mainchain using a compact ZKP. Additionally, Citrea introduces a bi-directional anchoring mechanism that allows seamless interaction between Bitcoin as a settlement layer and its L2 environment, where more complex smart contracts can be executed.
(5) SatoshiVM <cit.> advances the BitVM framework by adopting the Bristol format for its logical gate circuit structure, which simplifies the circuit’s complexity. While BitVM requires multiple rounds of interaction to identify specific data discrepancies, SatoshiVM introduces a more streamlined approach with a single round of non-interactive on-chain verification. This method allows validators to observe commitment values and verify the corresponding off-chain data without continuous interaction. If any discrepancies are detected, validators can immediately identify these errors and penalize the proposer by transferring their Bitcoin UTXO.
§.§ Rollups
∙ Zero-knowledge rollups
(1) B² Network <cit.> has implemented a hybrid model resembling a blend of zk and optimistic features. This network architecture is structured into two distinct layers: the rollup layer and the data availability layer. The Rollup layer employs the zkEVM system, responsible for sorting, packaging transactions, and generating zero-knowledge proofs. Meanwhile, the DA layer offers a decentralized storage solution, where off-chain storage nodes conduct zk validations and inscribe Rollup data into Bitcoin’s Ordinals script. Given that Bitcoin L1 lacks native smart contract validation, B² Network has introduced a fraud-proof challenge mechanism akin to Optimistic Rollups. This mechanism allows challengers to challenge the commitment to zkp verification within a specified period. If a challenge is upheld, the Rollup reverts the transactions, and the challenger is rewarded with assets locked by the node. Conversely, if no challenge occurs or the challenge fails, the Rollup receives final confirmation.
(2) Bison <cit.> employs the ZK-STARK rollup on Bitcoin, conceptualized to operate as a sovereign rollup. Bison introduces a Bison OS system composed of the sequencer and the prover. The Sequencer is responsible for collecting and ordering user transactions, while the Prover leverages STARK technology to generate ZKPs. A key improvement in Bison’s approach is its support for client-side verification, where users can directly download and verify ZK proofs independently.
(3) Tuna Chain <cit.> leverages zk-Rollups to bundle multiple transactions into single batches that are validated on the Bitcoin main network. It employs ZK proofs for off-chain computations and uses Taproot and Bitcoin Script to verify contracts on-chain without altering Bitcoin's consensus rules. Additionally, it uses native BTC as gas for EVM transactions, integrating Bitcoin's value anchoring with Ethereum's programmability to enable application development.
(4) BL2 <cit.> builds on zk rollup technology to enhance Bitcoin’s scalability, implementing a dual-layer architecture. The Rollup Layer processes transactions off-chain using a zkEVM system, which aggregates these transactions and generates ZKPs for validation. Unlike traditional rollups, BL2 introduces the Celestia Data Availability Layer, where batch data and zk proofs are stored across decentralized nodes. This setup allows zk-proof verifiers and Bitcoin committers within the BL2 network to validate transactions before committing them to the Bitcoin blockchain.
(5) Sovryn <cit.> introduces an architecture that combines BOB (Bitcoin Optimistic Bridge) with zk-rollup technology through BitcoinOS. BOB enables the seamless interaction between Bitcoin and Sovryn’s sidechain, operating on Rootstock by locking BTC on the Bitcoin main chain and releasing RBTC on the sidechain. Sovryn leverages zk-rollups to aggregate and process transactions off-chain. These transactions are then compressed into zk-proofs and periodically settled on the Bitcoin mainnet.
(6) Merlin Chain: Building on the zk-rollup foundation, Merlin Chain <cit.> compresses multiple transaction proofs into compact batches that are submitted to the Bitcoin network as a single transaction. Beyond the standard zk-rollup process, Merlin Chain integrates a decentralized oracle network, which serves to validate off-chain data and ensure its accuracy before inclusion in the rollup.
(7) LumiBit <cit.> utilizes the ZK scheme of Halo2 to explore a L2 scaling solution for Bitcoin. As a ZK-rollup, LumiBit leverages Bitcoin as its Data Availability layer. ZK proofs are inscribed on the Bitcoin network and verified by open-source clients, allowing users to access and verify the latest off-chain states. Unlike other ZK-rollups, LumiBit's prime feature lies in the distinctive use of Halo2 scheme and KZG commitments to reduce verification costs. Additionally, LumiBit is building a Type 2 ZK-EVM with its universal circuit design, ensuring compatibility with the Ethereum ecosystem. This PoS-based security model requires participants to stake BIOP tokens, which incentivizes proper block generation and validation.
∙ Optimistic rollups
(1) Biop <cit.> integrates the Optimistic Rollup protocol with a PoS consensus algorithm, creating a system known as BiopOS. In this setup, Sequencers are tasked with aggregating and packaging transactions into blocks, which are then submitted to the Bitcoin mainnet for verification. Validators are responsible for monitoring the Sequencers’ actions and can submit fault proofs if discrepancies are detected.
(2) Rollux <cit.> is a L2 scaling protocol built on Bitcoin and Syscoin, leveraging Optimism Bedrock to provide EVM equivalence. It uses optimistic rollups, batching transactions off-chain, and then submitting them to Syscoin for finality. Key components include the Canonical Transaction Chain (CTC) for transaction ordering, the State Commitment Chain (SCC) for state root proposals, and Bridge contracts for L1-L2 communication. The current system relies on a centralized sequencer but plans to decentralize this role.
(3) BOB <cit.> employs EVM to facilitate smart contract execution, leveraging Optimistic Rollup to batch process transactions off-chain before submitting them to Ethereum for final settlement. BOB’s integration of Bitcoin’s Rust libraries via the RISC Zero zkVM allows for off-chain execution of Rust programs. These off-chain computations are then verified through zk-proofs, which are validated within EVM contracts.
(4) BeL2 <cit.> integrates Bitcoin with Elastos SmartWeb technology to enhance scalability and programmability. Utilizing optimistic rollup technology, BeL2 batches transactions off-chain and periodically submits them to the Bitcoin mainnet, leveraging Elastos’ merged mining for security. Additionally, a relayer mechanism is used for fraud prevention, staking deposits as collateral.
(5) Hacash.com <cit.> is designed to support real-time settlement using a multi-layered architecture for Bitcoin. It includes three layers: Layer 1 features ASIC-resistant mining and support for multi-signature transactions; Layer 2 operates as a channel chain settlement network enabling high transaction throughput by allowing multiple off-chain transactions with only final balances submitted on-chain; and Layer 3 serves as an application ecosystem scaling layer, supporting rollup technology and multi-chain protocols.
(6) Rollkit <cit.> is a modular framework for sovereign rollups that utilizes Bitcoin for data availability. Rollup sequencer nodes aggregate transactions into blocks and post them to a data availability layer like Celestia for ordering and finalization. Full nodes execute and verify these blocks, and propagate fraud proofs in optimistic rollup setups, while light clients verify proofs and authenticate state queries. Rollkit integrates with Cosmos SDK via an enhanced Application BlockChain Interface and utilizes Taproot transactions on Bitcoin for DA.
§.§ Sidechains
∙ Federated consensus sidechains
(1) Liquid Network <cit.> introduces a Federated Sidechain Consensus (FSC) model, which uses a consortium of blocksigners and watchmen instead of Bitcoin’s traditional PoW. The network employs a two-way peg system, allowing the transfer of Bitcoin to and from the sidechain, where Bitcoin is converted into Liquid Bitcoin. Additionally, it uses Confidential Transactions to encrypt transaction details and a one-minute block time to reduce confirmation delays.
∙ Merged-Mined sidechains
(1) Rootstock (RSK) <cit.> is designed to bring Ethereum-compatible smart contracts to the Bitcoin ecosystem. It uses RBTC, a token pegged 1:1 to BTC, generated through merged mining. RSK employs a two-way peg system facilitated by Powpeg, an autonomous multi-signature management bridge that involves reputable companies like Xapo, Bitpay, and BitGo. This bridge allows seamless transfer of BTC between the main chain and the RSK sidechain.
∙ PoX-based sidechains
(1) MAP Protocol <cit.> operates similarly to PoX-based sidechains as it integrates a mechanism for verifying cross-chain transactions. However, instead of relying on traditional PoX, it utilizes a combination of ZK-proof technology and light clients to achieve decentralized interoperability across multiple blockchain networks.
(2) Stacks <cit.> extends Bitcoin’s functionality by enabling Proof of Transfer while leveraging Bitcoin’s security. It ties the security of Stacks directly to Bitcoin by anchoring each Stack block to a Bitcoin block. Miners are rewarded in Stacks’ native token for their BTC commitments.
(3) Mintlayer <cit.> utilizes a PoS consensus mechanism, where participants stake ML tokens to become validators and block producers. Each Mintlayer block references a Bitcoin block, using the Bitcoin block hashes as a source of randomness to select block producers. The network also employs a Dynamic Slot Allotment system to randomly select block signers who validate and sign new blocks.
(4) BounceBit <cit.>’s PoS-based sidechain mechanism involves validators staking both BTC and BounceBit’s native token BB. This staking process secures the network and enables validators to produce new blocks. Users can convert BTC to BBTC and stake it on the BounceBit platform to earn rewards. The system integrates both CeFi and DeFi elements, allowing for funding rate arbitrage and the creation of on-chain certificates for restaking and mining.
(5) Libre <cit.> operates using a Delegated proof-of-stake (DPoS) consensus mechanism. It integrates with Bitcoin via non-custodial wrapping (pegged assets) and the Lightning Network, providing seamless on-and-off ramps for Bitcoin users. This design allows Libre to process over 4,000 transactions per second with zero transaction fees.
(6) BitReXe <cit.> operates as a PoS-based sidechain. It uses a unique approach by incorporating the PREDA programming model to scale out general smart contracts within multi-VM blockchain systems. Validators on the BitReXe network stake RXBTC (a token pegged to Bitcoin) to validate transactions and produce new blocks.
(7) BEVM <cit.> is an EVM-compatible sidechain solution that integrates the Taproot consensus mechanism. The Taproot consensus is a cornerstone of BEVM's architecture, combining Musig2 for secure multi-signature schemes, a BFT PoS network of Bitcoin SPVs for decentralized verification, and the Signal Protocol for encrypted, secure communication between nodes. Additionally, by using native BTC as gas, BEVM lowers barriers for Bitcoin holders to participate in L2 transactions, while its full EVM compatibility invites developers to leverage existing tools and languages.
§.§ Client Side + UTXO
(1) RGB <cit.> leverages CSV and the UTXO model to create a scalable method for issuing and managing assets on the Bitcoin network. It initiates with an asset issuer creating a new asset and generating a unique one-time seal along with a cryptographic commitment on their client. This commitment is then embedded within a transaction's UTXO, anchoring the asset to the blockchain. Recipients verify the asset's legitimacy by examining the commitment against the one-time seal. Upon transfer, the old seal is invalidated, and a new set of seal, commitment, and transaction data are recorded on the Bitcoin network, ensuring an immutable transaction history.
(2) RGB++ <cit.> combines the RGB protocol with UTXO-based public blockchains such as CKB, Cardano, and Fuel, utilizing these platforms as verification and data storage layers for RGB assets. This approach shifts data validation tasks from the user to the third-party platforms, replacing client-side validation with decentralized platform verification, provided users trust these blockchains. The protocol achieves compatibility with the original RGB through the concept of “isomorphic binding” where the extended UTXOs on chains like CKB or Cardano act as containers for RGB asset data, directly reflecting asset parameters on the blockchain. For instance, if Alice wants to transfer 30 out of her 100 RGB tokens to Bob, she generates a commitment and spends her UTXO on Bitcoin, while consuming the corresponding UTXO container on CKB, creating new containers for both Alice and Bob. This transaction is validated by the consensus mechanisms of CKB/Cardano, without Bob’s direct involvement. While this setup improves global verifiability and DeFi applications, it sacrifices privacy, requiring a trade-off between security and usability. If users prioritize privacy, they can revert to the traditional RGB mode. RGB++ also assumes trust in the reliability of the CKB/Cardano networks. Users can operate their RGB assets on these UTXO chains directly using their Bitcoin accounts, linking UTXO conditions to Bitcoin addresses, and can use “transaction folding” to reduce costs by batching multiple transfers into a single commitment.
(3) BiHelix <cit.> functions as a Bitcoin-native infrastructure that amplifies the capabilities of the Bitcoin network through several key mechanisms. It integrates the RGB protocol, which facilitates private and efficient transactions by allowing parties to reach consensus without the need for on-chain contract recording, thus keeping individual transaction histories and status data off-chain. The SLR (Security-Lighting-RGB) Protocol enhances interoperability by building upon the migration of Core Lightning's functionality to rust-lightning, thereby reinforcing the synergy between the RGB protocol and the Lightning Network. Additionally, BiHelix employs a set of socially consensus-driven nodes on the sidechain, using native economic incentives to draw in node service providers.
(4) Bitlight Labs <cit.> develops infrastructure based on the RGB protocol and deploys a suite of applications on Lightning Network, including Bitswap (asset exchange) and the Bitlight wallet. The project focuses on managing smart contracts with privacy and security through client-side validation, where data is controlled by the “state owner” rather than being publicly accessible. It operates on Bitcoin transactions, potentially using the Lightning Network's off-chain capabilities, and allows for scripting with Blockstream's formally verified Simplicity language (claimed to be Turing complete).
(5) Bool Network <cit.> is a Bitcoin Verification Layer that integrates client-side verification within its L2 solution. The protocol designs a client-side verification process where transaction validation is performed collectively by distributed key management systems without relying on a central authority. Its Dynamic Hidden Committees (DHC) act as security guardians, ensuring cross-chain message integrity, while its public blockchain, Bool Chain, serves as an EVM-compatible ledger for recording committee activities and supporting future application development. Bool Network's client-side verification is secured by Ring VRF <cit.> (where a user generates a unique pseudorandom output and a proof of its correctness using their private key and a set of public keys, i.e., the “ring”).
(6) Mercury Layer <cit.> represents an implementation of client-side verification within the Bitcoin ecosystem, focusing on statechain technology to enable efficient asset transfers. At its core, Mercury Layer operates through a system of blind co-signing and key updates, allowing for the instantaneous and cost-free transfer of Bitcoin UTXOs. The protocol emphasizes client-side verification by leveraging client software to perform all transaction operations and statechain validations, ensuring that the server remains unaware of transaction details and UTXO identities. This approach requires clients to actively verify partial signatures, manage backup transactions, and oversee key updates, thereby upholding the security of the statechain without relying on server-side control.
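Common to the protocols in this subsection is the client-side-validation pattern: a one-time seal is bound to a Bitcoin UTXO, the off-chain state is anchored on-chain only through a cryptographic commitment, and the recipient validates the history on their own client. The following Python toy is only a sketch of that bookkeeping under simplified assumptions; the names and data layout are hypothetical and do not correspond to the actual RGB or Mercury Layer APIs.

import hashlib
from dataclasses import dataclass

def commit(data: bytes) -> str:
    # stand-in for the cryptographic commitment that would be embedded in a Bitcoin transaction
    return hashlib.sha256(data).hexdigest()

@dataclass
class SealedState:
    seal_utxo: str        # "txid:vout" whose spending closes the one-time seal
    commitment: str       # commitment to the off-chain state
    closed: bool = False

def issue(seal_utxo, state):
    return SealedState(seal_utxo, commit(state)), state

def transfer(current, state, new_seal_utxo, new_state):
    # the sender proves the current state matches the on-chain commitment, then closes the seal
    assert not current.closed and commit(state) == current.commitment
    current.closed = True
    return SealedState(new_seal_utxo, commit(new_state)), new_state

# issuer creates 100 units, then moves 30 of them to a seal controlled by the recipient
sealed, state = issue("txid0:0", b"alice:100")
sealed2, state2 = transfer(sealed, state, "txid1:1", b"alice:70,bob:30")
# the recipient validates off-chain: the old seal is closed and the new commitment matches
assert sealed.closed and commit(state2) == sealed2.commitment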
§.§ State Channel (Fig.<ref>)
(1) Lightning Network <cit.> is a L2 scaling solution for Bitcoin that addresses the blockchain's limitations in transaction throughput and confirmation times by moving the majority of transactions off-chain. It operates through a network of payment channels, utilizing two primary smart contract mechanisms: the Recoverable Sequence Maturity Contract (RSMC), which facilitates secure allocation updates and dispute resolution, and HTLC, which enables the creation of timed, conditional transactions across multiple parties.
(2) OmniBOLT <cit.> extends Lightning Network's capabilities to facilitate smart asset transactions. Built on the BTC/OmniLayer network, OmniBOLT enables the circulation of OmniLayer assets through lightning channels, providing instant payments, cross-channel atomic swaps, and decentralized exchange functionalities. Its protocol suite, written in Golang, includes implementations for basic HTLC payments, atomic swaps of multiple currencies, and an automatic market maker with a liquidity pool for DEX.
(3) Lnfi Network (formerly NostrAssets) <cit.> integrates Taproot assets and Satoshis, allowing users to send and receive these assets using Nostr’s public and private keys. Leveraging Lightning Network for settlement and security, NAP ensures high transaction throughput, minimal latency, and low fees through off-chain transactions. This integration enables near-instantaneous asset transfers, crucial for applications like micro-payments and real-time trading.
(4) Ark Protocol <cit.> is designed to address the scalability limitations of Lightning Network. It allows users to send and receive payments without the need for incoming liquidity. Ark does not require the recipient to be online or interactive for the transaction to be completed. The protocol's lifting mechanism allows users to convert on-chain UTXOs into virtual UTXOs without reliance on third parties, and the redemption function allows to move funds back to the base layer.
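The HTLCs that the channel protocols above rely on reduce to one spending rule: the payee can claim the funds by revealing the preimage of a payment hash before a timeout, and the payer can reclaim them afterwards. The snippet below is a minimal Python sketch of that rule only; it is not Bitcoin Script, and the parameter names are hypothetical.

import hashlib

def htlc_can_spend(spender, preimage, current_height, payment_hash, timeout_height, payee, payer):
    """Toy hash-time-locked contract condition (illustrative only)."""
    if spender == payee and preimage is not None:
        # claim path: reveal the preimage of the payment hash before the timeout
        return hashlib.sha256(preimage).hexdigest() == payment_hash and current_height < timeout_height
    if spender == payer:
        # refund path: after the timeout the payer takes the funds back
        return current_height >= timeout_height
    return False

secret = b"invoice-secret"
h = hashlib.sha256(secret).hexdigest()
assert htlc_can_spend("bob", secret, 100, h, 144, payee="bob", payer="alice")
assert not htlc_can_spend("alice", None, 100, h, 144, payee="bob", payer="alice")
assert htlc_can_spend("alice", None, 200, h, 144, payee="bob", payer="alice")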
§.§ More
(1) Tectum <cit.> introduces SoftNotes, a L2 solution that enhances Bitcoin's scalability. SoftNotes are digital banknotes representing Bitcoin wallet ownership, facilitated by Tectum blockchain. Technically, SoftNotes operates on a bearer instrument model, allowing for off-chain transactions that are untraceable and anonymous, with cryptographic finality. Tectum also includes a hybrid mode for transactions, a smart contract-based security model for handling BTC private keys, and the capability to represent and transfer NFTs.
(2) Nubit <cit.>, a Bitcoin data availability layer, introduces a data availability sampling (DAS) solution to address the scalability challenge by allowing nodes to verify block data availability through random sampling rather than downloading entire blocks. DAS employs two protocols: the sampling protocol for verifying data availability through random chunk requests, and the decoding protocol for reconstructing the full block from verified chunks. The system involves three types of participants: validators who run the sampling protocol and sign block headers upon successful verification; full storage nodes that maintain the entire block and respond to chunk requests; and light clients who initiate sampling to ensure data availability.
(3) HyperAGI <cit.>, a Bitcoin L2 solution, focuses on decentralized, compute-intensive applications, particularly for Artificial General Intelligence (AGI) and metaverse integration. HyperAGI claims to support complex, high-demand tasks such as 3D rendering and AI computations. It achieves this by integrating a 3D pipeline-based Pinpoint protocol, which achieves computation verification in a decentralized manner. Additionally, HyperAGI employs a HyperTrust consensus algorithm that operates with the Graph Virtual Machine.
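The guarantee behind the sampling protocol described for Nubit in item (2) above is probabilistic: if a block producer withholds a fraction f of the erasure-coded chunks, each uniformly random sample misses the withheld part with probability 1-f, so s independent samples fail to detect the withholding with probability (1-f)^s. A quick back-of-envelope Python sketch (the parameter values are illustrative, not Nubit's actual ones):

def miss_probability(f, s):
    # probability that s random chunk samples all miss a withheld fraction f of the block
    return (1.0 - f) ** s

for s in (10, 30, 100):
    # with erasure coding, withholding even ~25% of chunks can make a block unrecoverable,
    # so f = 0.25 is a natural target for the adversary
    print(s, miss_probability(0.25, s))
# e.g. 100 samples leave roughly a 3e-13 chance of not catching the withholding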
§ BITCOIN IMPROVEMENT PROPOSALS (BIPS)
We reviewed all existing BIPs (cf. Table <ref>) and summarized the major ones (focusing on those in the Final stage, as of Aug. 2024) that shaped Bitcoin's basic design and development lineage.
This work references the following BIPs: 174/370/371 (PSBT), 141/144 (SegWit), and 340-343 (Taproot).
Strong asymptotic freeness of Haar unitaries in quasi-exponential
dimensional representations
Michael Magee and Mikael de la Salle
September 9, 2024
=============================================================================================
§ ABSTRACT
We prove almost sure strong asymptotic freeness of i.i.d. random unitaries
with the following law: sample a Haar unitary matrix of dimension
n and then send this unitary into an irreducible representation
of (n). The strong convergence holds as long as the irreducible
representation arises from a pair of partitions of total size at most
n^1/24-ε and is uniform in this regime.
Previously this was known for partitions of total size up to ≍log n/loglog n
by a result of Bordenave and Collins.
§ INTRODUCTION
Let (n) denote the group of complex n× n unitary matrices.
For each k,ℓ∈ there is a unitary representation
π_k,ℓ^0:(n)→((^n)^⊗ k⊗((^n)^∨)^⊗ℓ).
This representation has a non-zero invariant vector for (n) if
and only if k=ℓ, and in that case the space of invariant vectors
is understood[The space of invariant vectors is the image of [S_k] in ((^n)^⊗ k)≅(^n)^⊗ k⊗((^n)^∨)^⊗ k.
If n≥ k this is isomorphic to [S_k] and we will always
be in this regime in this paper. If n<k the dimension of the space
is still understood as the number of permutations in S_k with
longest increasing subsequence ≤ n <cit.>.]. Let π_k,ℓ be the restriction of π_k,ℓ^0 to
the orthocomplement to the invariant vectors.
Fix r∈[n]. The main theorem of this paper is the following.
Let U_1^(n),…,U_r^(n) denote i.i.d.
Haar distributed elements of (n). For any A<1/24 the
following holds almost surely. For any non-commutative *-polynomial[That is, a polynomial in r non-commuting indeterminates X_1,…,X_r
and their formal adjoints X_1^*,…,X_r^*.] p
sup_k+ℓ≤ n^A|‖π_k,ℓ(p(U_1^(n),…,U_r^(n)))‖ -‖p(x_1,…,x_r)‖|=o(1)
where x_1,…,x_r are generators of a free group _r
and the norm on the right is the one in C_^*(_r).
The analog of Theorem <ref> with (n) replaced by S_n
will appear in forthcoming work of E. Cassidy.
Theorem <ref> gives a clean statement about strong convergence
of i.i.d. Haar elements of (n) in representations of quasi-exponential
dimension in the following form:
Let U_1^(n),…,U_r^(n) denote i.i.d.
Haar distributed elements of (n). For any A<1/24
the following holds almost surely. For any non-commutative *-polynomial
p
sup_π∈(n), (π)≤exp(n^A)|‖π(p(U_1^(n),…,U_r^(n)))‖ -‖p(x_1,…,x_r)‖|=o(1).
Our proof of Theorem <ref> relies on two things:
* a recent breakthrough of Chen, Garza-Vargas, Tropp, and van Handel
<cit.> who found a new (and remarkable) approach to strong
convergence based on `differentiation with respect to n^-1'.
We add to this method in various ways in the sequel. One of the conceptual
differences is a new criterion that we find for temperedness of a
unitary representation of a free group (or more generally a group
with the rapid decay property), and therefore for strong convergence
towards the regular representation, see <ref>.
This allows us to bypass the use of Pisier's linearization argument,
which was essential in many proofs of strong convergence so far (a
notable exception is the work of Paraud and Collins-Guionnet-Parraud
<cit.>),
and replace it by considerations on random walks on free groups, see
<ref>.
* The method above is given as input rapid decay estimates for expected
values of stable characters of word maps on (n). The key feature
of these estimates is that, up to a point, they actually improve with
the complexity of the representations that we consider. This feature
is intimately related to the concept of stable commutator length
in free groups as uncovered in <cit.>.
Theorem <ref> was proved when k=1,ℓ=0 by Collins and
Male in <cit.>. This resolved a problem left open in
Haagerup and Thorbjørnsen's breakthrough work <cit.>.
Bordenave and Collins <cit.> prove Theorem <ref>
in the regime
k+ℓ≤ clog(n)/loglog(n)
for some positive c.
Theorem <ref> can be written in the following two alternate
ways.
* Let V_i^(n):=⊕_k+ℓ≤ n^Aπ_k,ℓ(U_i^(n)).
The random matrices V_i^(n) are (almost surely) strongly
asymptotically free.
* The random unitary representations of _r described by x_i↦ V_i^(n)
almost surely strongly converge to the regular representation
of _r.
The following simple case of Theorem <ref> is both new and
important.
For any A<1/24, almost surely,
for all k,ℓ∈∪{0} such that k+ℓ≤ n^A
‖∑_i=1^rπ_k,ℓ(U_i^(n))+π_k,ℓ(U_i^(n))^-1‖ =2√(2r-1)+o_n→∞(1).
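The k=1, ℓ=0 case (the defining representation) is easy to probe numerically. The NumPy sketch below samples Haar unitaries via the standard QR recipe and compares the operator norm of ∑_i U_i+U_i^* with 2√(2r-1); at n=1000 the printed norm is already close to 2√3≈3.46 for r=2. This is only an illustration of the statement, not part of any proof.

import numpy as np

def haar_unitary(n, rng):
    # QR of a complex Ginibre matrix, with the phases of R's diagonal fixed, is Haar distributed
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, upper = np.linalg.qr(z)
    d = np.diagonal(upper)
    return q * (d / np.abs(d))

rng = np.random.default_rng(0)
n, r = 1000, 2
A = sum(U + U.conj().T for U in (haar_unitary(n, rng) for _ in range(r)))
print(np.linalg.norm(A, 2), 2 * np.sqrt(2 * r - 1))   # both close to 3.46 for r = 2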
We highlight this corollary in connection with the following well
known question of Gamburd, Jakobson, and Sarnak <cit.>
— do generic in (Haar) measure (u_1,…,u_r)∈SU(2)^r
have some ϵ>0 such that for all k>0
‖∑_i=1^rπ_k(u_i)+π_k(u_i)^-1‖≤2r-ϵ?
It was proved by Bourgain and Gamburd in <cit.>
for u_i with algebraic entries that generate a dense subgroup
of SU(2). This was extended to SU(n) for general
n≥3 by Bourgain and Gamburd <cit.> —
in this case the spectral gap is uniform over all representations
π_k,ℓ with k+ℓ>0. The analogous result for all compact
simple Lie groups was obtained by Benoist and de Saxce <cit.>.
Of course, Corollary <ref> does not advance these
questions for any fixed n as it deals only with infinite sequences
of random matrices of dimension n→∞ but it does make significant
progress on a relaxation of the problem. The motif of Theorem <ref>
is that we have only a small amount of randomness of our matrices
compared to their dimensions.
The method of proof also allows us to obtain estimates for large matrix coefficients. For example, we have the following result, which relies on the ideas from <cit.> and the results of <cit.>.
Let k_n≤exp(n^1/2 (log n)^-4). For every q and every sequence P_n of non-commutative *-polynomials of degree ≤ q and with coefficients in M_k_n(), almost surely
lim_n ‖P_n(U_1^(n),…,U_r^(n))‖/‖P_n(x_1,…,x_r)‖=1.
Bordenave and Collins <cit.> proved the same theorem for q=1 and k_n≤exp(n^1/(32r+160)) instead of k_n=exp(n^1/2+o(1)); this result gives yet another proof of the result from <cit.>, which has important consequences in the theory of von Neumann algebras <cit.>. It is an intriguing question whether the exponent 1/2 is optimal or not. As observed by Pisier <cit.>, the optimal exponent is necessarily at most 2.
§.§ Acknowledgments
We give huge thanks to Ramon van Handel who generously explained the
methods of the work <cit.> to us during several enjoyable
meetings at the I.A.S. in April/May 2024. Without these meetings,
this paper would not exist.
Funding:
M. M. This material is based upon work supported by the National Science
Foundation under Grant No. DMS-1926686. This project has received
funding from the European Research Council (ERC) under the European
Union’s Horizon 2020 research and innovation programme
(grant agreement No 949143).
M. S. Research supported by the Charles Simonyi Endowment at the Institute
for Advanced Study, and the ANR project ANCG Project-ANR-19-CE40-0002.
§ PRELIMINARIES
§.§ Representation theory of (n)
For partitions λ=(λ_1,…,λ_p)⊢ k,μ=(μ_1,…,μ_q)⊢ℓ,
for n≥ p+q let s_λ,μ:(n)→ denote the character
of the representation
(π_λ,μ,V^λ,μ)
with dominant weight
(λ_1,λ_2,…,λ_p,0,0,…,0,-μ_q,-μ_q-1,…,-μ_1), where the number of zeros in the middle is n-(p+q).
Because this character interpolates for all n≥ p+q, we refer
to it as a stable character. In the paper we write |λ|=k
for the size of λ.
§.§ Representation theory of SU(n)
The irreducible unitary representations of SU(n) are closely related
to those of U(n): for all partitions λ,μ as before
with n≥ p+q, the restriction of π_λ,μ to SU(n)
is an irreducible representation; all irreducible representations
of SU(n) appear in this way; and the restrictions of π_λ,μ to
SU(n) and π_λ',μ' to SU(n) coincide if and only
if the dominant weights differ by a multiple of (1,…,1). In
other words, the irreducible unitary representations of SU(n)
are indexed by the sets of integral dominant weights of the root system
A_n-1, which can be parametrized by the set of n-tuples
of integers Λ=(Λ_1,…,Λ_n)∈^n with
Λ_1≥…≥Λ_n, modulo the subgroup (1,…,1).
Write Λ_1 for the norm on ^n/(1,…,1)
Λ_1=inf_t∈∑_i|Λ_i-t|.
It is the natural norm in the dual of the subspace of (^n,·_∞)
for which the coordinates sum to 0. In what follows, whenever Λ
belongs to the quotient ^n/((1,…,1)), we will
chose a representative (Λ_1,…,Λ_n)∈^n
such that ∑_i|Λ_i|=Λ_1. If Λ
belongs to the image of ^n, we can and will assume that (Λ_1,…,Λ_n)
has integer coordinates.
By Weyl's dimension formula, if π is an irreducible representation
of (n) with highest weight Λ,
(π)=∏_1≤ i<j≤ n(j-i+Λ_i-Λ_j)/(j-i).
Lower bounds for (π) in terms of Λ have been obtained
in <cit.>, but we will need a bound that is
more precise[For example if Λ=(k,0,…,0), our bound gives (π)≥exp(ck),
whereas <cit.> gives (π)≥(k+1)^log(n-1).] in the regime Λ_1=o(n).
There is a constant c>0 such that for every integer n≥2 and
every irreducible representation π of (n),
exp(cmin(Λ_1,n))≤(π)≤exp(Λ_1log n),
where Λ is the highest weight of π.
The upper bound is because π appears as a sub-representation
of the representation of (n) on (^n)^⊗ k⊗((^n)^∨)^⊗ℓ,
where k is the sum of the positive Λ_i's and -ℓ
is the sum of the negative Λ_i's. And (^n)^⊗ k⊗((^n)^∨)^⊗ℓ
has dimension n^k+ℓ=exp(Λ_1log n).
For the lower bound, by Weyl's dimension formula (<ref>)
we have to prove
cmin(Λ_1,n)≤∑_i<jlog(1+Λ_i-Λ_j/j-i).
We will prove this inequality for every Λ∈^n with Λ_1≥…≥Λ_n.
In that case, since the right-hand side increases if we replace Λ
by tΛ for t>1, it is enough to consider the case when
Λ_1≤ n. Finally, replacing Λ by (-Λ_n,…,-Λ_1),
we can assume that the sum of the positive entries of Λ is
at least Λ_1/2.
Let n_1=⌊n/2⌋ and n_2=⌊3n/4⌋.
Expressing that the ℓ_1 norm of (Λ_1,…,Λ_n)
is at most the ℓ_1 norm of (Λ_1-t,…,Λ_n-t)
for every t≠0, we see that Λ has at most n/2
positive entries, and at most n/2 negative entries. In
particular, we have
∑_i=1^n_1Λ_i≥1/2Λ_1,
and Λ_j≤0 for every n_2<j≤ n. If i≤ n_1
and j>n_2, we have
Λ_i-Λ_j/j-i≥Λ_i/n.
We deduce the obvious bound
∑_i<jlog(1+Λ_i-Λ_j/j-i)≥∑_i≤ n_1,n_2<jlog(1+Λ_i/n).
By the bound log(1+t)≥ tlog(2) for every 0≤ t≤1 and
the inequality 0≤Λ_i≤Λ_1≤ n, this is at
least
log(2)∑_1≤ i≤ n_1∑_n_2<j≤ nΛ_i/n≥log(2)/4∑_i=1^n_1Λ_i.
By (<ref>), we obtain
(<ref>) with c=log(2)/8.
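As a numerical sanity check of the lemma (and of Weyl's dimension formula above), the following Python sketch evaluates the dimension for random dominant weights with n=12, centres each weight at a median so that ∑_i|Λ_i| realises the quotient norm, and verifies both bounds with the constant c=log(2)/8 obtained at the end of the proof. It uses only the standard library and is an illustration, not part of the argument.

from fractions import Fraction
from functools import reduce
from operator import mul
from math import log
import random

def weyl_dim(w):
    # Weyl's dimension formula for a non-increasing integer highest weight w
    n = len(w)
    return int(reduce(mul, (Fraction(j - i + w[i] - w[j], j - i)
                            for i in range(n) for j in range(i + 1, n)), Fraction(1)))

random.seed(0)
n, c = 12, log(2) / 8
for _ in range(100):
    lam = sorted((random.randint(-4, 4) for _ in range(n)), reverse=True)
    t = sorted(lam)[n // 2]            # a median minimises sum_i |lam_i - t|
    lam = [x - t for x in lam]         # representative realising the quotient norm
    norm1 = sum(abs(x) for x in lam)
    d = weyl_dim(lam)
    # the two bounds of the lemma, compared on a log scale (tiny slack for rounding)
    assert c * min(norm1, n) <= log(d) + 1e-9 <= norm1 * log(n) + 2e-9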
For every 0<A≤1 and
every n≥ N, for every irreducible representation π of (n):
(π)<exp(cn^A)
implies
π⊂⊕_k+ℓ≤ n^Aπ_k,ℓ,
which in turn implies
(π)≤exp(n^Alog n).
§ MATRIX INTEGRAL RESULTS
Let _r denote the free group on a fixed generating set X={x_1,…,x_r}.
For w∈_r let
w:(n)^r→(n)
denote the induced word map defined by substituting in an r-tuple
of unitaries for the elements of X appearing in a reduced expression
of w. For example if w=x_1x_2x_1^-1x_2^-1 then w(u_1,u_2,…,u_r)=u_1u_2u_1^-1u_2^-1.
Let |w| denote the word length of w w.r.t. X.
Let
_n[s_λ,μ(w)]:=∫_(n)^rs_λ,μ(w(u_1,u_2,…,u_r))du_1⋯ du_r.
For an integer L≥1 let
g_L(x):=∏_c=1^L(1-c^2x^2)^⌊L/c⌋.
The input into our analysis is the following result about expected
values of stable characters of word maps, building on the works <cit.>.
Let w∈_r be not the identity
with |w|≤ q.
* There is a polynomial P_λ,μ,w∈[x]
such that for
n≥(k+ℓ)|w|,
_n[s_λ,μ(w)]=P_λ,μ,w(1/n)/g_(k+ℓ)q(1/n),
with
deg(P_λ,μ,w)≤3(k+ℓ)q(1+log((k+ℓ)q)).
* If w is not a proper power in _r,
then _n[s_λ,μ(w)] = O(n^-(k+ℓ)/3).
* If w is not a proper power in _r and μ=∅, then _n[s_λ,∅(w)] = O(n^-(k+ℓ)).
It is probable that the conclusion of Part <ref>
also holds with O(n^-(k+ℓ)), but we are unsure how to prove
this at the moment, and it just changes constants in our theorems.
Part <ref> of Theorem <ref> follows
from combining a result of <cit.> with one of Duncan
and Howie <cit.> as explained in <ref>.
Theorem <ref> will be proved in the later part
of the paper (<ref>-<ref>).
In <ref>-<ref>
we prove Theorem <ref> from Theorem <ref>.
§ MARKOV BROTHERS INEQUALITY
We use the following fundamental inequality by Andrey and Vladimir
Markov, see <cit.> for references. If P
is a degree ≤ D polynomial in one variable, then for every integer
k,
sup_[-1,1]|P^(k)|≤D^2(D^2-1)…(D^2-(k-1)^2)/(2k-1)!!sup_[-1,1]|P|
where (2k-1)!!=1·3…(2k-1). By an affine change of variable,
for a general interval (<ref>) becomes:
sup_[a,b]|P^(k)|≤2^k/(b-a)^kD^2(D^2-1)…(D^2-(k-1)^2)/(2k-1)!!sup_[a,b]|P|.
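The inequality is sharp on [-1,1]: equality is classically attained by the Chebyshev polynomial T_D, whose k-th derivative is maximal at the endpoints. A small NumPy check, purely illustrative:

import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

D, k = 7, 2
T = Chebyshev.basis(D)                        # T_D, with sup_{[-1,1]} |T_D| = 1
xs = np.linspace(-1.0, 1.0, 20001)
lhs = np.abs(T.deriv(k)(xs)).max()
bound = np.prod([D**2 - j**2 for j in range(k)]) / np.prod(np.arange(1, 2 * k, 2))
print(lhs, bound)                             # both equal D^2 (D^2 - 1)/3 = 784 for D = 7, k = 2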
We deduce the following, which is essentially <cit.>.
For every polynomial P of degree ≤ D
and every integer N≥ D^2
sup_[0,1/N]|P|≤1/(1-D^2/(N+1))sup_n≥ N|P(1/n)|.
Every element of the interval [0,1/N] is at distance at
most 1/(2N(N+1)) from {1/n| n≥ N}. Therefore,
by the fundamental inequality of calculus, we have
sup_[0,1/N]|P|≤sup_n≥ N|P(1/n)|+1/(2N(N+1))sup_[0,1/N]|P'|.
By the Markov brothers inequality (<ref>) with k=1,
this is less than
sup_n≥ N|P(1/n)|+1/(2N(N+1))·2ND^2sup_[0,1/N]|P|.
The lemma follows.
As a consequence,
For every polynomial P of
degree ≤ D and every integer k≤ D,
sup_[0,1/2D^2]|P^(k)|≤2^2k+1D^4k/(2k-1)!!sup_n≥ D^2|P(1/n)|.
By (<ref>) with a=0 and b=1/2D^2,
sup_[0,1/2D^2]|P^k|≤(4D^2)^kD^2(D^2-1)…(D^2-(k-1)^2)/(2k-1)!!sup_[0,1/2D^2]|P|.
The lemma follows from the bound D^2(D^2-1)…(D^2-(k-1)^2)≤ D^2k
and Lemma <ref>.
§ A CRITERION FOR STRONG CONVERGENCE
In this section, Γ will be a finitely generated group with a
fixed finite generating set and corresponding word-length. We denote by
_≤ q[Γ] the subspace of the elements of the group
algebra of Γ supported in the ball of radius q.
A function u Γ→ is tempered if
lim sup_n →∞ |u((x^*x)^n)|^1/2n≤‖λ(x)‖
for every x ∈[Γ]. Here λ is the left-regular representation of Γ on ℓ_2(Γ) and the norm is the operator norm.
A function u Γ→ is completely tempered
if
lim sup_n →∞ |(Tr⊗ u)((x^*x)^n)|^1/2n≤‖(id⊗λ)(x)‖
for every integer k ≥
1 and every x ∈ M_k()⊗[Γ], where Tr is the trace on M_k().
For example, if u(γ) = ⟨π(γ)ξ,ξ⟩ is a matrix coefficient of a unitary representation of Γ, then u is tempered if and only if it is completely tempered, if and only if the cyclic representation generated by ξ is weakly-contained in the left-regular representation. So Definition <ref> can be seen as an adaptation to functions that are not related to representations of the classical notion of tempered representation.
This section is devoted to the proof of the following criterion for strong convergence in probability of a sequence of random representations.
Let π_n be a sequence of random unitary representations of finite nonrandom dimension, u_n Γ→ be functions and ε_n>0. Assume that
* u_n is tempered and there is a polynomial P_n such that for every q and every x ∈_≤ q[Γ], |u_n(x)| ≤ P_n(q) x_C^*(Γ),
* |(π_n(x)) - u_n(x)| ≤ε_n exp(q/log(2+q)^2) x_C^*(Γ) for every q and every x ∈_≤ q[Γ],
* lim_n ε_n = 0.
Then for every y ∈[Γ], and every δ>0,
lim_n ( π_n(y)≥λ(y)+δ) = 0.
If C^*_λ(Γ) has a unique trace, then the conclusion becomes that
lim_n ( | π_n(y) - λ(y)| ≥δ) = 0.
The choice of w(q) = exp(q/log(2+q)^2) is arbitrary but convenient for our applications; the crucial property that is used is that ∑_q log w(q)/(1+q^2)<∞, reminiscent of the Beurling-Malliavin Theorem. It could be replaced by exp(q^2 u_q) for an arbitrary decreasing and summable sequence (u_q)_q ≥ 0.
Proposition <ref> is a variant with high derivatives of the criterion for strong convergence that appeared implicitly, for polynomial w, in <cit.> with u_n(γ) = 1_γ=1, and in <cit.> for general u_n.
The condition on the existence of P_n is probably not needed, but will be trivially satisfied in the applications. It allows us to use standard results on distributions rather than ad-hoc proofs.
We have the following variant for matrix coefficients.
Let π_n,d_n,u_n,ε_n be as in Proposition <ref>. Assume moreover that u_n is completely tempered. Let k_n be a sequence of integers such that lim_n ε_n k_n=0.
Then for every integer q, δ>0 and every sequence y_n ∈_≤ q[Γ] ⊗ M_k_n() with y_n_C^*(Γ)⊗ M_k_n≤ 1,
lim_n ( π_n(y_n)≥λ(y_n)+δ) = 0.
If C^*_λ(Γ) has a unique trace and is exact, then the conclusion becomes that
lim_n ( | π_n(y_n) - λ(y_n)| ≥δ) = 0.
We first collect a few ingredients that we need for the proof.
§.§ Bump functions with small Fourier coefficient
A nonzero compactly supported continuous function cannot have a Fourier transform that decays too fast at infinity, as it has to satisfy ∫log |f̂(t)|/(1+t^2) dt > -∞. In other words, the condition ∫_0^∞φ(t)/(1+t^2)dt <∞ on a function φ:^+ →^+ is a necessary condition for the existence of a nonzero compactly supported continuous function f such that |f̂(t)| ≤exp(-φ(|t|)) for every t. The Beurling-Malliavin theorem asserts that this is often also a sufficient condition, see for example <cit.>. The following elementary fact is an illustration of the same phenomenon. The small nuance compared to the Beurling-Malliavin theorem is that we need that f be nonnegative. The same proof works for φ(t) = t/(log(2+|t|))^1+ε replaced by any continuous function [0,∞) → [0,∞) such that φ(t)/t^2 is non-increasing and integrable on [1,∞).
For every ε>0, there is a real
number M>0 and an even C^∞ function f→
that is strictly positive on (-1,1), zero outside of (-1,1), and such
that
|f̂(t)|≤exp(-M|t|/(log(2+|t|))^1+ε).
The proof is a standard construction of a smooth bump function as
infinite convolution product of indicator functions <cit.>.
Define a_j=c/(j(log(2+j))^1+ε), where c>0
is chosen such that ∑_j≥1a_j=1. Let (X_j)_j≥1
be a sequence of independent random variables, all uniform in [-1,1].
Let μ be the law of ∑_ja_jX_j. It is a measure with
full support in [-∑_ja_j,∑_ja_j]=[-1,1]. Then
μ̂(t)=𝔼exp(-it∑ a_jX_j)=∏_j≥1sin(ta_j)/(ta_j).
The inequality
|μ̂(t)|≤∏_j≤ t/(log(2+t))^1+ε1/(|t|a_j)≤exp(-M|t|/(log(2+t))^1+ε)
is a computation. It implies that μ is absolutely continuous
with respect to the Lebesgue measure and that its Radon-Nikodym derivative,
which is our f, is C^∞. f does not vanish on the interval
(-1,1) because f is a log-concave function as a limit of convolutions
of log-concave functions.
In this lemma, we have tried to optimize the rate of decay of the
Fourier transform of f, because this is what allows to take A
the largest in Theorem 1.1. But this is reflected by the fact that
f(t) is extremely small when t<1 is close to 1, something
like exp(-exp(O(1/1-t))), which gives rather bad quantitative
estimates for the speed of the convergence. For smaller values of
K≪ n^A, it would better to use functions with slower decay
of f̂.
We can periodize the above function:
For every 0<α<π, there is a nonnegative
real-valued even function φ_α∈ C^∞(ℝ/2πℤ) that is strictly positive on (-α,α)+2πℤ, zero outside of (-α,α)+2πℤ,
with support equal to [-α,α]+2πℤ, and such that
|φ̂_α(k)|≤exp(-M|k|α/(log(2+|k|α))^1+ε)
for every k∈ℤ.
Set φ_α(t+2πℤ)=f(t/α) for every t∈[-π,π].
§.§ Sobolev algebras in the full group C^*-algebra
Let (w(q))_q∈ be a non-decreasing sequence of positive real
numbers satisfying w(q_1+q_2)≤ w(q_1)w(q_2). Set
𝒮_w(Γ)={ x∈ C^*(Γ)| x=∑_qx_q,x_q∈_≤ q[Γ],∑_qw(q)x_q_C^*(Γ)<∞}.
It is a Banach algebra for the norm
x_w:=inf{∑_qw(q)x_q_C^*(Γ)| x=∑_qx_q,x_q∈_≤ q[Γ]}.
For w(q)=(1+q)^i we denote the corresponding space 𝒮_i(Γ).
We have the following elementary fact:
A function u Γ→ extends by linearity to a linear map 𝒮_w(Γ)→ of norm ≤ N if and only if for every q and every x ∈_≤ q[Γ],
|u(x):=∑_γ x(γ) u(γ)| ≤ N w(q) x_C^*(Γ).
The next lemma is useful and suggests the terminology of Sobolev algebras for the algebras 𝒮_i(Γ).
Let y=y^*∈_≤ q[Γ] and f:[-y_C^*(Γ),y_C^*(Γ)]→ be a continuous function. Let w be as above. Assume that the function φ(θ) = f(y_C^*(Γ)cosθ) satisfies ∑_n ∈ w(q |n|) |φ̂(n)|<∞. Then f(y)∈𝒮_w(Γ) and
f(y)_w ≤∑_n ∈ w(q |n|) |φ̂(n)|.
In particular, if f is C^i+1, then f(y) ∈𝒮_i(Γ) with norm ≤ C_i (1+y_C^*(Γ))^i+1 (1+q)^i f_C^i+1.
In the proof, we write ‖y‖ for ‖y‖_C^*(Γ). The assumption implies that the Fourier coefficients of φ are absolutely summable, which gives rise to a norm-converging expansion
f(y)=∑_n ∈φ̂(n)T_n(y/‖y‖)
where T_n is the n-th Chebyshev polynomial T_n(cosθ)=cos(nθ). So T_n(y/‖y‖) belongs to _≤ q|n|[Γ], has norm ≤ 1 in C^*(Γ)
and
∑_nw(q|n|)|φ̂(n)|‖T_n(y/‖y‖)‖_C^*(Γ)≤∑_nw(q|n|)|φ̂(n)|.
The proves the first part of the lemma. For the second part, if f is C^i+1, then so is φ, and by the Cauchy-Schwarz inequality
∑_n (1+q|n|)^i |φ̂(n)| ≤ (∑_n(1+q|n|)^-2)^1/2(∑_n(1+q|n|)^2i+2|φ̂(n)|^2)^1/2,
which is finite.
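The expansion used in this proof is easy to test numerically: for a self-adjoint matrix Y (playing the role of y) and a smooth f, the partial sums ∑_n φ̂(n)T_n(Y/‖Y‖), with φ(θ)=f(‖Y‖cosθ), converge rapidly to f(Y). The following NumPy sketch is purely illustrative and makes no claim beyond this classical fact.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 60))
Y = (A + A.T) / 2                        # a stand-in self-adjoint element
r = np.linalg.norm(Y, 2)
f = lambda t: np.exp(-t)

# Fourier coefficients of phi(theta) = f(r cos(theta)) are the Chebyshev coefficients of f
M = 512
theta = 2 * np.pi * np.arange(M) / M
phihat = np.fft.fft(f(r * np.cos(theta))).real / M

X = Y / r
T_prev, T_cur = np.eye(len(Y)), X.copy()
S = phihat[0] * T_prev + 2 * phihat[1] * T_cur
for n in range(2, 60):
    T_prev, T_cur = T_cur, 2 * X @ T_cur - T_prev          # three-term Chebyshev recurrence
    S = S + 2 * phihat[n] * T_cur

w, V = np.linalg.eigh(Y)
ref = V @ np.diag(f(w)) @ V.T                              # exact f(Y) by spectral calculus
print(np.linalg.norm(S - ref, 2) / np.linalg.norm(ref, 2)) # tiny relative truncation error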
Let u Γ→. Assume that u extends to a continuous linear map u𝒮_i(Γ) →. Then u is tempered if and only if there is n such that for every selfadjoint x ∈𝒮_i(Γ) ∩ker λ, u(x^n)=0. In that case, this holds for all n≥ i+2.
Furthermore, u is completely tempered if and only if there is n such
that for every k and every selfadjoint x ∈ M_k()⊗
(𝒮_i(Γ) ∩ker λ), (Tr⊗
u)(x^n)=0. In that case, this holds for all n≥ i+2.
We prove the equivalence for temperedness, the proof for complete temperedness is identical.
We start with a preliminary observation. Lemma <ref> tells us that for every self-adjoint y ∈_≤ q(Γ), the function f ∈ C^∞() ↦ u(f(y)) is a compactly supported distribution of order ≤ i+1, and the largest symmetric interval containing its support is [-lim sup|u(y^k)|^1/k,lim sup|u(y^k)|^1/k], see <cit.>).
Given this observation, the if direction is direct. Let ε>0, and pick f to be a C^∞ function equal to 0 on the interval [-λ(y)^2,λ(y)^2] and to 1 outside of the ε-neighbourhood of it. For k large enough (so that t↦ |t|^k/n is C^i+2), x=(y^*y)^k/n f(y^*y) belongs to 𝒮_i(Γ) ∩ker λ. As a consequence, u(x^n)=0 and therefore
u( (y^*y)^k) = u( (y^*y)^k(1-f(y^*y)^n)).
But the C^i+1 norm of t↦ t^k(1-f(t)^n) is ≤ C(i,ε,y) (λ(y)^2+ε)^k, which implies that
lim sup_k |u( (y^*y)^k)|^1/2k≤√(λ(y)^2+ε).
This proves that u is tempered.
For the converse, consider a self-adjoint x ∈𝒮_i(Γ)∩ker λ. Decompose x= ∑_q x_q as in the definition of 𝒮_i(Γ). We can moreover assume that each x_q is self-adjoint. Let y_q = ∑_s≤ q x_s ∈_≤ q[Γ]. Then we have
λ(y_q) = λ(y_q-x)≤∑_s>qx_s = o((1+q)^-i),
because ∑_q (1+q)^ix_q<∞. Let φ be a C^∞ function equal to 1 on [-1,1] and 0 outside of [-2,2], and let φ_q(t) = φ(t/λ(y_q)). We know from the preliminary observation that Λ_q(f) = u(f(y_q)) is a distribution with support inside [-λ(y_q),λ(y_q)] and such that |Λ_q(f) |≤ C (1+q)^i f_C^i+1() for some C independent from q. Therefore, for every n we have
u(y_q^n) = u(y_q^n φ_q(y_q))=(1+q)^i O_q →∞(λ(y_q)^n-i-1).
If n≥ i+2 this goes to 0. Therefore, we have
u(x^n) = u(lim_q y_q^n) = lim_q u(y_q^n)=0.
The proposition is proved.
§.§ Unique trace and exactness
We use the following easy consequence of the operator-space reformulation of exactness, in the line of <cit.>.
Let A be an exact C^*-algebra. For every finite family a_1,…,a_n in A and every ε>0, there is an integer d such that for every k and every b_1,…,b_n ∈ M_k(), there are norm ≤ 1 matrices u,v ∈ M_d,k() such that
(1-ε)∑_i a_i ⊗ b_i≤∑_i a_i ⊗ u b_i v^*.
If a_i belong to M_N(), the lemma is easy with d=N, because the
norm of X=∑ a_i ⊗ b_i is the supremum of ⟨ X
ξ,η⟩ over norm 1 elements of ^N ⊗^k, and
every element of ^N ⊗^k belongs to ^N ⊗ E for a
subspace E ⊂^k of dimension ≤ N. The general case
follows by Kirchberg's characterization of exact C^*-algebras, which
implies that for every ε>0, there is N such that the
operator space spanned by a_1,…,a_n is at completely bounded
distance ≤ 1+ε from a subspace of M_N()
<cit.>.
Let Γ be a finitely generated group. Assume that C^*_λ(Γ) has a unique trace. Then for every integer q and ε>0, there exist y_1,…,y_n ∈[Γ] and δ>0 such that, for every finite dimensional unitary representation π of Γ, if
max_i π(y_i)/λ(y_i)≤ 1+δ
then
inf_y ∈_≤ q[Γ]π(y)/λ(y)≥ 1-ε.
If moreover C^*_λ(Γ) is exact, the conclusion (<ref>) can be strengthened to
inf_d≥ 1
y ∈ M_d()⊗_≤ q[Γ](id⊗π)(y)/(id⊗λ)(y)≥ 1-ε.
We use the following fact: C^*_λ(Γ) has a unique trace if
and only if for every γ∈Γ∖{1} and every
ε, there is δ>0 and a finite family y_1,…,y_k
∈[Γ] such that, for every unitary representation πΓ→ A to a C^*-algebra A with a tracial state σ,
max_i π(y_i)/λ(y_i)≤ 1+δ implies
|σ(π(γ))| ≤ε. The only if direction is
direct: if σ is an arbitrary tracial state on
C^*_λ(Γ), by applying the criterion to
A=C^*_λ(Γ) and π the left-regular representation, we
obtain that |σ(λ(γ))|≤ε for every
γ≠ 1 and ε>0, that is σ is the standard
trace. The converse is proved by standard ultraproduct arguments (see
<cit.> for the details).
The first conclusion of the lemma follows quite easily from this
characterization of the unique trace property. Indeed, if y ∈[Γ], there is an integer p such that λ(y)≤
(1+ε) τ((y^*y)^p)^1/2p, and by compactness the
same p can be taken for every y ∈_≤ q[Γ]. Now
applying the criterion for every γ in the ball of radius 2qp,
we obtain that there is a finite family y_1,…,y_k ∈[Γ]
such that, for π,A,σ as above, max_i
π(y_i)/λ(y_i)≤ 1+δ implies that for
every y ∈_≤ q[Γ], τ((y^*y)^p)^1/2p≤
(1+ε)σ(π((y^*y)^p))^1/2p, and in particular
λ(y)≤ (1+ε)^2 π(y). This is
(<ref>), up to a change of ε.
Observe that exactly the same argument also shows that, without any
further assumption on C^*_λ(Γ), for any fixed d,
(<ref>) can be replaced by
inf_y ∈ M_d() ⊗_≤ q[Γ](id⊗π)(y)/(id⊗λ)(y)≥ 1-ε.
But of course, y_i and δ a priori depend on d. If
C^*_λ(Γ) is exact,
Lemma <ref> allows to remove
this dependence and to conclude the proof of the lemma.
§.§ Proof of Proposition <ref> and Proposition <ref>
Let i_n be the degree of P_n and w(q) = exp(q/log (2+q)^2). Lemma <ref> implies that u_n extends by continuity to 𝒮_i(Γ), and that on its subspace 𝒮_w(Γ),
|(π_n(x)) - u_n(x)| ≤ε_n x_w.
Without loss of generality, we can take y≠ 0 and normalize it so that y_C^*(Γ)=1.
By Lemma <ref>, the map f ↦ u_n(f(y^*y/2)), is continuous for the
C^i_n+1([-1/2,1/2]) topology, so it extends to a distribution. The assumption that u_n is tempered in turn implies
that its support in contained in
[-λ(y)^2/2,λ(y)^2/2] (see <cit.>). In particular, using λ(y)>0, we have that u_n(f(y^*y))=0 for every f that vanishes on [-λ(y)^2/2,λ(y)^2/2] <cit.>.
Let α=arccos(λ(y^*y)/2)∈[π/3,π/2). Let φ_α be the function given by
Lemma <ref>, but for ε = 1/2. It is even,
so it is of the form φ_α(θ)=f(cos(θ)) for
a continuous function f[-1,1]→ that is zero on [-λ(y)^2/2,λ(y)^2/2], strictly positive outside of this interval, and C^∞ on (-1,1). We take x=f(y^*y). By the preceding discussion, u_n(x)=0 for every n. Moreover, by Lemma <ref>, x belongs to 𝒮_w(Γ), because
∑_N≥1exp(qN/log(1+qN)^2-Mα N/(log(1+α N))^1+1/2)<∞.
So the inequality (<ref>) together with the obvious inequality π_n(x)≤π_n(x) gives
π_n(x)≤ε_n x_w,
which goes to zero by the last assumption.
Now, since f is strictly positive on [-1,1]∖ [-λ(y)^2/2,-λ(y)^2/2], there is c>0 such that f(t/2)>c if |t|≥ (λ(y)+δ)^2. In particular, we have
( π_n(y)>λ(y)+δ) ≤(π_n(x) >c) ≤1/cε_n x_w,
which goes to 0. This proves the first part of the proposition.
The second half follows from the first half of Lemma <ref>.
The first part is identical to that of Proposition <ref>, considering the distribution f↦⊗ u_n(f(y_n^* y_n)).
The second part follows, using the full statement of Lemma <ref>.
§ RANDOM WALKS ON FREE GROUPS
The content of this section will play a key rôle in
the proof of Theorem <ref> (see Lemma <ref>).
§.§ Proper powers
Let μ be a symmetric probability measure on _r, whose
support is finite, contains the identity element and generates _r.
To save space, we will call such a measure reasonable. Let
(g_n)_n≥0 be the corresponding random walk on _r,
that is g_n=s_1s_2… s_n for iid s_i distributed
as μ. Let ρ=ρ(μ) be the spectral radius, that is the
norm of λ(μ) on ℓ_2(_r). We have the easy bound
∀ g∈_r,(g_n=g)=⟨λ(μ)^nδ_e,δ_g⟩≤ρ^n.
We say that an element of _r is a proper power if it is of
the form h^d for h∈_r and d≥2.
There is a constant C=C(μ) such that
(g_n is a proper power )≤ Cn^5ρ^n.
The proof uses that the Cayley graph of _r is a tree, in the
following form:
There is a constant C and an integer a
such that for every h∈_r and every h' on the segment from
the identity to h,
(g_n=h)≤ C∑_k=0^n+a(g_n+a=h and g_k=h'}).
When μ is supported on the generators, this is clear with C=1
and a=0: any nearest-neighbor path from e to h has to pass
through h'. In the general case, the idea is that such a path has
to pass not too far from h', and forcing it to pass exactly through
h' does not cost too much, neither in time nor in probability.
Let us make this idea precise. By our assumptions on μ, there
is an integer r such that the support of μ is contained in
the ball of radius r, and another integer a_0 such that c:=inf_g∈ B(e,r)(g_a_0=g)>0.
Let T be the first hitting time of the ball of radius r around
h', so that if g_n=h then T≤ n. The lemma will follow
from the following observation: if we add 2a_0 steps to the random
walk after time T, with probability at least c^2 we will be
at h' at time T+a_0 and back to g_T at time T+2a_0,
and then we can run the random walk as before. More formally, define
another realization of the random walk as follows: let g'_i be
an independent copy of the random walk, and define
g̃_n=
g_n if n≤ T
g_Tg'_n-T if T≤ n≤ T+2a_0
g_Tg'_2a_0g_T^-1g_n-2a_0 if T+2a_0≤ n.
By the Markov property (T is a stopping time) (g̃_n)_n≥0
is distributed as the random walk with step distribution μ.
Conditionally to T, the event A={g'_2a_0=e and g'_a_0=g_T^-1h'}
happens with probability ≥ c^2, and when it happens we have
g̃_n+2a_0=g_n for every n≥ T and g̃_T+a_0=h'.
Therefore, we can bound the probability of the event
B={g̃_n+2a_0=h and h'∈{g̃_̃k̃,0≤ k≤ n+2a_0}}
as follows
(B)≥(g_n=h and A)≥ c^2(g_n=h).
By the union bound we obtain
c^2(g_n=h)≤∑_k=0^n+2a_0(g̃_n+2a_0=h and g̃_̃k̃=h'),
which is the content of the lemma with C=c^-2 and a=2a_0.
Denote by p_n the probability that g_n is a cyclically
reduced proper power, that is g_n=h^d for a cyclically reduced
h and d≥2. We shall prove the following two inequalities,
from which the proposition follows immediately: there are constants
C_1,C_2 such that
(g_n is a proper power )≤ C_2n^2max_k≤ n+2ap_n+2a-kρ^k.
p_n≤ C_1n^3ρ^n,
We start with the proof of (<ref>). Let h
be cyclically reduced and d≥2. The assumption that h is cyclically
reduced means that h belongs to the segment between 1 and h^d,
and that h^d-1 belongs to the segment between h and h^d
(tautologically if d=2). By two applications of Lemma <ref>,
we have
(g_n=h^d) ≤ C^2∑_k_1+k_2+k_3=n+2a(g_k_1=h,g_k_1+k_2=h^d-1,g_k_1+k_2+k_3=h^d)
=C^2∑_k_1+k_2+k_3=n+2a(g_k_1=h)(g_k_2=h^d-2)(g_k_3=h).
The inequality (<ref>)
allows us to bound the middle term by ρ^k_2, and by symmetry
we have
(g_k_1=h)(g_k_3=h)=(g_k_1=h)(g_k_3=h^-1)=(g_k_1=h and g_k_1+k_2=e).
Summing over all cyclically reduced words, we get
∑_h cyclically reduced(g_n=h^d)≤ C^2∑_k_1+k_2+k_3=n+2aρ^k_2(g_k_1+k_3=e).
By (<ref>), this is less
than C^2(n+2a+1)^2ρ^n+2a. Summing over all d≤ n
we obtain (<ref>).
We now move to (<ref>). Let
h∈_r, not necessarily cyclically reduced. Write h=wh̃w^-1
its reduced expression with h̃ cyclically reduced. Then
h^d=wh̃^dw^-1 is the reduced expression of h^d.
In particular w and wh̃^d are on the segment from the
identity to h^d, and by two applications of Lemma <ref>
we obtain
(g_n=h^d)≤ C^2∑_k_1+k_2+k_3≤ n+2a(g_k_1=w)(g_k_2=h̃^d)(g_k_2=w^-1).
If we first sum over all h̃ such that wh̃w^-1
is reduced , and then over w, we obtain
(g_n is a proper power)≤ C^2∑_k_1+k_2+k_3≤ n+2ap_k_2∑_w(g_k_1=w)(g_k_3=w^-1).
The formula (<ref>) follows
because ∑_w(g_k_1=w)(g_k_3=w^-1)=(g_k_1+k_3=e)≤ρ^k_1+k_3,
by (<ref>).
§.§ Random walks and tempered functions
The following is a useful criterion for a function on a group with the rapid decay property to be tempered. Recall that a group is said to have the rapid decay property if it admits a finite generating set and a polynomial P such that, for every R and every a∈_≤ R[Γ],
λ(a)≤ P(R) (∑_γ |a(γ)|^2)^1/2.
Haagerup's inequality <cit.> asserts that free groups with their standard generating sets have the rapid decay property, with P(R)=3(1+R^2).
Let Γ be a finitely generated group with the rapid decay property, and u Γ→ a function. Assume that for every reasonable probability measure μ on Γ, if (γ_n)_n ≥ 0 is the associated random walk on Γ,
lim sup_n ( |u(γ_n)|)^1/n≤ρ(μ).
Then u is tempered, and even completely tempered.
Before we start, let us observe that the assumption (<ref>) in fact holds for every symmetric finitely supported probability measure μ, even if it not reasonable in the sense that its support is not generating or does not contain the identity. Indeed, let ν be a fixed reasonable probability measure. Then for every ε>0, μ_ε = 1/1+ε(μ+εν) is a reasonable probability measure, so that (<ref>) holds for μ_ε. If _ε and denote the law of the random walk with transition μ_ε and μ respectively, the inequality μ≤ (1+ε) μ_ε gives
|u(γ_n)| ≤ (1+ε)^n _ε |u(γ_n)|,
so we obtain
lim sup_n ( |u(γ_n)|)^1/n≤ (1+ε)ρ(μ_ε) ≤ρ(μ) + ερ(ν).
This proves that (<ref>) holds for μ by taking ε→ 0.
We now move to the proof of the proposition. We claim that for every k, q ≥ 1 and every x=x^* ∈ M_k() ⊗_≤ q[Γ],
lim sup_n →∞ |⊗ u(x^n)|^1/n≤ P(q) √(k)(id⊗λ)(x).
Here P is the polynomial satisfying (<ref>), given by the assumption that Γ has the rapid decay property.
By a standard tensor-power trick, (<ref>) implies that u is completely tempered. Indeed, given an arbitrary x ∈ M_k() ⊗_≤ q[Γ], applying (<ref>) to y=(x^*x)^m (which belongs to M_k() ⊗_≤ 2m q[Γ] and satisfies (id⊗λ)(y) = (id⊗λ)(x)^2m) and raising to the power 1/2m, it implies
lim sup_n →∞ |⊗ u((x^*x)^n)|^1/2n≤ (P(2mq) √(k))^1/2m(id⊗λ)(x),
and in the m →∞ limit
lim sup_n →∞ |⊗ u((x^*x)^n)|^1/2n≤(id⊗λ)(x).
So it remains to prove (<ref>). We can normalize x so that ∑_γx(γ)=1, where · is the operator norm on M_k(). Let μ(s) = x(s); it is a symmetric finitely supported probability measure on Γ. By the triangle inequality and the inequality |(x(s_1)… x(s_n))| ≤ k x(s_1)…x(s_n), we have
|⊗ u(x^n)| ≤ k ∑_s_1,…,s_n ∈Γμ(s_1)…μ(s_n) |u(s_1… s_n)| = k |u(γ_n)|.
By the assumption in the extended form above, we obtain
lim sup_n →∞ |⊗ u(x^n)|^1/n≤ρ(μ).
By the rapid decay assumption (<ref>), this is less than
P(q) (∑_s μ(s)^2)^1/2.
By the basic fact μ(s)^2=x(s)^2 ≤(x(s)^* x(s)), we have
∑_s μ(s)^2 ≤ (⊗λ)(x^*x) ≤ k (id⊗λ)(x)^2,
which proves the claimed inequality (<ref>).
§ PROOF OF THEOREM <REF>
FROM THEOREM <REF>
Let K≥1 be an integer. Most of the objects will depend on K
but our notation will not reflect this. The only exception is the
letter C, which will always denote a constant that does not depend
on anything, but that can change from one line to the next.
The aim of this section is to explain why Theorem <ref>
implies the strong convergence result from Theorem <ref>.
Let
σ_n:=⊕_|λ|+|μ|=Kπ_λ,μ.
It is a sub-representation of ⊕_k=0^Kπ_k,K-k^0,
therefore
dim(σ_n)≤(K+1)n^K.
Recall the definition of g_L from <ref>.
Theorem <ref> immediately implies the following
result about σ_n. Throughout this section and as in <ref>, 𝔼_n or simply 𝔼 will denote
the integral with respect to the Haar measure on (n)^r.
For every w∈_r, there
is a rational function φ_w∈(x) such that
* For every integer n≥ Kmax(|w|,1),
φ_w(1/n)=1/n^K_n[(σ_n(w(u_1,…,u_r)))].
* If w≠ e has length ≤ q,
then g_Kqφ_w is a polynomial of degree ≤ D_q=3Kqlog(1+Kq).
* If w is not a proper
power, then φ_w^(i)(0)=0 for all i<K+K/3.
We collect in the next lemma some of the basic properties of g_L
that we need.
Let L≥1 be an integer. The polynomial
g_L(t)=∏_c=1^L(1-c^2t^2)^⌊L/c⌋
has the following properties, for every 0≤ t≤1/2L^2
and every integer i≥0:
1/2≤ g_L(t)≤1,
|g_L^(i)(t)|≤(3√(i)L^3/2)^i,
|(1/g_L)^(i)(t)|≤2· i!·(C√(i)L^3/2)^i.
Fix 0≤ t≤1/2L^2. The inequality g_L(t)≤1
is clear and only uses |t|≤1/L. For the converse, use
that (1-u)≥ e^-2u for 0<u<1/4 to bound
g_L(t)≥exp(-2∑_c=1^LL/cc^2t^2)=exp(-L^2(L+1)t^2)≥1/2.
We now prove (<ref>). Let N=∑_c=1^L⌊L/c⌋
and write g_L(t)=∏_k=1^N(1-c_k^2t^2). Each of the
factors h_k(t)=(1-c_kt^2) is a degree 2 polynomial, so
when we differentiate i times g_L using the Leibniz rule,
we obtain a big sum of terms, in which some factors are derived twice
(so equal to h”_k(t)=-2c_k^2), some are derived once (so
equal to h'(t)=-2c_k^2t) and all the other are not derived.
Gathering the terms according to how many factors are derived twice,
we can write
g_L^(i)/g_L=∑_s=0^⌊i/2⌋i2s(2s-1)!!∑_k_1,…,k_s,ℓ_1,…,ℓ_i-2s all distinct ∏_α=1^sh_k_α”/h_k_α·∏_β=1^i-2sh'_ℓ_β/h_ℓ_β.
The term i2s(2s-1)!! appears as the number of ways to
partition a set of size i (the steps of derivation) into s sets
of size 2 (the steps when a factor h_k is derived twice) and
i-2s sets of size 1 (the steps when a factor h_k is derived
once). Forgetting the condition that the k_α and
ℓ_β are distinct, and bounding (1-c_k^2t^2)≤1
and (2s-1)!!≤ i^i/2, we can bound the preceding, in
absolute value, as follows:
|g_L^(i)(t)|≤ i^i/2∑_s=0^⌊i/2⌋i2st^i-2s(∑_k=1^N2c_k^2)^i-s≤ i^i/2∑_j=0^iijt^i-j(∑_k=1^N2c_k^2)^i-j/2.
Using
∑_k=1^N2c_k^2=∑_c=1^L2⌊L/c⌋ c^2≤ L^2(L+1)≤2L^3,
we obtain
|g_L^(i)(t)|≤ i^i/2∑_j=0^iij(1/2L^2)^i-j(2L^3)^i-j/2=i^i/2(L+√(2)L^3/2)^i≤(3√(i)L^3/2)^i.
This proves (<ref>).
The last inequality (<ref>) is
a formal consequence of the first two. Indeed, by induction we see
that, for any function g we can write the ith
derivative of 1/g as a sum of (2i-1)!! terms, each
of which is of the form
± g^(α_1)·…· g^(α_i)/g^i+1
where (α_1,…,α_i) are nonnegative integers that
sum to i. For g=g_L, by (<ref>) and (<ref>),
each of these terms is bounded above by (3√(i)L^3/2)^i2^i+1.
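As a quick numerical sanity check of the first bound of the lemma (and of the normalisation of g_L), the following NumPy sketch evaluates g_L on [0,1/2L^2]; it is an illustration only, not part of the argument.

import numpy as np

def g(L, t):
    c = np.arange(1, L + 1)
    return float(np.prod((1.0 - (c * t) ** 2) ** (L // c)))

for L in (5, 20, 80):
    ts = np.linspace(0.0, 1.0 / (2 * L * L), 200)
    vals = [g(L, t) for t in ts]
    print(L, min(vals), max(vals))    # the values stay within [1/2, 1], as the lemma asserts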
We extend the map w↦φ_w by linearity, setting φ_x=∑_w∈_rx(w)φ_w
for x∈[_r].
If x∈_≤ q[_r],
then for any i,
sup_t∈[0,1/2D_q^2]|φ_x^(i)(t)|/i!≤ p(i,q)x_C^*(_r),
where
p(i,q)=
(2K+2)(CD_q^4/i^2)^i if i≤ D_q,
(2K+2)(CD_q^2)^D_q(C√(i)D_q^3/2)^i-D_q if i>D_q.
Let P=g_Kqφ_x. We know that P is a polynomial of degree
≤ D_q, so to bound its derivatives using the Markov brothers
inequality, we need to derive an a priori bound on the values of P.
For n≥ Kq, we have g_Kq(1/n)∈[0,1] and therefore
|P(1/n)|≤|φ_x(1/n)|=n^-K|[(σ_n(x(u_1,…,u_r)))]|.
For every (u_1,…,u_r)∈(n)^r, the map w↦σ_n(w(u_1,…,u_r))
is a unitary representation of _r (hence of C^*(_r)),
so σ_n(x(u_1,…,u_r))≤x_C^*(_r)
almost surely. Bounding the trace of a matrix by its norm times its
size, we obtain |[(σ_n(x(u_1,…,u_r)))]|≤dim(σ_n)x_C^*(_r),
and we conclude by the dimension bound (<ref>)
that
|P(1/n)|≤(K+1)x_C^*(_r).
We deduce by Lemma <ref> that for
any integer j,
sup_t∈[0,1/2D_q^2]|P^(j)(t)|/j!≤(2K+2)(CD_q^4/j^2)^jx_C^*(_r).
By the Leibniz rule, remembering that P is a polynomial of degre
≤ D_q,
φ_x^(i)=∑_j=0^min(i,D_q)ijP^(j)(1/g_Kq)^(i-j).
We see from Lemma <ref> and (<ref>)
that the leading term is for j=min(i,D_q). The lemma follows
directly if i≤ D_q. If i>D_q, we obtain the lemma by
bounding Kq≤ D_q and i-D_q≤ i, so that
sup_t∈[0,1/2D_q^2]1/(i-D_q)!|(1/g_Kq)^(i-D_q)(t)|≤(CD_q^2)^i(i/D_q)^i-D_q/2.
We deduce two facts from this lemma. In the first, the temperedness of v_i is where most of the results of this paper are used. It is worth noting here that, in the particular case of K=1, a much stronger result is known: all the v_i's are tempered, as proven by Parraud <cit.>. We will see below that they are even completely tempered, see Proposition <ref>.
For every integer i≥0, let v_i: w ∈_r ↦φ_w^(K+i)(0)/(K+i)!. There is a polynomial P of degree 4K+4i+1 such that |v_i(x)| ≤ P(q) x_C^*(_r) for every q and every x ∈_≤ q[_r].
Moreover, v_i is tempered if i<K/3.
The existence of P is a consequence of Lemma <ref> and the bound D_q≤ CKqlog(1+K)log(1+q), which implies sup_q≥1p(i,q)/(1+q)^4i+1<∞.
Let us explain why v_i is tempered if i<K/3. The proof combines several ingredients: the first is Haagerup's inequality, which tells us that we can apply the criterion for temperedness given in Proposition <ref>. The second is the polynomial growth of v_i: as a particular case of what we have just shown, there is a constant C_i such that for every q and every w ∈_r in the ball of radius q, |v_i(w)| ≤ C_i (1+q)^j, where j=4i+4K+1. The third ingredient is the small support of v_i given by item <ref> in Theorem <ref>. The last is the random walk results from Section <ref>.
Let us put all these ingredients together. Let μ be a reasonable probability measure on _r, and (γ_n)_n be the associated random walk on _r. If μ is supported in the ball of radius q, we know that γ_n belongs to the ball of radius qn, so that
𝔼|v_i(γ_n)| ≤ C_i (1+qn)^j ℙ( v_i(γ_n) ≠ 0) ≤ C_i C(μ) (1+qn)^j n^5 ρ(μ)^n.
We deduce
lim sup_n (𝔼 |v_i(γ_n)|)^1/n≤ρ(μ),
so v_i is tempered by Proposition <ref>.
The second consequence is an analogue in our context to the master
inequality in <cit.>. In the particular case
K=1, a similar result was obtained in <cit.>.
Let w(q)=exp(q/log(2+q)^2).
For every integers r,q,n≥ 1 and every x∈_≤ q[Γ]
|(σ_n(x(u_1,…,u_r))-τ(x)Id)-∑_i=0^r-1v_i(x)/n^i|
≤(C(K+r)^2 log(1+K+r)^12 K^4 log (1+K)^4)^K+r/n^rw(q)x_C^*(_r).
If n≥ Kq, by (<ref>) in Theorem <ref>,
we know that the left-hand side of (<ref>)
is equal to
n^K|φ_x(1/n)-∑_i=0^K+r-1φ_x^(i)(0)/i!n^i|.
By Taylor's inequality and Lemma <ref>,
if we moreover assume that n≥2D_q^2, this is less than
1/n^rp(K+r,q)x_C^*(_r).
If n≤2D_q^2, we claim that this bound is still valid. Indeed, we can bound the left-hand side of (<ref>) by
2dim(σ_n)x_C^*(_r)+∑_i=0^r-1|v_i(x)|/n^i.
By Lemma <ref> and (<ref>), this is less than
n^K(p(0,q) + ∑_i=0^r-11/n^K+ip(K+i,q)) x_C^*(_r),
which is indeed bounded
above by (<ref>), at least if the
constant C appearing there is large enough: for example, C≥ 2 e^2 guarantees that 1/n^j+1p(j+1,q) ≥ 2 · 1/n^jp(j,q) for every j.
The statement of the lemma follows because
sup_qp(i,q)exp(-q/log(2+q)^2)≤(CK^4i^2log(1+K)^4log(1+i)^12)^i.
We can now deduce the main result of this section. Now we make K
vary, so we write σ_K,n the representation that was denoted
σ_n so far, and v_K,i the function denoted v_i so far.
Let A<1/24. Then
for every x∈[_r],
lim_nsup_K≤ n^A|‖σ_n,K(x(u_1,…,u_r))‖-‖x‖_C_λ^*(_r)|=0.
Let 1 ≤ K_n ≤ n^A for every n, and let π_n be the random representation of _r: π_n(w) = σ_K_n,n(w(u_1,…,u_r)). Apply Lemma <ref> for the smallest integer r ≥ K_n/3: for every q and every x ∈_≤ q[Γ],
|𝔼[Tr(π_n(x))] - u_n(x)| ≤ε_n w(q)‖x‖_C^*(_r).
where
ε_n = (C K_n^6 (log K_n)^12+8ε)^K_n+r/n^r≤ (C n^24A-1 (log n)^12+8ε)^1/3→ 0
and
u_n(x) = dim(σ_K_n,n)τ(x) + ∑_i=0^r-1v_K_n,i(x)/n^i.
By Lemma <ref>, the function u_n is tempered as a finite sum of tempered function, and it satisfies
|u_n(x)| ≤ C(n) (1+q)^4r+4K_n-3.
Moreover, C^*_λ(Γ) has a unique trace (it is even simple <cit.>). So all the hypotheses of Proposition <ref> are satisfied; its conclusion proves the lemma.
§.§ From convergence in probability to almost sure convergence
Lemma <ref> is not exactly our main
result, because it only gives convergence in expectation.
However, by the concentration of measure phenomenon in the groups
(n) (see <cit.>), we can improve our
results to almost sure convergence.
For every x∈_≤ q[_r],
there is a constant C(x)=∑_w|x(w)||w| such that
ℙ(‖σ_n,K(x(U_1^(n),…,U_r^(n)))‖≥𝔼‖σ_n,K(x(U_1^(n),…,U_r^(n)))‖+ε)
≤exp(-(n-2)ε^2/(24C(x)^2K^2)).
Equip (n)^r with the L_2-sum of Hilbert-Schmidt distances.
The function (U_1^(n),…,U_r^(n))↦‖σ_n,K(x(U_1^(n),…,U_r^(n)))‖
is KC(x)-Lipschitz, so the bound is <cit.>
(for the original see <cit.>).
Theorem <ref> now follows by combining Proposition <ref>
and Lemma <ref>.
§.§ Variants and operator coefficients
If we only consider the representation π_k,ℓ from the introduction with ℓ=0, we have the stronger conclusion <ref> in Theorem <ref> instead of <ref>. Taking this improvement into account, with the notation of Theorem <ref>, the conclusion becomes that if A < 1/12,
sup_k≤ n^A|‖π_k,0(p(U_1^(n),…,U_r^(n)))‖ -‖p(x_1,…,x_r)‖|=o(1).
This improvement illustrates that the more of the higher derivatives v_K,i from Lemma <ref> are shown to be tempered, the stronger the conclusion. For example, if we knew that v_K,i is tempered for every i and every K (which Parraud has proved in the case K=1 <cit.>), or at least for every i ≤ K/o(1), then the condition on A in Theorem <ref> would be A<1/6.
Let A<1/6 and for every n, let K_n ≤ n^A. Assume that v_K_n,i is tempered for every n and every i. Let k_n = exp(n^1/2 - 2A(log n)^-4).
For every q and every sequence y_n ∈ M_k_n⊗_≤ q(_r), almost surely
lim_n ‖(id⊗σ_K_n,n)(y_n)‖/‖(id⊗λ)(y_n)‖=1.
If the assumption was that v_K_n,i is completely tempered, the proof of the theorem would be a straightforward adaptation of the proof of Theorem <ref>. So our main task is to prove the following:
For every K,I, if v_K,0,…,v_K,I are all tempered, then they are all completely tempered.
For the proof, we need the following result. Here if 𝒜 is an algebra, 𝒜^n denotes its subalgebra equal to the linear span of {x_1 … x_n | x_i ∈𝒜}.
Let 𝒜 be a *-algebra and u 𝒜→ a positive linear map: u(x^*x)≥ 0 for every x ∈𝒜. If there is i such that u(x^i)=0 for every self-adjoint x, then u= 0 on 𝒜^3.
The form ⟨ a,b⟩ = u(a^*b) is a scalar product. By the Cauchy-Schwarz inequality, if k ≥ 2, if a^* = x^*xx^* … (k+1) terms and b= … x (k-1) terms,
|u((x^*x)^k)|^2 = |⟨ a,b⟩|≤ u((x^*x)^k+1) u((x^*x)^k-1),
so by induction we have u((x^*x)^2)=0, and by the Cauchy-Schwarz inequality again, u(x^* x z)=0 for every z ∈𝒜. By polarization we deduce u(xyz)=0 for every x,y,z.
We use the notation and results from Section <ref>. In particular, Lemma <ref> tells us that v_K,i extends by continuity to a linear map 𝒮_4K+4i+1(_r) →. Let 𝒜_i = 𝒮_4K+4i+1(_r) ∩λ. We prove by induction that v_K,i vanishes on (𝒜_i)^3^i+1 for every i≤ I. This readily implies that ⊗ v_K,i vanishes on M_k ⊗ (𝒜_i)^3^i+1 for all k. In particular, v_K,i is completely tempered by Proposition <ref>.
So let us assume that v_K,j vanishes on (𝒜_j)^3^j+1 for every j<i (we assume nothing if i=0), and let us show that it is true for j=i.
Using the induction hypothesis, it follows from Lemma <ref> that on the *-algebra (𝒜_i)^3^i (which is contained in τ and (𝒜_j)^3^j+1 for every j<i),
v_K,i(x) = lim_n n^iσ_K,n(x).
In particular, v_K,i is positive on (𝒜_i)^3^i as a limit of positive maps. Moreover, the fact that v_K,i is tempered implies that v_K,i(x^n)=0 for all n≥ 4K+4i+3, see Proposition <ref>. By Lemma <ref>, we obtain that v_K,i vanishes on (𝒜_i)^3^i+1. This concludes the proof of the proposition.
Thanks to Proposition <ref>, the temperedness assumption is automatically upgraded to complete temperedness. We apply Lemma <ref> with r = n^1/2 - 2A (log n)^-7/2, which is chosen so that
k_n (C(K+r)^2 log(1+K+r)^12 K^4 log (1+K)^4)^K+r/n^r→ 0.
So by Proposition <ref> we deduce the convergence in probability. The almost sure convergence is obtained by the concentration of measure phenomenon from Proposition <ref>.
For K_n=1 (that is, A=0), Parraud proved that v_1,i are tempered for every i. As a consequence, we obtain the unconditional result from Theorem <ref>.
§ PROOF OF COROLLARY <REF>
Corollary <ref> follows easily from the proof of Theorem
<ref> and the following lemma:
For all partitions λ⊢ k,μ⊢ℓ,
every w∈_r and every n>(k+ℓ)|w|, we have
𝔼_n[s_λ,μ(w)]=∫_SU(n)^rs_λ,μ(w(v_1,v_2,…,v_r))dv_1⋯ dv_r.
Let z_1,…,z_r,z'_1,…,z'_r,v_1,…,v_r be
independent random variables where z_i are uniform in the center
of U(n) (the complex numbers of modulus one), z'_i are uniform
in the center of SU(n) (the n-th roots of unity) and v_i∈SU(n)
are Haar distributed, so that (z_1v_1,…,z_rv_r) are
independent Haar-distributed variables in U(n), and (z'_1v_1,…,z'_rv_r)
are independent Haar-distributed variables in SU(n).
By Schur's lemma, for every z in the center of U(n), π_λ,μ(z)
is a multiple of the identity. On the other hand, we know that (λ_1,…,λ_p,…,-μ_q,…,-μ_1)
is the maximal weight of π_λ,μ, so the corresponding
character of the maximal torus appears in the restriction of π_λ,μ.
This means that the scalar π_λ,μ(z) is z^λ_1+…+λ_p-μ_q-…-μ_1=z^k-ℓ
for every z in the center of U(n). Putting these two facts
together, we see that
𝔼_n[s_λ,μ(w)] =𝔼[s_λ,μ(w(z_1v_1,…,z_rv_r))]
decomposes by independence as the product
𝔼_n[s_λ,μ(w)]=𝔼[s_λ,μ(w(v_1,…,v_r))]·𝔼[w(z_1,…,z_r)^k-ℓ],
and similarly
𝔼[s_λ,μ(w(z'_1v_1,…,z'_rv_r))]=𝔼[s_λ,μ(w(v_1,…,v_r))]·𝔼[w(z'_1,…,z'_r)^k-ℓ].
But 𝔼[w(z_1,…,z_r)^k-ℓ] is equal to 1 if w^k-ℓ
vanishes in the abelianization ℤ^r of _r, and is equal
to 0 otherwise. Similarly, 𝔼[w(z'_1,…,z'_r)^k-ℓ]
is equal to 1 if w^k-ℓ vanishes in (ℤ/n)^r and 0
otherwise. These two conditions coincide if n>|k-ℓ| · |w|, and
in particular if n>(k+ℓ)|w|.
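As a quick illustration (not part of the paper), the two averaging facts used here can be checked numerically; the code below is a toy example with arbitrary choices of n and m.

```python
# Quick numerical illustration (not from the paper) of the averaging facts above:
# over the center of U(n), E[z^m] is 1 for m = 0 and 0 otherwise; over the center of
# SU(n) (the n-th roots of unity), E[z^m] is 1 when n divides m and 0 otherwise.
import numpy as np

rng = np.random.default_rng(0)
n = 7
for m in [0, 3, 7, 10, 14]:
    theta = rng.uniform(0.0, 2.0 * np.pi, size=200_000)  # center of U(n): unit circle
    mean_u = np.mean(np.exp(1j * m * theta))              # Monte Carlo estimate
    roots = np.exp(2j * np.pi * np.arange(n) / n)         # center of SU(n)
    mean_su = np.mean(roots**m)                           # exact average
    print(m, round(abs(mean_u), 3), round(abs(mean_su), 3))
```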
Using Lemma <ref>, we see that Lemma <ref>
also holds if 𝔼_n denotes the integration over SU(n)^r:
the validity of (<ref>) when x∈_≤ q[_r]
with n≥2D_q^2 is by Lemma <ref>; the
extension to arbitrary x follows with the same proof. Therefore,
Lemma <ref> also holds for SU(n).
Moreover, the concentration of measure from Proposition <ref>
also holds for SU(n) (same reference as there), so Theorem <ref>
also holds for independent variables in SU(n). Corollary <ref>
follows from Corollary <ref>.
§ TRANSVERSE MAPS
We now fix the free group _r that we work with and its generators
{x_1,…,x_r}. Here we introduce a framework, following
<cit.>, that we use to prove Theorem <ref>.
In the sequel, a marked rose refers to a finite CW-complex
structure R_r whose underlying topological space is the wedge
of r circles, with the base-point being the wedge point, denoted
o, and with a marking π_1(R_r,o)≅_r that identifies
the generators {x_1,…,x_r} with oriented circles of
the rose.
The following definition will not be used in the paper but gives important
background motivation for the definitions to follow.
A transverse map is a manifold M, a based marked rose R_r
and a continuous function
f:M→ R_r
transverse to all the vertices of R_r.
In the rest of the paper we work with classes of transverse
maps. Instead of getting into what is an isotopy of transverse maps,
etc, we make the following equivalent combinatorial definition.
An (isotopy) class of (filling) transverse
map from a surface with boundary to a marked rose R_r is the
finitary combinatorial data:
* a marked base-point and orientation for each boundary component
* a ribbon graph structure on the surface, where vertices are discs
and edges correspond to interfaces between discs on their boundaries;
such an interface is called an arc
* information about which arc is in the preimage of each vertex of R_r
other than o, and which sides of the vertex correspond to which
sides of the arc
* up to isomorphism of the above data under decoration-respecting homeomorphisms.
We refer to these simply as classes of transverse maps.
A class of transverse map f as in Definition <ref>
is strict if no two vertices p_1,p_2 of R_r
that are consecutive on some circle (and distinct from o)
have parallel preimages, in the following sense. Suppose the orientation
of their circle in R_r points from p_1 to p_2. The
preimages of p_1 and p_2 are parallel if for every
arc α in the preimage of p_1, there is a parallel[Cobounding a rectangle.]
arc in the preimage of p_2 on the side of α `towards'
p_2.
Given a class [f] of transverse map on a surface with boundary, there
is an obvious way to get a boundary class of transverse map on a union
of circles, denoted ∂[f].
Given κ:[r]→ℕ∪{0}, let R_r^κ be the (isomorphism
class of) rose with κ(i)+1 vertices in the interior of the
circle corresponding to x_i. Let |κ|=∑_i∈[r]κ_i.
Given every non-identity w∈ F_r, let 𝔴:S^1→ R_r
denote a fixed immersion[Away from the base point of S^1, where there could be backtracking
if w is not cyclically reduced.] from a based oriented circle to the based rose such that if γ
is the generator of π_1(S^1,basepoint) corresponding
to the chosen orientation,
𝔴_*(γ)=w∈π_1(R_r,o).
§ PROOF OF THEOREM <REF> PART <REF>
Suppose |w|≤ q, λ⊢ k,μ⊢ℓ as before.
By a result of Koike <cit.> it is possible to write
for g∈ U(n)
s_λ,μ(w)=∑_|λ'|≤ k,|μ'|≤ℓα_λ',μ'^λ,μs_λ'(w)s_μ'(w^-1)
where the coefficients α_λ',μ'^λ,μ are
given in (ibid.) as sums of products of Littlewood–Richardson
coefficients and importantly, do not depend on n. Then by base
change between Schur polynomials as above and power sum symmetric
polynomials, we obtain
s_λ,μ(w)=∑_|λ'|≤ k,|μ'|≤ℓβ_λ',μ'^λ,μp_λ'(w)p_μ'(w^-1)
where again the coefficients do not depend on n. This tells us
the poles of 𝔼_n[s_λ,μ(w)] are contained in the union of the
possible poles of
𝔼_n[p_λ'(w)p_μ'(w^-1)]
with k'=|λ'|≤ k,ℓ'=|μ'|≤ℓ.
For the purpose of finding the poles, we can here use the Weingarten
calculus in the most naive possible way (later, we need to do something
more sophisticated). To this end, <cit.>
yields
_n[p_λ'(w)p_μ'(w^-1)]=∑_[f:Σ→ R_r^(2)]
∂ f≅𝔴∘φ_λ',μ'(∏_1≤ i≤ rWg_(k'+ℓ')L_i(w)(π_f,i))n^N(f)
where R_r^(2)=R_r^(κ) with κ=(2,2,…,2),
π_f,i∈ S_(k'+ℓ')L_i(w) are permutations determined
from the combinatorial structure of f, and N(f)∈ is similar
— neither depend on n. It is a finite sum.
Each term
Wg_(k'+ℓ')L_i(w)(π_f,i)∈ℚ(n)
is given by the formula for the Weingarten function <cit.>
Wg_L(π)=1/(L!)^2∑_λ⊢ Lχ_λ(1)^2/s_λ(1)χ_λ(π);
here we are viewing s_λ(1) as the formal element of ℚ(n)
s_λ(1)≔∏_□∈λ(n+c(□))/∏_□∈λh_λ(□),
c(□) is the content of the box,
c(□)=j(□)-i(□)
and h_λ(□) is the hook-length of the box. The only
terms in the formula for the Weingarten function that depend on n
are therefore the denominators ∏_□∈λ(n+c(□))
If c≥0 then the factor (n+c(□)) appears at most d(c,L)
times in the denominator of Wg_L(π) where d(c,L)
is the largest natural number d such that
d(d+c)≤ L.
Clearly d(c,L)≤L/c for c>0. If c<0 then a similar
argument says (n+c) appears at most d(-c,L) times.
This means that ∏_□∈λ(n+c(□)) can be
written as n^LQ_λ(1/n) where Q_λ is
a polynomial that divides g_L(1/n) where (as in (<ref>))
g_L(x)≔∏_c=1^L(1-c^2x^2)^⌊L/c⌋.
Therefore, g_L(1/n)/∏_□∈λ(n+c(□))=1/n^Lg_L/Q_λ(1/n)
is a polynomial in 1/n of degree ≤ L + deg(g_L).
By <ref> the same is true for g_L(1/n)Wg_L(π).
Since
∏_i=1^rg_(k'+ℓ')L_i
divides g_(k+ℓ)q, we see from (<ref>)
that for n≥(k+ℓ)q, the rational function agreeing with g_(k+ℓ)q(1/n)𝔼_n[s_λ,μ(w)]
in this range can be written as a sum of a polynomial in 1/n
of degree ≤(k+ℓ)q+deg(g_(k+ℓ)q) and possibly
a polynomial in n (because of the terms n^N(f)). However,
the polynomial in n is necessarily constant because it is known
(see Theorem 1.7 in <cit.>) that g_(k+ℓ)q(1/n)𝔼_n[s_λ,μ(w)]
remains bounded as n→∞.
To conclude, observe that we can bound the degree of g_L by
2∑_c=1^L⌊L/c⌋≤2∑_c=1^LL/c≤2L(1+log L),
so that (k+ℓ)q+deg(g_(k+ℓ)q)≤3L(1+log L).
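For illustration only (not from the paper), the elementary degree estimate used in the last step can be verified numerically:

```python
# Small numerical check (not from the paper) of the degree estimate used above:
# deg g_L = 2 * sum_{c=1}^{L} floor(L/c) <= 2 L (1 + log L).
import math

for L in range(1, 201):
    deg = 2 * sum(L // c for c in range(1, L + 1))
    assert deg <= 2 * L * (1 + math.log(L)), L
print("degree bound verified for L = 1, ..., 200")
```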
§ PROOF OF THEOREM <REF> PART <REF>
AND <REF>
In the paper <cit.> the integrals
∫_U(n)^4(w̃(u_1,u_2,u_3,u_4))s_λ,μ([u_1,u_2][u_3,u_4])du_1du_2du_3du_4
w.r.t Haar measure are calculated in a very specific way that exhibits
cancellations that cannot be easily seen[In fact, we do not know how to.]
using the formula (<ref>). The initial part
of this calculation is very general and does not depend on w̃
or the word w=[x_1,x_2][x_3,x_4]. So the method generalizes
as-is to the case when the (w̃) term is not present and
[x_1,x_2][x_3,x_4] is replaced by any reduced word w.
That is, to integrals of the form
𝔼_n[s_λ,μ(w)]≔∫_U(n)^rs_λ,μ(w(u_1,…,u_r))du_1⋯ du_r
where w is any element of _r, viewed as a reduced word
in the generators.
The analog of <cit.> here is the following
bound. Let R_r^0 be the based CW-complex structure on R_r
with one interior vertex per circle (and a vertex for the wedge point
o). Recall the immersion 𝔴:(S^1,basepoint)→(R_r^0,o)
from <ref>. Let
φ_λ,μ:∐_i=1^ℓ(λ)S^1⊔∐_i=1^ℓ(μ)S^1→ S^1
denote the map whose components are
* multiplication by λ_i on the ith
circle, if i≤ℓ(λ),
* and multiplication by -μ_i (so orientation reversing) on the
(i-ℓ(λ))th circle for i>ℓ(λ).
This map is a convenient way of book-keeping winding numbers.
We have
𝔼_n[s_λ,μ(w)]=O_k,ℓ,w(n^max_w,k,ℓχ(Σ))
where max_w,k,ℓ is the maximum over classes of transverse
maps [f] from a surface with boundary to R_r^0 such that
Σ is an oriented surface with compatibly oriented
boundary
For some λ'⊢ k and μ'⊢ℓ, ∂[f]
factors as
∂[f]=[𝔴∘φ_λ',μ']
where [𝔴∘φ_λ',μ'] is the (unique)
class of transverse map to R_r^0 obviously associated to the
immersion 𝔴∘φ_λ',μ'. Components
of ∂Σ mapped in an orientation respecting (resp. non-orientation
respecting) way by φ_λ',μ' are called positive
(resp. negative).
Forbidden Matchings: In the factorization of ∂ f above,
no such arc has endpoints in a positive and a negative component of
∂Σ that are mapped to the same point under φ_λ',μ'.
This proposition follows formally from <cit.>,
but we give most details in Appendix <ref>.
The Forbidden Matchings property is completely crucial and
the main point of the method introduced in (ibid.). The following
topological result is a new input to that method.
If w is not a proper power in _r
and cyclically reduced, then any class of transverse map [f] on
underlying surface Σ satisfying the conditions of Proposition
<ref> has
χ(Σ)≤ -(k+ℓ)/3.
If w is not a proper power in _r then
𝔼_n[s_λ,μ(w)]=O_k,ℓ,w(n^-(k+ℓ)/3).
N.B. One can assume without loss of generality in Corollary <ref>
that w is cyclically reduced, as conjugating w does not change
𝔼_n[s_λ,μ(w)].
§.§ Proof of Proposition <ref>
Suppose w is cyclically reduced and not a proper power or the identity,
with word length |w|. Suppose that some class of transverse map
[f:Σ→ R_r^0] satisfies P1-P4 of Proposition
<ref>.
Let L(e) denote the number of edges in the chain defining e.
Let V^*,E^* denote the topological vertices and edges respectively.
We have
2E^*=∑_v∈ V^*d(v)≥3V^*
and hence
-χ(Σ)=E^*-V^*≥E^*/3.
We now aim to show E^* is large under the previous assumptions.
Each class of strict transverse map f is a (isomorphism class of)
ribbon graph ℛ whose edges correspond to the arcs of
f. Each edge of the ribbon graph inherits a direction and labeling
by {x_1,…,x_r} from the manner in which f crosses
the (sole) interior vertex of the ith circle
in R_r^0 along this edge.
Say that a vertex v of ℛ is a topological vertex
if it has valence d(v)≥3. A topological edge e is
a maximal chain of edges incident at vertices of valence 2. (As
w is cyclically reduced, there are no vertices of valence 1).
Every topological edge e is bordered by two segments of ∂ℛ
that each spell a subword of w^k_i or w^-ℓ_i for
some i, reading along the fixed orientation of the boundary.
Suppose that for some topological edge e, L(e)>|w|. For a reduced
word v in the generators of _r, write v̅ for
its mirror (reversing order and replacing generators by inverses).
Then both boundary segments spell words
u_1w^au_2, v_1w^bv_2
with a,b∈ℤ∖{0}, |u_1|,|u_2|,|v_1|,|v_2|<|w|.
Case 1. If a,b have same sign. In this case one arrives
at
wu=vw
with |u|,|v|<|w| (see Figure <ref>).
This is impossible since then v is a prefix of w and v̅
is too, meaning v=v̅ since they have the same length, so v
is empty. Similarly u is empty and w=w̅, contradicting
the fact that w is not the identity.
Case 2. If a,b have opposite signs. So without loss of
generality assume a>0 and b=-B with B>0, then we have
u_1w^au_2=v_2w^Bv_1.
Condition P4 implies that |u_1|≠|v_2|.
Again without loss of generality assume |u_1|<|v_2|.
Write v_2=u_1w_1.
If |u_2|>|v_1| then write u_2=w_2
to obtain
w^aw_2=w_1w^B
as reduced words. This immediately implies w_2=w_1, as both
are prefixes of w of the same length. Since w is not a proper
power, e.g. <cit.> implies that w_1 is a
power of w, which must be empty since its length is less than w.
If |u_2|<|v_1| then write w_2u_2=v_1
to arrive at
w^a=w_1w^Bw_2.
This implies w_1 and w_2 are both prefixes and suffixes
of w.
Write w=w_1w_3 and obtain
w_3w^a-1=w^Bw_2.
Since B>0, a>1 here and we argue as we did from (<ref>)
to get a contradiction.
The upshot of this argument is that each topological edge e has
L(e)≤|w|.
Since the total length of the topological edges is (k+ℓ)|w|, this
gives |E^*|≥ k+ℓ. Hence from (<ref>),
-χ(Σ)≥ (k+ℓ)/3.
§.§ Proof of Theorem <ref> Part <ref>
If μ=∅, i.e. s_λ,μ=s_λ is a polynomial
stable character, then instead we can use the following two results.
For w∈_r the commutator length of w, denoted by cl(w)∈∪{∞}
is defined by
cl(w)=inf{ g: w=[u_1,v_1]⋯[u_g,v_g] , u_i,v_i∈_r }
interpreted as ∞ if it is not possible to write w as a
product of commutators, i.e. w∉[_r,_r]. The stable
commutator length of w, denoted scl(w), is defined
as
scl(w)lim_m→∞cl(w^m)/m.
On _r, scl takes values in by a result of Calegari <cit.>.
Duncan and Howie proved in <cit.> that
(w)≥1/2
for all w∈_r. This bound can be combined with the following
result of the first named author and Puder following immediately from
<cit.>.
We have
_n[s_λ(w)]=O(1/n^2k(w)).
In fact, <cit.> says that for every w∈[_r,_r],
there is some λ such that the bound (<ref>)
is saturated, but we do not use this here.
Combining (<ref>) with (<ref>) gives
_n[s_λ(w)]=O(n^-k)
as required.
§ APPENDIX: EXTENSION OF THE METHOD OF RANDOM UNITARY REPRESENTATIONS
OF SURFACE GROUPS II
In this section we explain the extension of the methodology of <cit.>
to bound the integrals 𝔼_n[s_λ,μ(w)] defined in (<ref>).
We import some results from <cit.>.
§.§ Background
In this section, fix λ⊢ k and μ⊢ℓ, and
assume n≥ k+ℓ. Let D_λ,μ(n)=s_λ,μ(1).
Write χ_λ for the character of S_k associated
to the irreducible representation W^λ that corresponds
to λ. Let d_λ≔χ_λ(𝕀)=dim W^λ.
Given λ⊢ k, the element
_λd_λ/k!∑_σ∈ S_kχ_λ(σ)σ∈[S_k]
is the central projection in [S_k] to the W^λ-isotypic
component. We view S_k× S_ℓ as a subgroup of S_k+ℓ
in the standard way.
Let
_λ⊗μ_λ'_μ
where '_μ is the image of _μ under the inclusion
S_ℓ≤ S_k× S_ℓ≤ S_k+ℓ. We define Young
subgroups, for λ⊢ k
S_λ S_λ_1× S_λ_2×⋯× S_λ_ℓ(λ)≤ S_k.
Let
(^n)^⊗ k⊗((^n)^∨)^⊗ℓ.
For I=(i_1,…,i_k+ℓ) let
I'(I;π) i_π(1),…,i_π(k),
J'(I;π) i_π(k+1),…,i_π(k+ℓ).
For π∈ S_k+ℓ let
Φ(π)∑_I=(i_1,…,i_k),J=(j_k+1,…,j_k+ℓ)e_I'(I⊔ J;π)^J⊗ě_I^J'(I⊔ J;π)∈()
and extend the map Φ linearly to [S_k+ℓ].
Recall the definition of the Weingarten function from (<ref>).
Let
z ∑_τ∈ S_k+ℓz(τ)τ
[S_k:S_λ][S_ℓ:S_μ]/d_λd_μ_λ⊗μ(∑_σ∈ S_λ× S_μσ)_λ⊗μ_n,k+ℓ∈[S_k+ℓ].
One has the following bound on the coefficients of z <cit.>
z(τ)=O_k,ℓ(n^-k-ℓ-‖τ‖_k,ℓ).
Let
D_λ,μ(n)Φ(z).
* The operator is an orthogonal projection with U(n)-invariant
image that is isomorphic to V^λ,μ as a U(n)-representation.
* For any i_1,…,i_k+ℓ,j_1,…,j_k+ℓ and any
p∈[k+ℓ],
∑_u_(i_1⋯ i_p-1ui_p+1⋯ i_k+ℓ),(j_1⋯ j_p-1uj_p+1⋯ j_k+ℓ)=0.
Part 1 is the combination of <cit.>.
Part 2 arises from the fact that is zero on the orthocomplement
to the contraction free subspace from <cit.>.
§.§ Combinatorial integration
We write w in reduced form:
w=f_1^ϵ_1f_2^ϵ_2… f_q^ϵ_q, ϵ_u∈{±1}, f_u∈{x_i},
where if f_u=f_u+1, then ϵ_u=ϵ_u+1. For
f∈{a,b,c,d} let p_f denote the number of occurrences of
f^+1 in (<ref>).
The expression (<ref>) implies that for u≔(u_i:i∈[r])∈ U(n)^r,
s_λ,μ(w(u)) =_( f_1^ϵ_1 f_2^ϵ_2… f_|w|^ϵ_|w|)
=∑_I_j∈[n]^k,J_j∈[n]^ℓ∏_u=1^q_K_i⊔ L_i,_I_i+1⊔ J_i+1
(u_f_1^ϵ_1)_I_1⊔ J_1,K_1⊔ L_1(u_f_2^ϵ_2)_I_2⊔ J_2,K_2⊔ L_2⋯(u_f_q^ϵ_q)_I_q⊔ J_q,K_q⊔ L_q
=∑_π_1,…,π_1∈ S_k+ℓ∏_i=1^qz(π_i)∑_I_j∈[n]^k,J_j∈[n]^ℓ_K_i⊔ L_i,_I_i+1⊔ J_i+1
(u_f_1^ϵ_1)_I_1⊔ J_1,K_1⊔ L_1(u_f_2^ϵ_2)_I_2⊔ J_2,K_2⊔ L_2⋯(u_f_q^ϵ_q)_I_q⊔ J_q,K_q⊔ L_q
1{ K_i⊔ J_i+1=(I_i+1⊔ L_i)∘π_i : i∈[q] } .
In the last line above the indices run mod q. Each product of matrices
here can be integrated using the Weingarten calculus.
We view all I_u etc. as functions from indices (the domain)
to [n]. We view the domains for distinct u as being as disjoint as possible.
There are, however, fixed matchings between the domains of each
I_u⊔ J_u and K_u⊔ L_u.
These will come into play later.
We now define sub-collections of all the indices that are treated
as `the same type' by the Weingarten calculus.
For each i∈[r], let:
* _i be all indices of I_u such that f_u=x_i and
ϵ_u=+1 and indices of L_u such that f_u=x_i
and ϵ_u=-1,
* _i^* be all indices of J_u such that f_u=x_i
and ϵ_u=+1 and indices of K_u such that f_u=x_i
and ϵ_u=-1,
* _i be all indices of K_u such that f_u=x_i and
ϵ_u=+1 and indices of J_u such that f_u=x_i
and ϵ_u=-1,
* _i^* be all indices of L_u such that f_u=x_i
and ϵ_u=+1 and indices of I_u such that f_u=x_i
and ϵ_u=-1.
Let denote the set of data consisting of
* for each i∈[r], σ_i a bijection from _i to
_i^*,
* for each i∈[r], τ_i a bijection from _i to _i^*.
Doing the integral of terms in either (<ref>) or (<ref>)
replaces
(u_f_1^ϵ_1)_I_1⊔ J_1,K_1⊔ L_1(u_f_2^ϵ_2)_I_2⊔ J_2,K_2⊔ L_2⋯(u_f_q^ϵ_q)_I_q⊔ J_q,K_q⊔ L_q
by
∑_∈∏_i∈[r]_k+ℓ(σ_iτ_i^-1)1{ `respected' by }.
Before going on, we make a key argument. Suppose that some index of
I_2 is matched to an index of J_2 by , for example
the two first indices. Then, in the result of integrating (<ref>),
the terms corresponding to this matching all contain factors
∑_a=I_2(1)=J_2(1)_K_1⊔ L_1,_I_2⊔ J_2
where all indices but the first indices of I_2 and J_2 are
frozen (conditioned upon). This is zero by Theorem <ref>
part 2.
This means, going back to (<ref>), if we define ^*
to be the subset of such that
* σ_i never matches indices of I_u to those of J_u
for any u,
* σ_i never matches indices of L_u to those of K_u
for any u,
* τ_i never matches indices of K_u to those of L_u
for any u,
* τ_i never matches indices of J_u to those of I_u
for any u,
then we obtain
_n[s_λ,μ(w(u))]=∑_π_1,…,π_q∈ S_k+ℓ∑_∈^*∏_i=1^qz(π_i)∏_i∈[r]_k+ℓ(σ_iτ_i^-1)(π_1,…,π_q,)
where (π_1,…,π_q,) is the number of choices of
I,J,K,L such that
K_u⊔ J_u+1 =(I_u+1⊔ L_u)∘π_u : u∈[q],
indices matched by have the same value.
§.§ Surface construction
Now construct a surface as follows.
Begin with a vertex for every index. Add an edge (called -edge)
between all matched indices (by ). Add an edge (called w-edge)
between indices paired by the fixed identifications
I_u⊔ J_u≅[k+ℓ]≅ K_u⊔ L_u
and direct this edge from the indices on the left hand side above
to those on the right hand side.
Add also an edge (called π-edge) between indices matched by (<ref>).
We now have a trivalent graph.
We now glue two types of discs to this graph following <cit.>.
The boundaries of the discs are glued along two types of cycles in
the graph:
Type-I Cycles that alternate between π-edges and edges.
Such cycles are disjoint.
Type-II Cycles that alternate between w-edges and -edges.
Again, such cycles are disjoint.
The resulting glued discs therefore meet only along the -edges
and the resulting total object is a topological surface we call
Σ(,{π_i}).
The boundary cycles of this surface alternate between w-edges and
π-edges.
For σ∈ S_k+ℓ, let ‖σ‖_k,ℓ denote the
minimum m for which
σ=σ_0t_1t_2⋯ t_m
where σ_0∈ S_k× S_ℓ and t_1,…,t_m
are transpositions in S_k+ℓ. The analog of <cit.>
is that, after an elementary calculation using (<ref>)
and bounds for the Weingarten function one obtains
∏_i=1^qz(π_i)∏_i∈[r]_k+ℓ(σ_iτ_i^-1)(π_1,…,π_q,)≪_w,k,ℓn^-∑_iπ_i_k,ℓn^χ(Σ(,{π_i})),
hence from (<ref>)
_n[s_λ,μ(w(u))]≪_w,k,ℓ∑_π_1,…,π_q∈ S_k+ℓ∑_∈^*n^-∑_iπ_i_k,ℓn^χ(Σ(,{π_i})).
§.§ Surfaces with large contribution
It is shown in <cit.> — the same proof
applies without change to the current setting — that given any
π_1,…,π_q∈ S_k+ℓ, ∈^*,
it is possible to modify these so that
π'_1,…,π'_q∈ S_k× S_ℓ, ∑_iπ'_i_k,ℓ=0
'={σ'_i,τ_i'}∈^* with for all
and there exists an inequality between corresponding terms
n^χ(Σ(',{π'_i}))≥ n^-∑_iπ_i_k,ℓn^χ(Σ(,{π_i})).
The condition (<ref>) means all type-II cycles
are now rectangles with two (non-consecutive) edges in the boundary;
we now replace each rectangle with an arc connecting the two boundary
segments of the rectangle.
§.§ Connection to transverse maps
We now create a class of transverse map on the surface Σ(',{π'_i})
as follows (cf. Definition <ref>).
The ribbon graph structure is the one dictated by the arcs we just
prior constructed. By construction, they cut the surface into discs.
For each boundary component, place a marked point in some (it does
not matter) π-edge that arose from π'_q. Recall the CW-complex
rose R_r^0 with one point (call it z_i) in the interior
of the circle corresponding to x_i. If an arc arose from a rectangle
with edges that arose from σ'_i=τ'_i then we declare
our function to take the constant value z_i on this arc and the
transverse map will traverse the arc from the σ'_i side
to the τ'_i side.
The property (<ref>) implies that for every boundary
component of the surface, the w-edges are all directed the same
way along this boundary component and hence give an orientation to
the boundary. With respect to this orientation, the isotopy class
of SF transverse map satisfies (<ref>).
The fact that '∈^*, rather than , implies that the
class of transverse map has the crucial Forbidden Matchings
property.
Michael Magee,
Department of Mathematical Sciences, Durham University, Lower Mountjoy,
DH1 3LE Durham, UK
IAS Princeton, School of Mathematics, 1 Einstein Drive, Princeton
08540, USA
Mikael de la Salle,
Institut Camille Jordan, CNRS, Université Lyon 1, France
IAS Princeton, School of Mathematics, 1 Einstein Drive, Princeton
08540, USA
|
http://arxiv.org/abs/2409.02531v1 | 20240904084522 | Modular pipeline for small bodies gravity field modeling: an efficient representation of variable density spherical harmonics coefficients | [
"Antonio Rizza",
"Carmine Buonagura",
"Paolo Panicucci",
"Francesco Topputo"
] | cs.RO | [
"cs.RO",
"astro-ph.IM"
] |
75th International Astronautical Congress (IAC), Milan, Italy, 14-18 October 2024
Antonio Rizza, PhD Candidate, Department of Aerospace Science and Technology (DAER), Politecnico di Milano, [email protected]
Carmine Buonagura, PhD Candidate, Department of Aerospace Science and Technology (DAER), Politecnico di Milano, [email protected]
Paolo Panicucci, Assistant Professor, Department of Aerospace Science and Technology (DAER), Politecnico di Milano, [email protected]
Francesco Topputo, Full Professor, Department of Aerospace Science and Technology (DAER), Politecnico di Milano, [email protected]
Proximity operations to small bodies, such as asteroids and comets, demand high levels of autonomy to achieve cost-effective, safe, and reliable Guidance, Navigation and Control (GNC) solutions. Enabling autonomous GNC capabilities in the vicinity of these targets is thus vital for future space applications. However, the highly non-linear and uncertain environment characterizing their vicinity poses unique challenges that must be addressed to guarantee robustness against unknown shapes and gravity fields. In this paper, a pipeline designed to generate variable density gravity field models is proposed, allowing the generation of a coherent set of scenarios that can be used for design, validation, and testing of GNC algorithms. The proposed approach consists of processing a polyhedral shape model of the body with a given density distribution to compute the coefficients of the spherical harmonics expansion associated with the gravity field. To validate the approach, several comparisons are conducted against analytical solutions, literature results, and higher-fidelity models, across a diverse set of targets with varying morphological and physical properties. Simulation results demonstrate the effectiveness of the methodology, showing good performance in terms of modeling accuracy and computational efficiency. This research presents a faster and more robust framework for generating environmental models to be used in simulation and hardware-in-the-loop testing of onboard GNC algorithms.
Modular pipeline for small bodies gravity field modeling: an efficient representation of variable density spherical harmonics coefficients
September 9, 2024
=============================================================================================================================================================================================================================================================================
§ INTRODUCTION
In the last twenty years, the space sector has experienced unprecedented growth, marked by significant advancements and achievements concerning both Earth-orbiting and deep-space missions. The recent growing interest in small solar system bodies such as asteroids and comets for scientific inspection, exploitation of resources, and planetary defense reasons is pushing the development of innovative engineering solutions to better investigate these celestial bodies. Ground-based observations allow preliminary characterizations of small bodies in terms of bulk properties, such as mass and shape, orbit and rotational state. The major limitation of this methodology is the signal-to-noise ratio <cit.>, which is acceptable only when the target is relatively close to the Earth <cit.>. A drastic improvement in the body characterization can be obtained with in-situ observations with the use of specialized and instrumented probes. Several missions successfully performed proximity operations to these bodies, such as the Near Earth Asteroid Rendezvous (NEAR) Shoemaker <cit.>, Dawn <cit.>, the Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) <cit.>, Hayabusa <cit.>, and the Double Asteroid Redirection Test (DART) <cit.>. Nowadays, space exploration is witnessing a transition towards the use of CubeSats, miniaturized platforms standardized in size and form factor, for the systematic exploration of the Solar System <cit.>. The idea is to use these platforms to perform riskier tasks and to operate in a cooperative multi-agent framework while coping with limited resources <cit.>.
The dynamics characterizing small-body proximity operations is highly chaotic, with solar radiation pressure and non-spherical gravity effects compromising the very existence of stable closed orbits. Accurate modeling of this environment becomes crucial for a safe trajectory design and to guarantee the satisfaction of mission objectives. Moreover, on-board operations require fast computation to cope with limited available resources. A good compromise between accuracy and efficiency in modeling the gravity field is typically found in a high-order expansion of the gravitational potential in the form of spherical harmonics. This model is typically valid only outside the Brillouin sphere of the target <cit.>, because of the convergence properties of the Legendre polynomials, used as the functional basis for the expansion <cit.>. An alternative formulation resolving the convergence issue inside the Brillouin sphere is the polyhedral gravity model proposed in Werner and Scheeres <cit.>. This model computes the gravity field from a constant density polyhedron in an analytical form. Given that the polyhedron is usually computed from images, errors are present in the polyhedral model. The sensitivity of the gravity field to perturbations in the polyhedral shape is investigated by Bercovici et al. <cit.>, to assess the coupling between shape and gravity errors for Werner and Scheeres's <cit.> model. One of the main drawbacks of the polyhedral gravity model is that it is associated with a constant density distribution inside the body, which is sometimes in contrast with gravity measurements <cit.>. Even a small density variation may induce large trajectory deviations with respect to a nominal path, leading the spacecraft to miss its target goals and potentially enter impact or escape trajectories. A quantitative example of this deviation is discussed later in this work using the variable density gravity model generated with the proposed approach.
Motivated by the need to compare spherical harmonics coefficients from orbit determination with the ones gathered from constant-density shapes, Werner <cit.> shows how the spherical-harmonics coefficients can be analytically retrieved from a general-shape, uniform-density polyhedral model of the asteroid by combining recursive formulas with trinomial algebra. The sensitivity of the spherical harmonics coefficients to shape variations is investigated by Panicucci et al. <cit.> to understand whether the shape can be a major source of error when comparing orbit determination coefficients to shape-deduced coefficients (e.g., see <cit.>). To improve the fidelity of the forward gravity modeling of non-constant density polyhedra, Chen et al. <cit.> extends Werner <cit.> to variable density under the assumption of a trinomial density distribution. However, this hypothesis forces the density field to be continuous inside the body, forbidding the existence of density jumps. This may not be the case when the asteroid is formed by two different parent bodies or when denser nuclei and internal geological structures are considered.
Another widely used gravity field representation is the mascon model, introduced to model gravity anomalies on the Moon <cit.>. Mascon models <cit.> are typically used to model non-uniform mass distributions. An interesting comparison among different gravity models, spherical harmonics, mascon and polyhedral, is provided by Werner and Scheeres <cit.> for asteroid (4769) Castalia. To the authors' knowledge there is currently no formulation of the spherical harmonics expansion under arbitrary variable density distribution and shape of the asteroid. This paper proposes a pipeline for retrieving such a gravity model by performing analytical integration over a radially discretized polyhedron. This approach thus falls in between the methodologies proposed by Werner and Cheng <cit.> and the classical Mascon approach.
The paper is structured as follows: Section <ref> describes the general formulation of the gravity field by means of a spherical-harmonics expansion, Section <ref> presents the methodology to compute the coefficients of the expansion with non-uniform density, Section <ref> applies the methodology to benchmark cases to test its performances and, finally, Section <ref> summarizes the major findings.
§ GRAVITY FIELD MODELING
The acceleration due to the central gravity field, expressed in the asteroid fixed frame ℬ, can be computed as the gradient of the gravitational potential U(x⃗)
a⃗_ℬ = ∇ U
Being due to a conservative field, the potential has to satisfy the Laplace equation outside the body, i.e.
∇^2 U (x⃗) = 0
A known solution to the Laplace equation, valid outside the Brillouin sphere of the asteroid, is expressed in spherical coordinates through an infinite expansion of the potential U(r,λ,ϕ) into a series of spherical harmonics projected onto a function space spanned by the associated Legendre polynomials. The coordinates r, λ and ϕ are respectively the radial distance from the asteroid, the longitude and the latitude of the spacecraft. To avoid numerical errors and improve accuracy for higher-order terms, this expansion is typically used in the normalized form
U = μ/r ∑_n=0^∞∑_m=0^n (R_0/r)^n P̄_nm(u) (C̄_nm cos(m λ) + S̄_nm sin(m λ))
where u = sin(ϕ). The function P̄_nm(u) denotes the normalized associated Legendre polynomials, while the scaling factor R_0 is a reference distance used to compute the normalized coefficients C̄_nm and S̄_nm. The polynomials P̄_nm(u) are expressed as a function of the Legendre polynomials P_n as <cit.>
P̄_nm(u) = N_nm (1-u^2)^m/2 d^m P_n/du^m
with
N_nm = √( (n-m)! (2n+1) (2-δ(m)) / (n+m)! )
where δ(·) is the Kronecker delta:
δ(m) = 1 if m = 0, and 0 otherwise.
To speed up the computation of the fully-normalized Legendre polynomials in Equation <ref>, the recursive formulas introduced by Rapp <cit.> are used. The sequence is anchored on the first three elements as
00u = 1
10u = √(3) u
22u = √(3)√(1-u^2)
and then the terms are recursively computed as in Equation <ref>.
For m = n, n > 2,
nnu = √(2n+12n)√(1-u^2)n-1n-1u
for m = n-1,
nn-1u = √(2n+3) u n-1n-1u
for m < n-1,
nmu = Γ_n,mn-1mu +
- Γ_n,mΓ_n-1,mn-2mu
with
Γ_n,m = √((2n+1) (2n-1)(n-m) (n+m))
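For reference, the recursions above can be implemented in a few lines. The following Python sketch is illustrative only and is not the authors' code: it uses the standard fully normalized anchors and the Γ_n,m defined above, and the anchor indices and coefficient conventions are assumptions that should be cross-checked against the equations in the text.

```python
# Illustrative sketch (not the authors' code): fully normalized associated
# Legendre values via standard Rapp-style recursions. Anchors and coefficient
# conventions are assumptions to be checked against the paper's equations.
import numpy as np

def normalized_legendre(n_max, u):
    """Return P[n, m] = Pbar_{nm}(u), 0 <= m <= n <= n_max, with u = sin(phi)."""
    P = np.zeros((n_max + 1, n_max + 2))   # spare column so P[n, n+1] = 0
    s = np.sqrt(max(0.0, 1.0 - u * u))     # cos(phi)
    P[0, 0] = 1.0
    if n_max >= 1:
        P[1, 0] = np.sqrt(3.0) * u
        P[1, 1] = np.sqrt(3.0) * s
    for n in range(2, n_max + 1):
        # diagonal and first sub-diagonal terms
        P[n, n] = np.sqrt((2.0 * n + 1.0) / (2.0 * n)) * s * P[n - 1, n - 1]
        P[n, n - 1] = np.sqrt(2.0 * n + 1.0) * u * P[n - 1, n - 1]
        # vertical recursion for m < n - 1, using Gamma_{n,m}
        for m in range(0, n - 1):
            g_nm = np.sqrt((2.0 * n + 1.0) * (2.0 * n - 1.0) / ((n - m) * (n + m)))
            g_n1m = np.sqrt((2.0 * n - 1.0) * (2.0 * n - 3.0) / ((n - 1 - m) * (n - 1 + m)))
            P[n, m] = g_nm * u * P[n - 1, m] - (g_nm / g_n1m) * P[n - 2, m]
    return P
```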
Expressing the spacecraft position in Cartesian coordinates, i.e. r⃗_ℬ = x i⃗+y j⃗+z k⃗, the gradient of the pontetial can be expressed as <cit.>
a⃗_ℬ = ∇ U =[∂ U∂r⃗]^T =
= ∂ U∂ r[∂ r∂r⃗]^T + ∂ U∂λ[∂λ∂r⃗]^T + ∂ U∂ϕ[∂ϕ∂r⃗]^T =
= [ (1r∂ U∂ r - zr^2 η∂ U ∂ϕ) x - (1η^2∂ U∂λ) y] i⃗ +
+ [ (1r∂ U∂ r - zr^2 η∂ U ∂ϕ) y + (1η^2∂ U∂λ) x] j⃗ +
+ [ 1r∂ U∂ r z + ηr^2∂ U∂ϕ] k⃗
with η = √(x^2+y^2). The partial derivatives ∂ U∂ r and ∂ U∂λ are given by
∂U/∂r = -μ/r^2 ∑_n=0^∞∑_m=0^n (R_0/r)^n (n+1) P̄_nm(u) (C̄_nm cos(m λ) + S̄_nm sin(m λ))
∂U/∂λ = μ/r ∑_n=0^∞∑_m=0^n (R_0/r)^n m P̄_nm(u) (-C̄_nm sin(m λ) + S̄_nm cos(m λ))
The derivative ∂U/∂ϕ is more challenging since it involves the derivatives of the associated Legendre polynomials:
∂U/∂ϕ = μ/r ∑_n=0^∞∑_m=0^n (R_0/r)^n ∂P̄_nm(sin(ϕ))/∂ϕ (C̄_nm cos(m λ) + S̄_nm sin(m λ))
There are several recursive formulas to compute the derivative of the fully normalized Legendre polynomials. For this implementation, this is achieved by combining the definition of the normalized Legendre polynomials in Equation <ref> with the set of recursive formulas in Equation <ref>, leading to
∂P̄_nm(sin(ϕ))/∂ϕ = [∂P̄_nm(u)/∂u] cos(ϕ) = -m tan(ϕ) P̄_nm(u) + K_n,m P̄_n,m+1(u)
with
K_n,m = √((n-m)(n+m+1)) for m > 0, and K_n,m = √(n(n+1)/2) for m = 0.
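Putting the pieces of this section together, a possible implementation of the acceleration evaluation is sketched below. This is illustrative only, not the authors' code: it assumes the `normalized_legendre` helper from the previous sketch and coefficient arrays `C[n, m]`, `S[n, m]` normalized consistently with the text.

```python
# Illustrative sketch (not the authors' code): spherical-harmonics acceleration
# in the body-fixed frame from normalized coefficients C[n, m], S[n, m].
# Not robust on the z-axis (eta -> 0) or exactly at the poles.
import numpy as np

def sh_acceleration(r_vec, mu, R0, C, S):
    x, y, z = r_vec
    r = np.linalg.norm(r_vec)
    eta = np.sqrt(x * x + y * y)
    lam = np.arctan2(y, x)             # longitude
    phi = np.arcsin(z / r)             # latitude
    u = z / r
    n_max = C.shape[0] - 1
    P = normalized_legendre(n_max, u)  # from the previous sketch

    dU_dr, dU_dlam, dU_dphi = 0.0, 0.0, 0.0
    for n in range(n_max + 1):
        fac = (R0 / r) ** n
        for m in range(n + 1):
            cosm, sinm = np.cos(m * lam), np.sin(m * lam)
            CS = C[n, m] * cosm + S[n, m] * sinm
            dU_dr += -mu / r**2 * fac * (n + 1) * P[n, m] * CS
            dU_dlam += mu / r * fac * m * P[n, m] * (-C[n, m] * sinm + S[n, m] * cosm)
            K = np.sqrt((n - m) * (n + m + 1)) if m > 0 else np.sqrt(n * (n + 1) / 2.0)
            dP_dphi = -m * np.tan(phi) * P[n, m] + K * P[n, m + 1]
            dU_dphi += mu / r * fac * dP_dphi * CS

    # assemble the Cartesian acceleration in the body-fixed frame
    ax = (dU_dr / r - z / (r**2 * eta) * dU_dphi) * x - (dU_dlam / eta**2) * y
    ay = (dU_dr / r - z / (r**2 * eta) * dU_dphi) * y + (dU_dlam / eta**2) * x
    az = dU_dr / r * z + eta / r**2 * dU_dphi
    return np.array([ax, ay, az])
```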
§ NORMALIZED COEFFICIENTS COMPUTATION
The coefficients C̄_nm and S̄_nm are a function of the asteroid shape and density distribution. In particular, they are obtained by integrating a set of trinomial shape functions c_nm and s_nm over the body <cit.>
[ C̄_nm; S̄_nm ] = ∭_B [ c_nm(x,y,z); s_nm(x,y,z) ] dm = ∭_B ρ [ c_nm(x,y,z); s_nm(x,y,z) ] dx dy dz
If the integration is performed over a polyhedron, split into a collection of tetrahedra, the full integral is equivalent to the summation of the integrals over each tetrahedron
[ C̄_nm; S̄_nm ] = ∑_s = 1^n_s ( ∭_s ρ [ c_nm(x,y,z); s_nm(x,y,z) ] dx dy dz )
Lien and Kajiya <cit.> show that, under the hypothesis of constant density, each of these integrals can be solved analytically if a proper change of coordinates is performed.
A similar approach is followed here to derive the formulation with variable density distribution. Consider the single polyhedron shown in Figure <ref>a
The general position inside the tetrahedron can be identified as a linear combination of the vertex coordinates C⃗_1, C⃗_2 and C⃗_3
[ x; y; z ] = [ C⃗_1 C⃗_2 C⃗_3 ][ X; Y; Z ]
or better expressed in compact form as
x⃗ =J⃗_s X⃗
where the matrix J⃗_s is the Jacobian of the transformation. This change of coordinates allows to perform the integration in Equation <ref> over a standard simplex, as shown in Figure <ref>b. From Equation <ref> thus follows
[ nm; nm ] = ∑_s = 1^n_s∭_ss( ρ[ c_nm(X⃗); s_sm(X⃗) ] det(J⃗_s)) dX⃗
Since c̅_nm(x⃗) and s̅_nm(x⃗) are trinomials of order n, and the transformation in Equation <ref> is linear, c̅_nm(X⃗) and s̅_nm(X⃗) will also be trinomials of order n. The shape functions can then be expressed as
[ c_nm(X⃗); s_sm(X⃗) ] = ∑_i+j+k = n([ α_ijk; β_ijk ] X^i Y^j Z^k) =
= ∑_i+j+k = n(p⃗_ijk X^i Y^j Z^k)
The components α_ijk and β_ijk of vector p⃗_ijk can be retrieved through a series of recursive relations anchored on certain initial conditions. In this paper the details of this procedure are omitted and the coefficients can be considered to be known. The interested reader can find a comprehensive discussion of this process in <cit.>.
Combining Equations <ref> and <ref> the steps in hold. Note that this does not contain any assumption on the internal density distribution of the body.
Consider now the uniform discretization of the polyhedron, shown in Figure <ref>, in n_q segments each of them at a constant density ρ_q = ρ(x⃗_q) with x⃗_q being the geometric center of segment q.
The integration over the standard simplex can then be split further more in a summation of integrals over each segment, leading to
[ nm; nm ] = ∑_s = 1^n_s( det(J⃗_s) ∑_i+j+k = n([ α_ijk; β_ijk ]∑_q=1^n_qρ_q . .
. . (∫_0^q^+ g_ijk(Z) dZ-∫_0^q^- g_ijk(Z) dZ)))
with
g_ijk(Z) = ∫_0^1-Z∫_0^1-Z-Y X^i Y^j Z^k dX dY
Lien and Kajiya <cit.> show that this integral can be computed analytically as
g_ijk(Z) = β(j+1,i+2)/(i+1) (1-Z)^(i+j+2) Z^k
where β is the Beta function, also known as Euler's integral, with tabulated values. By substituting the definition of g_ijk(Z) in Equation <ref> follows that
[ nm; nm ] = ∑_s = 1^n_s( det(J_s) ∑_i+j+k = n([ α_ijk; β_ijk ]. .
. . ∑_q=1^n_q(ρ_q (h_q^+,i,j,k. .
- h_q^-,i,j,k) ) ) )
with
h_q^+,i,j,k = β(j+1,i+2)/(i+1) β̃(q^+, k+1, i+j+3)
and β̃ is the incomplete beta function, which is also a known, tabulated function.
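For illustration, the innermost evaluation in the formula above can be coded directly with standard special functions. The sketch below is not the authors' code: the trinomial coefficients, the segment density values ρ_q and the segment bounds along Z are assumed to be provided by the rest of the pipeline, and SciPy's regularized incomplete beta is rescaled to the non-regularized form used in the text.

```python
# Illustrative sketch (not the authors' code) of the radial sum over
# constant-density segments using (incomplete) beta functions.
import numpy as np
from scipy.special import beta as beta_fn, betainc

def incomplete_beta(x, a, b):
    """Non-regularized incomplete beta B(x; a, b)."""
    return betainc(a, b, x) * beta_fn(a, b)

def h_value(z_upper, i, j, k):
    """h(z; i, j, k) = beta(j+1, i+2)/(i+1) * B(z; k+1, i+j+3)."""
    return beta_fn(j + 1, i + 2) / (i + 1) * incomplete_beta(z_upper, k + 1, i + j + 3)

def segment_sum(rho_q, z_lo, z_hi, i, j, k):
    """sum_q rho_q * (h(z_hi[q]) - h(z_lo[q])) for one trinomial term (i, j, k)."""
    rho_q, z_lo, z_hi = map(np.asarray, (rho_q, z_lo, z_hi))
    return np.sum(rho_q * (h_value(z_hi, i, j, k) - h_value(z_lo, i, j, k)))
```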
§ VALIDATION
The expression provided in Equation <ref> provides an exact way of computing the normalized coefficients nm and nm starting from a generic shape model of the asteroid and a generic density distribution ρ(x⃗). Apart from defining the shape of the gravity field, the normalized coefficients provide also important information on the inertia properties of the body. For example, the Center of Mass (CoM) of the body can be derived as:
r⃗_CoM,ℬ = [ C̄_11; S̄_11; C̄_10 ] √(3) R_0
This can be used to validate the proposed methodology. Five test cases are investigated: the first three are toy problems designed to validate the approach comparing the CoM position computed from the coefficients with known analytical solutions while the other two presents a more realistic application case to asteroid (433) Eros in which the acceleration computed with spherical harmonics model is compared against the one obtained with a Mascon model.
The first case is the one of a uniform density sphere with density and radius R_0 = 500 m. Being a sphere with constant density the CoM is clearly in its center, as illustrated in Figure <ref>(a). The second case considers the same body but introduces a density gradient between two hemispheres, see Figure <ref>(b). In particular the density is assumed to be
ρ( x⃗) = ρ_1 = 3204 kg/m^3 x ≥ 0
ρ_2 = 1335 kg/m^3 x < 0
In this case the CoM will only have an x component that can be analytically computed as
x_CoM = (3/8) R_0 (ρ_1 - ρ_2)/(ρ_1 + ρ_2)
The third case considers instead the asteroid (486958) Arrokoth, selected for its peculiar shape (see Figure <ref>). A uniform density distribution is assumed here, with a density of 235 kg/m^3 and a center of mass position, both estimated by Keane et al. <cit.>.
The last two examples refer to asteroid Eros. Thanks to the extensive mapping performed by NEAR <cit.>, Eros is the small body for which higher-fidelity models are available. A good description of its gravity field is provided by Garmier et al. <cit.>. While the study reconstructs the asteroid density to be almost uniform inside the body, gravitational anomalies are detected that could be due to the existence of denser areas in the central region.
To test the accuracy of the gravity model the acceleration obtained from Equation <ref> is compared with the one computed with a variable density Mascon model <cit.> with the same density distribution assumption. To make the computation affordable, the model is derived starting from a 50k polyhedron model[<https://3d-asteroids.space/asteroids/433-Eros>] and performing a mesh simplification to obtain a polyhedron with 12290 vertices and 24576 faces used to compute the gravitational coefficients.
First, a uniform density model is considered with a density of 2670 kg/m^3. Figure <ref> shows the error in the acceleration between the two models over an ellipsoid with semi-axes of 30 km × 20 km × 20 km, seen from the asteroid north pole. The figure shows an error that is always below 1 mgal = 1e-5 m/s^2. This is less than one order of magnitude below the perturbation due to non-spherical gravity at that distance from Eros, as shown in Rizza et al. <cit.>. Also note that the peaks in the error are clearly linked to the regions closest to the Brillouin sphere.
Then, a variable density distribution with an inner core whose density is 10% larger than the nominal value is considered. In particular,
ρ( r⃗) = ρ_1 = 2937 kg/m^3 r ≤ 5 km
ρ_2 = 2670 kg/m^3 r > 5 km
This density profile, shown in Figure <ref>, leads to an acceleration error that is slightly larger than in the previous case (see Figure <ref>), but still always below 1 mgal.
To show the effect of density variations on the spacecraft trajectory, a comparison is presented in the following. Considering the initial condition shown in <ref>, the trajectory is propagated for 24 h using the gravity fields generated from the two previously described density models.
Inertial and asteroid fixed trajectories are shown in Figure <ref> while the error on position and velocity is reported in Figure <ref>.
The observed effect of the variable density gravity field is to induce a cumulative error of more than 10 km and around 0.7 m/s at the end of the considered time horizon.
§ CONCLUSIONS
In this paper a pipeline for semi-analytical computation of normalized spherical-harmonics coefficients is proposed. The algorithm takes as input only a shape model of the asteroid and a generic density distribution function ρ(x⃗). Different levels of accuracy can be achieved simply by increasing the fidelity of the shape model or the number of radial discretization intervals n_q. The methodology has been tested against analytical results, literature findings and numerical simulations, showing good accuracy. The set of generated coefficients could be used on-board to efficiently compute the effects of non-uniform density distribution without relying on computationally heavier gravity models which are difficult to run on-board.
§ REFERENCES
[heading=none]
|
http://arxiv.org/abs/2409.02899v1 | 20240904174308 | Toward 2D Dynamo Models Calibrated by Global 3D Relativistic Accretion Disk Simulations | [
"Matthew D. Duez",
"Courtney L. Cadenhead",
"Zachariah B. Etienne",
"Bernard Kelly",
"Leonardo R. Werneck"
] | astro-ph.HE | [
"astro-ph.HE",
"gr-qc"
] |
§ ABSTRACT
Two-dimensional models assuming axisymmetry are an economical way to explore the long-term evolution of black hole accretion disks, but they are only realistic if the feedback of the nonaxisymmetric turbulence on the mean momentum and magnetic fields is incorporated. Dynamo terms added to the 2D induction equation should be calibrated to 3D MHD simulations. For generality, the dynamo tensors should be calibrated as functions of local variables rather than explicit functions of spatial coordinates in a particular basis. In this paper, we study the feedback of non-axisymmetric features on the 2D mean fields using a global 3D, relativistic, Cartesian simulation from the IllinoisGRMHD code. We introduce new methods for estimating overall dynamo alpha and turbulent diffusivity effects as well as measures of the dominance of non-axisymmetric components of energies and fluxes within the disk interior. We attempt closure models of the dynamo EMF using least squares fitting, considering both models where coefficient tensors are functions of space and more global, covariant models. None of these models are judged satisfactory, but we are able to draw conclusions on what sorts of generalizations are and are not promising.
Toward 2D Dynamo Models Calibrated by Global 3D Relativistic Accretion Disk Simulations
Leonardo R. Werneck
September 9, 2024
=======================================================================================
§ INTRODUCTION
Untilted black hole accretion disks are three-dimensional, turbulent systems, but their fluid and field profiles can be viewed as combinations of an axisymmetric, slowly evolving “background” and non-axisymmetric “fluctuations.” Often, only the background is of interest, with fluctuations considered primarily for their feedback effects on the background. This perspective motivates the use of 2D (axisymmetric) simulations to model accretion flows.
Simply evolving the general relativistic magnetohydrodynamic (GRMHD) equations in 2D will almost certainly be inadequate; MHD flows are often not only quantitatively but even qualitatively different in 2D versus 3D, as illustrated by opposite turbulent energy cascades <cit.> and the 2D anti-dynamo theorem <cit.>. What we aim for are 2D evolutions that “look like” 3D evolutions. Note that even this is ambiguous. Do we want 2D evolutions that resemble the azimuthal average of 3D evolutions, or do we want 2D evolutions that resemble representative 2D slices of 3D evolutions? Both approaches might be interesting, but they will likely be very different. Consider an evolution variable Ψ(r,θ,ϕ,t) with azimuthal Fourier decomposition ∑_mΨ̂_m(r,θ,t)e^imϕ. Azimuthal averaging picks out the m=0 contribution, and if this is subdominant, the azimuthal average Ψ will be significantly smaller than a 2D slice of a 3D realization. We might also expect most of the (r,θ) fluctuation to average out, so that azimuthal averaging might act as a low-pass filter, removing most eddies. Given these differences, one must choose when designing a model whether the 2D fields represent azimuthal averages or a representative slice. For this paper, we consider 2D models of azimuthally averaged 3D evolutions.
One might also worry that the distinction of background versus fluctuating turbulence is too sharp. In terms of the Fourier decomposition of fields in the azimuthal direction, it seems safer to distinguish high-m modes from the background than, for example, m=1 or m=2 modes. A possible criterion would be to consider an m 0 mode is “turbulence” only if the eddy turnover time is less than the orbital period. For simplicity, consider unmagnetized incompressible turbulence with turbulent velocity ∼v_T at the largest length scale H (the disk scale height) and the usual Kolmogorov energy cascade. At scale ℓ, the velocity scale is v_ℓ and the timescale is τ_ℓ≈ℓ/v_ℓ. As specific kinetic energy density flows to lower scales at the same rate ≈ v_ℓ^2/τ_ℓ for all ℓ, it follows that τ_ℓ≈ H^1/3ℓ^2/3v_T^-1. Turbulent modes then have τ_ℓ<Ω^-1, where Ω is the orbital frequency. Estimating m≈ R/ℓ at radius R, the eddy time is then sufficiently short if
m > m_ crit≈ R H^1/2(Ω/v_T)^3/2 .
It might be necessary to evolve m<m_ crit (a compromise between 2D and 3D, assuming m_ crit turns out to be a small number), with the m>m_ crit contribution safely modeled by a turbulent/dynamo closure.
Filtering away the fluctuating component alters the evolution equations, as can be seen by azimuthally averaging the Newtonian induction equation:
∂_t𝐁̄ = \overline{∇× (𝐁×𝐯)}
= ∇× \overline{(𝐁×𝐯)}
≡∇× (𝐁̄×𝐯̄ + Δ 𝐄) .
To evolve these equations, we require a closure condition, providing the dynamo EMF Δ 𝐄 as a function of azimuthally averaged variables. A commonly considered closure <cit.> is
Δ E^i = α^i_j B^j + η^i_j J^j
where J^j is the current, B^j is the magnetic field, α^i_j is the dynamo alpha tensor, and η^i_j is the turbulent magnetic diffusivity tensor.
Dynamo corrections to 2D GRMHD simulations have been undertaken by a number of groups <cit.>, usually assuming an isotropic dynamo: α^i_j=α_ dynδ^i_j, η^i_j=η_ dynδ^i_j. The α_ dyn term enables the dynamo alpha effect: toroidal field produces poloidal field. The alpha effect enables 2D GRMHD simulations to maintain magnetic fields at the strength seen in 3D simulations. (Note that using it for this reason implies a representative 2D slice
rather than an azimuthal average interpretation.) The η_ dyn term
is the turbulent magnetic diffusivity, which acts like a resistivity. As α_ dyn is a pseudoscalar, it is expected to switch sign at the equator, so some such spatial dependence on α_ dyn is often stipulated.
Dynamo terms are also sometimes added to 3D GRMHD simulations <cit.>. In this case, the distinction between background and fluctuation (and hence the meaning of averaging) is done differently—the fluctuation is the MHD flow at spatial scales below the grid scale. It is not clear a priori if the same Δ 𝐄 model should apply to the two cases.
Three-dimensional MHD simulations can be used to extract α^i_j and η^i_j. Gressel and Pessah <cit.> calculated these using a test-field method. They studied shearing box simulations and averaged over boxes, so their distinction of mean versus fluctuating fields was not quite the same as the azimuthal average, but they were able to study dependence on shear and vertical field. Assuming α^i_j=α_ dynδ^i_j and η^i_j=0, Hogg and Reynolds <cit.> used global MHD simulations to extract α_ dyn in each hemisphere. Bendre <cit.> introduced a method of extracting α^i_j and η^i_j by least-squares fitting via singular value decomposition (SVD), which they applied to radiatively inefficient accretion disks in Dhang <cit.>. Both test-field and SVD methods find deviations in dynamo tensors from isotropy and report fairly consistent magnitudes. Dhang note the difficulty of extracting a clear dynamo signal in the disk region, as well as other indications that the field and flow are mostly non-axisymmetric. The SVD method has also been used for solar dynamo and binary neutron star remnant calculations <cit.>.
The recent study by Jacquemin-Ide <cit.>, using GRMHD disk simulations to study dynamo action from the nonlinear evolution of the magnetorotational instability, also presents some relevant findings. They reported that m=1 to m=3 modes are crucial in dynamo generation of poloidal field (perhaps suggesting a suitable m_ crit).
They also emphasize the importance of magnetic flux advection from the outer to the inner disk. This might be a less prominent effect in the small disks (resembling short gamma ray burst setups) often studied in compact binary merger contexts. Finally, they do estimate the viability of an α_ dyn and η_ dyn dynamo closure using correlation coefficients between Δ E_ϕ and B_ϕ or J_r. The average Δ E_ϕ-B_ϕ correlation oscillates about zero and does not show a significant time-average; the Δ E_ϕ-J_r correlation does have a time average clearly distinct from zero, but is strongest with a time offset between cause and effect, suggesting it is mediated by additional dynamics.
The number of studies that extract dynamo coefficients from global disk simulations, particularly relativistic simulations, remains small given the parameter space of disk states, and not all existing results have high statistical significance. Furthermore, dynamo coefficient extraction is of two types: either a single average coefficient is extracted for the entire disk (or a hemisphere thereof), or dynamo coefficients are extracted at each (r,θ) point in space. These spatial dependencies are presumably proxies for dependence on the local physical quantities and their derivatives. If these dependencies could be made explicit, the resulting model for Δ E^i with only a few global fitting constants (as opposed to fitting scalar functions of space) could be expressed in covariant tensor form, making it immediately generalizable to general spatial coordinate systems and more easily generalizable to other accretion systems (varying disk thickness, magnetic flux, and rotation law).
In this paper, we present a new analysis of the azimuthally averaged evolution of a magnetorotationally turbulent disk around a Kerr black hole. We use data from the Cartesian numerical relativity code IllinoisGRMHD <cit.>. We present a detailed analysis of the degree of non-axisymmetry, introducing measures of its dominance in magnetic energy, kinetic energy, and angular momentum transport. We find that most of the magnetic field and non-azimuthal velocity field are averaged out by azimuthal averaging, and momentum transport is predominantly non-axisymmetric. Thus, azimuthally averaged 3D ideal GRMHD resembles viscous hydrodynamics more than 2D ideal GRMHD. We also propose ways of estimating an average α_ dyn and η_ dyn. An elegant definition of the former comes from the Lorentz invariants of the azimuthally averaged field tensor. For the latter, we take advantage of the fact that, at least for the first 10000 M, 2D ideal GRMHD overpredicts the magnetic energy compared to azimuthally averaged 3D GRMHD, so we estimate η_ dyn by the resistivity needed to achieve this level of field suppression. For this purpose, we add a new phenomenological resistivity to the HARM code and compare with the 3D results.
Next, we attempt to extract α^i_j and η^i_j using SVD least-squares fitting. We consider models for which coefficients are functions of (r,θ), functions of θ, or functions of local scalars and pseudoscalars (with spacetime dependency only from those scalars and pseudoscalars). We introduce norms for the error in the best-fit model and variance in coefficient extraction to easily assess the reliability of models. None of the models meet all of our pre-set standards.
Since these models assuming eq:NewtonianDynamo are not judged satisfactory, we assess certain alternative classes of models which maintain the same basic structure of the dynamo EMF.
First, we consider whether a successful global model could be created if the RMS deviation from axisymmetry of the velocity or magnetic field were known. To be usable as a closure condition, an evolution equation for one of these variables that tracks the true evolution well enough would have to be introduced, similar to the evolution of the mean turbulent kinetic energy in k-ϵ models <cit.> (which might be generalized to MHD <cit.>). In fact, such scalars do not provide acceptable dynamo models, so the motivation to devise such evolution equations does not arise. Second, we ask whether the residual EMF subtracting off the EMF computed from m=0 to m=3 components of 𝐯 and 𝐁 is more amenable to dynamo models of the form eq:NewtonianDynamo. It is not. We conclude that more general models for Δ E^i are likely required.
This paper is organized as follows. In Sec. <ref>, we review the GRMHD and dynamo equations. In Sec. <ref>, we provide details on the 3D simulation, in particular analyzing the contributions of non-axisymmetric fields. In Sec. <ref>, we describe how to produce estimates of the isotropic α_ dyn, η_ dyn. In Sec. <ref>, we describe how we produce fits to α^ij, η^ij and present quality of fits. We summarize and present conclusions in Sec. <ref>.
We use units G=c=1 throughout. Latin letters from the beginning of the alphabet (a–f) are spacetime indices, running 0–3. Indices i, j, k are spatial, running 1–3. When writing equations in geometric form, vectors and forms are written in boldface, with 2-forms getting a “2” superscript prefix (e.g., 2𝐁).
§ AN ANALYTIC DYNAMO MODEL
For use in a relativistic code, the dynamo closure condition must at least be spatially covariant; this will account for the unpredictable evolution of the coordinate system or the deliberate use of curvilinear coordinates or coordinates with designed radial or angular concentrations. Azimuthal averaging introduces a preferred direction, the azimuthal Killing vector 𝐞_ϕ≡∂/∂ϕ, which generally will leave an imprint in the α and η tensors, so that 3D covariant equations for these tensors will be expected to include 𝐞_ϕ. Whether the equation must be 4D covariant, i.e., Lorentz covariant, is more debatable. It could be argued that the averaging procedure itself, which is defined to be azimuthal average at a fixed time, breaks Lorentz invariance. Indeed, subgrid and effective viscosity prescriptions that are purely spatial, which have been widely and successfully used in numerical relativity <cit.>, can have similar justifications. On the other hand, in one case, we found it important to keep terms first-order in velocity to properly recover the Newtonian limit <cit.>.
Assume a foliation of the spacetime with the usual normal to the slice ñ = -N 𝐝𝐭 and 3-metric γ_ab = g_ab + n_a n_b. The field tensor associated with the azimuthally averaged fields is
F^ab = n^a E^b - n^b E^a + ϵ^abcB_c ,
⋆ F^ab = B^a n^b - B^b n^a + ϵ^abcE_c ,
where E· n = B· n = 0, and ϵ^abc=n_dϵ^abcd. (Here ϵ^abcd=|g|^-1/2[abcd] is the Levi-Civita tensor and [abcd] is the totally antisymmetric Levi-Civita symbol.) For the rest of this section, we will suppress lines above averaged quantities, assuming that all quantities are azimuthal averages.
The fluid frame is defined by the azimuthally averaged 4-velocity u^a, which can be decomposed as
u^a = W n^a + 𝒱^a ,
where W is the Lorentz factor and 𝒱, defined so 𝒱^an_a=0, is the Eulerian velocity, not the transport velocity.
The electric field uE^a in the fluid frame,
which is also the Lorentz force, is
uE^a≡ F^ab u_b = W E^a + n^a E^b 𝒱_b + ϵ^abc𝒱_b B_c .
In ideal MHD, this is zero, and the components parallel and perpendicular to 𝐧 must independently vanish, so
MHDE^a = -1/Wϵ^abc𝒱_bB_c .
One can also compute the electric field using the transport velocity 3-vector v^i≡ u^i/u^t (cf. <cit.>):
E_i = -1/Nϵ_ijk(v^j+β^j)B^k ,
where N is the lapse and β^i is the shift.
In the fluid frame, the magnetic field is
uB^a = -⋆F^ab u_b
= W B^a + B^b𝒱_b n^a - ϵ^abc𝒱_b E_c .
A covariant expression that recovers the Newtonian limit (Eq. <ref>) is
F^abu_b = -α^a_b uB^b
+ η^a_b ϵ^cbdeu_c∇_d uB_e .
Note that, in the last term, we can switch freely between ∇ and ∂ since d and e are antisymmetrized.
Another possibility would have been to use the actual current
F^abu_b = -α^a_b uB^b
+ η^a_b uJ^b ,
which is different from the previous equation because of the displacement current ℒ_𝐧𝐄 term in Ampere's law. However, the identification of the diffusivity term with ∇×𝐁 should be considered more fundamental than its identity with 𝐉, because it is motivated by an expansion in terms of one over the length scale of the magnetic field. Hereafter, we will simply denote the curl of 𝐁 as 𝐉.
Unfortunately, eq:uB itself has the electric field in it. Luckily, this term is proportional to velocity and subdominant in most regions. For a dynamo equation with only an isotropic α tensor (η^a_b=0), Most has shown that one can solve for E^a analytically without needing to assume small velocity <cit.>. Instead, we will substitute the ideal MHD electric field (<ref>) in the η term, so that the weak dynamo case is properly recovered. Then
uB^a = W B^a + B^b 𝒱_b n^a - ϵ^abc𝒱_b (-W^-1)ϵ_cef𝒱^eB^f
= B^b𝒱_b n^a + 1/W[B^a + B^b𝒱_b 𝒱^a] ,
where we have used W^2=1+𝒱^2.
As for the curl term in (<ref>)
ϵ^abcdu_a∂_c uB_d = -W ϵ^bcd∂_c uB_d + ϵ^abcd𝒱_a ∂_c uB_d .
The first is just a spatial curl, albeit of the boosted B, which is easy to compute. The second term is, in general, difficult to calculate because it will include time derivatives which are less easily available to an evolution code or post-processing analysis. However, in most cases, this term will be small. Suppose we are willing to limit ourselves to cases in which 𝒱^i and n^i are small enough (implying that the shift vector is small, which is usually the case when not very close to a black hole for commonly chosen gauge conditions) that we can keep only terms at lowest order in these. Then 𝒱_0=B_0=0, and for spatial components of the free index b the index c must be the one that is the time component. In the weak-dynamo limit, the time derivative of B^a is proportional to 𝒱^i, and for slowly-evolving metrics, ∂_tB_a is proportional to ∂_t B^b. So this term, which looks first-order in 𝒱, is actually second-order, and will be dropped with slightly easier conscience. We also drop the term B^b𝒱_b n^a in uB^a (see Eq. <ref>) as second-order.
For the component of Eq. (<ref>) perpendicular to 𝐧, and using (<ref>), we are left with
W E^a = -ϵ^abc𝒱_b B_c
- α^a_b W^-1 [B^b + B^c 𝒱_c 𝒱^b]
- η^a_b W ϵ^bed∂_e [(B_d + B^c𝒱_c 𝒱_d)W^-1] .
§ NON-AXISYMMETRIC FEATURES OF THE 3D SIMULATION
§.§ Methods of 3D simulation
The simulation we used was produced as part of the Event Horizon GRMHD code comparison project <cit.>. The fixed spacetime is a Kerr black hole with dimensionless spin parameter a/M = 0.9375. The gas is an ideal gas with adiabatic index Γ=4/3. The initial state of the gas has constant specific angular momentum, defined as ℓ = u^tu_ϕ. In the absence of magnetic forces, the gas forms an equilibrium torus with an inner radius of 6M and maximum density at 12M. We seed a magnetic field at t=0, with the maximum of the magnetic pressure P_B^ max and the maximum of the gas pressure P_ gas^ max related as P_ gas^ max/P_B^ max=100. The field is calculated from a toroidal vector potential A_ϕ∝ max(ρ/ρ_ max-0.2,0). This initial state produces a “Standard and Normal Evolution” (“SANE”) accretion flow, i.e., one for which magnetic flux on the horizon does not significantly affect accretion.
The IllinoisGRMHD <cit.> simulation uses a Cartesian FMR grid (utilizing the Cactus/Carpet infrastructure), in which the coarsest grid is a cube with a half-sidelength of 1750M. There are 7 different resolutions used, including 6 levels of refinement atop this coarse grid, and the finest-grid cube has a resolution of Δ x=Δ y=Δ z ≈ M/4.388571. The finest grid is composed of four cubes, each with a half-sidelength of 27.34375M, and is larger than the black hole, which has a radius of 1.348M. IllinoisGRMHD evolves a vector potential at cell edges to exactly preserve a finite difference version of the constraint ∇· 𝐁=0 <cit.>. The magnetized fluid is evolved using a high-resolution shock-capturing scheme, with PPM reconstruction and an HLLE approximate Riemann solver.
The disk was evolved for a time of 10000M, long enough to see many orbits of fully-developed turbulence. Volume data of all evolution variables were stored on a spherical-polar grid. Data was output at a high cadence of 0.57M, but we find it sufficient to use only every Δ t=100M for time averages below. When performing fits for dynamo coefficients, we use timesteps separated by Δ t=300M, where the increased spacing—roughly corresponding to the orbital period at the initial location of maximum density—is chosen to reduce correlation between data at adjacent steps while retaining sufficient data (31 steps), starting at t=1000M, after the magnetic field has become strong and the disk has transitioned from the kinematic to the dynamical regime.
The evolution time is admittedly shorter than usual for dynamo studies. Ordinarily, one would want to observe multiple cycles of the expected dynamo wave. Assuming an αΩ dynamo with α_ dyn∼ 10^-4, a characteristic vertical scale H∼ 10M, and a characteristic orbital frequency Ω∼ 10^-2M^-1, we would expect a dynamo period ∼√(H/(Ωα_ dyn))∼ 10^4M <cit.>, meaning the full evolution time is not more than a cycle, and we will not be able to produce “butterfly” diagrams of the average toroidal field at different latitudes showing multiple dynamo cycles, as has been done in some longer disk simulations (e.g. <cit.>).
We will, however, evolve comfortably long enough to observe Δ 𝐄 acting.
§.§ Overview of 3D data
We define the following averages
\overline{X}(r,θ,t) ≡1/2π∫_0^2π dϕ X(r,θ,ϕ,t) ,
⟨ X⟩(r) ≡∫_T^2Tdt∫_π/3^2π/3 dθ∫_0^2πdϕ√(γ) X(r,θ,ϕ,t)/∫_T^2Tdt∫_π/3^2π/3 dθ∫_0^2πdϕ√(γ) ,
where T=5000M.
To minimize effects of the metric on the spatial dependencies of vector and tensor components, we will use the normed components X^î≡√(γ_ii)X^i and X_î≡√(γ^ii)X_i when generating plots and fitting formulas. (Note that we drop the Einstein summation convention in the above definitions, so there is no sum over i.)
For a quantity X(x_A) which is a nonlinear function of the primitive variables x_A=(B^i,v^i,ρ,P), define the residual as
Δ X = \overline{X} - X(\overline{x}_A) .
Suppose instead of retaining only the average of the spherical-polar components of 𝐯 and 𝐁, one were to retain the azimuthal Fourier decomposition, truncated at a particular m=m_t. Call Ψ_m_t the approximation of Ψ retaining all azimuthal Fourier modes with m≤ m_t. Then define the residual EMF Δ_m𝐄≡\overline{𝐁×𝐯} - \overline{𝐁_m×𝐯_m}. Thus, Δ𝐄 = Δ_0𝐄.
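The following Python sketch (our own illustration, not the analysis pipeline of this paper) shows how the azimuthal average, the truncated fields Ψ_m_t, and the residual EMFs Δ𝐄 and Δ_m𝐄 can be computed with FFTs; the grid sizes and random fields are placeholders, and components are treated as orthonormal-frame components with metric factors ignored.

import numpy as np

nr, nth, nphi = 64, 32, 128
rng = np.random.default_rng(0)
B = rng.normal(size=(3, nr, nth, nphi))    # B components on an (r, theta, phi) grid
v = rng.normal(size=(3, nr, nth, nphi))    # v components on the same grid

def azim_avg(X):
    """Azimuthal (m = 0) average over the phi axis."""
    return X.mean(axis=-1)

def truncate_m(X, m_t):
    """Keep only azimuthal Fourier modes with m <= m_t."""
    Xhat = np.fft.rfft(X, axis=-1)
    Xhat[..., m_t + 1:] = 0.0
    return np.fft.irfft(Xhat, n=X.shape[-1], axis=-1)

def residual_emf(B, v, m_t=0):
    """Delta_m E = avg(B x v) - avg(B_m x v_m)."""
    emf_full = np.cross(B, v, axisa=0, axisb=0, axisc=0)
    Bm, vm = truncate_m(B, m_t), truncate_m(v, m_t)
    emf_trunc = np.cross(Bm, vm, axisa=0, axisb=0, axisc=0)
    return azim_avg(emf_full) - azim_avg(emf_trunc)

dE, dE3 = residual_emf(B, v, 0), residual_emf(B, v, 3)   # Delta E and Delta_3 E
print(dE.shape, dE3.shape)                               # both (3, nr, nth)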
In Fig. <ref>, we plot azimuthal and time averages of Δ E^î, B_î, and J_î. A linear relationship between Δ E^î and B_î or J_î is not immediately suggested to the eye. Of the three vectors, the B_î components are spatially smoothest, while J_î show small-scale behavior. B_î are strongest near the poles, J_î in the disk, and Δ E^î in between.
In Fig. <ref>, we plot azimuthal and time averages of the components of the transport velocity v^î and the EMF residual beyond m=3, denoted Δ_3E^î in accord with the notation introduced above. The structure of Δ_3E^î is similar to Δ E^î, suggesting that the former (representing higher-m modes) significantly contributes to the latter. Velocity is dominated by the disk rotation and the polar outflow; little of the disk turbulence survives the averaging process. We also plot three scalar quantities which might be expected to correlate with Δ_3E^î (but do not), whose discussion we defer to Sec. <ref>.
§.§ Non-axisymmetric energies and fluxes
The azimuthal structure and the effect of azimuthal averaging are illustrated in Fig. <ref>, in which B^r and E_r are plotted as functions of θ and ϕ at one radius and time. The azimuthal average is shown as a bar on the right. We see that, at least for these quantities, azimuthal mean structure dominates only near the poles. At intermediate latitudes (i.e. in and near the disk), there is small-scale structure (high m contribution), which averages out to a value much smaller than the maximum magnitudes.
As in <cit.>, we compute the Maxwell stress averaged over time and angle ⟨ w^rϕ⟩ as a function of r from the full magnetic field in 3D in two ways: by integrating b^r̂b^ϕ̂ in Kerr-Schild coordinates and by integrating in a local frame comoving with the fluid with appropriate comoving volume element. (See <cit.> for details.) We use the frame of the azimuthally averaged velocity for easier comparison with the stress from the azimuthal mean field. [Of course, the azimuthally averaged velocity is itself a nonlocally-defined field, but once computed it has a local value at each point in 3D or 2D (r, θ) space.] Like in <cit.>, we find that the two prescriptions give almost identical results. As is commonly done, we define a Shakura-Sunyaev α_ SS by
α_ SS≡⟨ w^rϕ⟩/⟨ P_G+P_B⟩ ,
where P_G and P_B are the gas and magnetic pressure, respectively.
The result, included in Fig. <ref>, closely resembles the left panel of Fig. 19 in <cit.>, especially the dip at smaller r in the lower resolution results, which is a new check on the consistency of IllinoisGRMHD with the codes with spherical-polar grids. We also compute the Maxwell stress from the azimuthally averaged field 𝐁, using the same MHD formulas and in the azimuthally averaged comoving frame. This is found to provide a much lower stress except very near the black hole, indicating that most of the angular momentum transport in the bulk of the disk accomplished by the magnetic field is done so by the non-axisymmetric field (the fluctuations, not the mean), a point also mentioned by Dhang <cit.>. Of course, using an ideal MHD formula for the stress tensor is not quite appropriate for the azimuthally averaged field tensor, which includes the effect of Δ 𝐄, but this residual EMF is itself a measure of the prevalence of nonaxisymmetry in the field and so does not affect the conclusion.
Because of the difference between the local 3D and azimuthally averaged velocity fields (𝐯 vs 𝐯), there is also a mean Reynolds stress which can transport angular momentum. This is calculated in the frame comoving with the local azimuthally averaged velocity field: t^r̂ϕ̂ = ρ h u^r̂u^ϕ̂. (Note that the azimuthal mean velocity does not contribute, since if the field were axisymmetric, u^î would be zero in the frame comoving with the azimuthal average velocity.) This Reynolds component is lower than the full (primarily non-axisymmetric) Maxwell component but still larger than the Maxwell stress of the mean field.
In addition to the shear-type stresses quantified by α_ SS, there can be a pressure-like contribution to the stress tensor from the non-axisymmetric Reynolds and Maxwell stresses. We define the turbulent pressure as
P_t ≡1/3Δ T^ab(g_ab+u_au_b) .
This is a very rough definition—applied instead to the stress-energy tensor of ideal MHD, it would give P + b^2/6, where b^2=b_ab^a and 𝐛 is the magnetic field in the 𝐮 frame. We find this pressure to be close to the magnetic pressure (of the full field), which even after saturation is about a factor of 10 lower than the gas pressure. (See Fig. <ref>, discussed below, which plots the associated energy densities.) Thus, although turbulence dominates the angular momentum transport, it is very subdominant in pressure effects (e.g., determining the disk height).
We also compute time and angle-averaged energy densities in the disk. For the full magnetic energy density, this is b^2/2, computed using the 3D local 𝐁 and 𝐮. For the magnetic energy of mean field, we use the same formula (justified as above for stresses) using azimuthally averaged quantities. The internal energy density of the gas is P/(Γ-1). The kinetic energy of a fluid with 4-velocity 𝐮_1 relative to a frame 𝐮_2 can be defined as follows. Start from the Reynolds term in the fluid stress tensor T_R^ab(𝐮_1)≡ρ h u_1^au_1^b. The desired energy is in the 𝐮_2 direction, so we must project T_R^abu_2a. For an energy density, we must also specify a 4-velocity indicating the direction of flux of this energy we are considering; this is the frame in which the spatial integral converting energy density to energy would be performed. The natural choices are 𝐮_1 (for comoving frame) and 𝐧 (for regular spatial integration on the slice), and we choose the latter. The Reynolds energy density is then u_2an_bT_R^ab(𝐮_1)=-(𝐮_1·𝐮_2)Wρ h. We then subtract off the rest contribution when 𝐮_1=𝐮_2 to get
e_K = -[(u_1· u_2) + 1]Wρ h .
There are three relevant 3-velocity fields in this problem: 1) 𝐯, the 3D local velocity; 2) its azimuthal mean 𝐯; and 3) the rotational velocity, which is 𝐯 with radial and meridional components removed. From each 3-velocity field, a 4-velocity field can be constructed by imposing the normalization 𝐮·𝐮=-1 (meaning that the t component of the azimuthal mean 4-velocity is not exactly the azimuthal mean of the t component of the local 4-velocity). We thus compute a turbulent kinetic energy density, comparing the 3D local velocity to the azimuthal average, and a rotational kinetic energy, comparing the azimuthal average velocity to the rotational velocity. Energies are plotted in Fig. <ref>. We omit the rotational energy, which dominates all others, as would be expected for an accretion disk. The figure shows that the magnetic energy saturates more than an order of magnitude below the internal energy. However, the magnetic energy of the mean field saturates at a much lower value than this total magnetic energy. This indicates that the magnetic energy is mostly non-axisymmetric (“turbulent”), and we infer that the energy in the turbulent magnetic field is close to the total magnetic energy. The kinetic energy from the difference of azimuthal mean vs actual velocities (the turbulent kinetic energy) dominates over the non-rotational kinetic energy from the mean field, indicating that inside the disk eddy motion dominates over mean poloidal motions. The total magnetic energy is seen to be very close to the turbulent kinetic energy, indicating that the kinetic and magnetic turbulent energies come into equipartition with each other.
§ ISOTROPIC DYNAMO MODELS
§.§ Estimating an isotropic alpha effect
If one knows that the dynamo EMF is of an isotropic alpha form, extracting the pseudoscalar coefficient α_ dyn becomes straightforward. Starting from the Newtonian formula 𝐄=𝐁×𝐯 + α_ dyn𝐁, one can take the dot product of both sides with respect to 𝐁 to get α_ dyn = E· B/B^2. Remembering that B^2≫ E^2 (see Fig. <ref>), we can recognize this as the ratio of the two Lorentz invariants of the electromagnetic field. We may, then, straightforwardly generalize the Newtonian formula as
α_ dyn≡1/2⋆F_abF^ab/F_abF^ab .
This is a pseudoscalar, as expected. Note that these are the invariants of the azimuthally averaged field tensors, not averages of the invariants of the field in 3D.
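A minimal sketch of this extraction follows (our own illustration with placeholder arrays, using the Newtonian relation α_dyn = Ē·B̄/B̄² quoted above, to which the covariant expression reduces when E²≪ B²).

import numpy as np

def alpha_dyn(Ebar, Bbar, eps=1e-30):
    """Isotropic alpha_dyn from azimuthally averaged E and B, shape (3, nr, nth)."""
    num = np.sum(Ebar * Bbar, axis=0)      # E . B
    den = np.sum(Bbar * Bbar, axis=0)      # B^2 (assumed >> E^2)
    return num / (den + eps)

rng = np.random.default_rng(1)
Ebar = 1e-2 * rng.normal(size=(3, 64, 32))   # placeholder averaged fields
Bbar = rng.normal(size=(3, 64, 32))
alpha = alpha_dyn(Ebar, Bbar)
print(alpha.shape)   # averaging over time and radius would then give alpha_dyn(theta)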
Because α_ dyn is expected to change sign across the equator, it would be inappropriate to compute an average over angles at a given radius, or an average over the entire grid (unless one were to presume this parity and perform a sign flip in one hemisphere); it is better to average over time and radius to extract an average α_ dyn as a function of θ. This is plotted in Fig. <ref>.
A hemispherical average pseudoscalar α_ dyn had previously been extracted from a 3D disk simulation by Hogg and Reynolds <cit.>. They find α_ dyn in the range 1–2× 10^-4. In magnitude, this is quite consistent with Fig. <ref>. There has been some disagreement over the sign of α_ dyn, with some finding it positive in the upper hemisphere and some finding it negative (always with the other hemisphere being opposite sign). Our α_ dyn is negative in the Northern hemisphere, consistent with <cit.>.
§.§ Calibrating isotropic diffusivity effect with 2D MHD
To estimate an effective η_ dyn, we use the fact that the energy of 𝐁 inside the disk in the 3D simulation is smaller not only than the magnetic energy in the disk of the non-averaged field in 3D, but also than the magnetic energy for a 2D MHD simulation of the same disk. One could say that the difference in energy is transferred from the mean field to the non-axisymmetric field. Since this field is not captured by the fields tracked in a 2D simulation, it is in a sense equivalent to internal energy. The nonaxisymmetric effect that accomplishes this effective dissipative transfer thus acts as a diffusivity, and one can estimate an effective η_ dyn by finding what level of resistivity is needed to reduce the disk magnetic energy to the same degree.
We wish to implement a version of eq:relativistic_dynamo with α^i_j=α_ dynδ^i_j and η^i_j=η_ dynδ^i_j in a relativistic 2D MHD code. Such codes usually evolve the magnetic field 2-form (or, equivalently, the densitized magnetic field vector) ^2B_ij≡ϵ_ijkB^k ≡ [ijk]B̃^k.
Including an effective resistivity will introduce second spatial derivatives in the induction equation. We wish to obtain a spatially covariant equation while avoiding Christoffel symbols to the extent possible. Thus, we continue with form notation where possible, although the spatial metric necessarily will be invoked in Hodge duals and in creating the B-field 1-form ^1𝐁=B_a dx^a. We introduce an EMF 1-form 𝐑 to include all deviations from ideal MHD. Continuing in form notation, the dynamo-modified induction equation is
∂_t2𝐁=-ℒ_𝐯2𝐁 + d𝐑 .
For the standard isotropic dynamo, 𝐑 is
𝐑^ dyn=α_ dyn1𝐁 + η_ dyn⋆ d 1𝐁 ,
where in fact we use u𝐁 as defined in Eq. <ref> for the magnetic field. In components,
R^ dyn_k = α_ dynB_k + η_ dyn1/2ϵ_ijkA^ij ,
A^ij = γ^iℓγ^jm(∇_ℓ B_m - ∇_m B_ℓ)
= γ^iℓγ^jm(∂_ℓ B_m - ∂_m B_ℓ) .
(I.e., by using exterior derivatives, we avoid covariant derivatives.) In order to avoid the timestep limitations of explicitly evolving a parabolic system, we promote R_k to an evolution variable driven to its dynamo form:
∂_t R_k = τ_ drive^-1(R^ dyn_k - R_k) .
We set τ_ drive^-1=τ̂_dΩ_K, where Ω_K is the Keplerian angular frequency; we report results for τ̂_d=0.5 but have checked that the behavior of all quantities is insensitive to it within a wide range. Smaller τ_ drive will force R_k to track R^ dyn_k more closely, but we wish to keep η_ dyn/τ_ drive < 1 to avoid acausal propagation speeds.
The coefficient η_ dyn is set to c_sℓ, where ℓ is an effective mixing length; it is natural to suppose that it is similar in magnitude to the mixing length used to set the alpha effective viscosity. This similarity is confirmed for shearing box studies (more precisely, that the turbulent magnetic Prandtl number is roughly unity <cit.>), but it should be remembered that the distinction between mean and turbulent field is different in these studies, so their applicability to azimuthally averaged dynamics cannot be certain. We set
ℓ = α_η c_sΩ_K^-1f_ρ f_β f_ BH ,
where α_η is a free constant (the symbol “α” chosen for its analogy to α_ SS, not to α_ dyn), and the factors f_ρ=ρ/(ρ+10^-4ρ_ init,max), f_β=P_ gas/(P_B+P_ gas), f_ BH= max[0, 1-(r/3M)^-2] suppress resistivity in the low-density region, the magnetosphere, and near the horizon, respectively.
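For concreteness, here is a minimal sketch (our own, not the actual harmpi implementation) of the mixing-length η_dyn prescription and of one explicit step of the driven-EMF equation ∂_t R_k = τ_drive^-1(R^dyn_k - R_k) with τ_drive^-1 = τ̂_d Ω_K; the function signatures and the explicit Euler update are assumptions made for illustration.

import numpy as np

def eta_dyn(alpha_eta, c_s, Omega_K, rho, rho_init_max, P_gas, P_B, r, M=1.0):
    """eta_dyn = c_s * ell, with ell = alpha_eta c_s / Omega_K times the
    suppression factors f_rho, f_beta, f_BH quoted in the text."""
    f_rho = rho / (rho + 1e-4 * rho_init_max)
    f_beta = P_gas / (P_B + P_gas)
    f_BH = np.maximum(0.0, 1.0 - (r / (3.0 * M)) ** -2)
    ell = alpha_eta * c_s / Omega_K * f_rho * f_beta * f_BH
    return c_s * ell

def advance_R(R, R_dyn, dt, Omega_K, tau_hat=0.5):
    """One explicit Euler step driving R_k toward its dynamo form R^dyn_k."""
    return R + dt * tau_hat * Omega_K * (R_dyn - R)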
We implement the above in the harmpi code, a version of the GRMHD code HARM <cit.> publicly available at github <cit.>. Note that, unlike truly 4D covariant resistive MHD codes, we do not evolve the actual electric field, although making 𝐑 an evolution variable arguably accomplishes something similar. Our less-precise implementation does have the advantage that no problem of stiffness arises in the limit of low η_ dyn.
In Fig. <ref>, we show the magnetic energy in the disk as a function of time for several simulations. For the 3D simulation, we plot both total magnetic energy and the (much smaller) magnetic energy of the mean field. We also show harmpi in 2D with ideal MHD, which produces a magnetic energy far too high during the evolution period to match the mean of the 3D field. Better agreement can be achieved by adding an isotropic dynamo η term with α_η=0.018.
In Fig. <ref>, we plot magnetic flux Φ≡∫ |B^r| dA on the horizon. For electromagnetic energy extraction and jet formation, it is important that suppressing the disk magnetic energy does not also suppress the horizon flux. Our implementation of η does not add resistivity outside the high-density region or near the horizon, and we do see that the α_η=0.018 run maintains a horizon flux that is fairly constant and similar to 2D. It remains to be seen how it would fare for longer timescales (not pursued for this project for lack of a 3D comparison) and whether the polar field, which eventually diminishes in 2D, needs to be replenished by a slight alpha effect. Furthermore, it is a separate challenge to show that, for a more extended magnetized disk (e.g., the more common MAD test disks <cit.>), magnetic flux could, over an extended period of time, accurately survive while advecting onto the horizon.
§ NON-ISOTROPIC DYNAMO MODELS
We now attempt to extract α^i_j and η^i_j by linear least-squares fitting. Suppose one attempts a fit using N data points, each data point corresponding to one event—a particular spatial location at a particular time. At each data point, there are three equations, one for each component Δ E^i. For a given component i, combining all data points, we have the matrix equation that we wish to be satisfied
𝐲_i ≅𝐀_i·𝐱_i ,
where 𝐲_i is a length-N column vector made of all the Δ E^i, 𝐱_i is a 6-dimensional column vector made of the α^i_j and η^i_j ∀ j, and 𝐀_i is an N× 6 matrix made of B^j and J^j ∀ j. We use the symbol “≅” to mean that this is an equation, deviation from which will be minimized (but, since there are more conditions than variables, will not be exactly satisfied). The best fit model is a particular 𝐱_i=𝐱_i^ fit.
In choosing a residual function to be minimized, one must decide how data points are weighted. One natural choice is to treat the absolute error at all points as equally important and minimize (𝐲_i -𝐀_i·𝐱_i^ fit)^T
(𝐲_i -𝐀_i·𝐱_i^ fit). This will have the effect that the fit will care more to reduce relative error at points with strong field and high Δ E^i. Alternatively, one might wish to weight relative errors at each point similarly. One can do this by normalizing 𝐀 and 𝐲 at each spacetime event P used in the fit, dividing all components of each by ||y||_P, the L2 norm of the elements of 𝐲:
||y||_P=√(1/3∑_i=1^3y_i^2(P))
It is not obvious which sort of weighting is more desirable, so we have experimented with both. For runs with weighting, we wish to avoid the possibility of fits dominated by attempts to reduce relative error in unimportant weakly-magnetized, very low-EMF regions. Therefore, for these fits, we multiply 𝐀 and 𝐲 at each event P by the weighting factor
w_P = max(||y||_P, 10^-3 y_ max)^-1 ,
where ||y||_P is the L2 norm ||y|| at P, and y_ max is the maximum value of ||y||_P across all P used for the given fit (i.e., the L∞ norm across events of the L2 norm at each event).
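A minimal sketch of the per-component least-squares fit with the optional per-event relative weighting follows; it is our own illustration, with random placeholder arrays standing in for the azimuthally averaged Δ E^i, B^j and J^j at the events entering the fit.

import numpy as np

rng = np.random.default_rng(2)
N = 5000                                    # number of events (placeholder)
dE = rng.normal(size=(N, 3))                # y: Delta E^i at each event
B = rng.normal(size=(N, 3))                 # averaged B^j at each event
J = rng.normal(size=(N, 3))                 # averaged J^j at each event
A = np.concatenate([B, J], axis=1)          # N x 6 design matrix

ynorm = np.sqrt(np.mean(dE ** 2, axis=1))          # ||y||_P at each event
w = 1.0 / np.maximum(ynorm, 1e-3 * ynorm.max())    # weighting factor w_P

def fit_component(A, y, w=None):
    """Best-fit (alpha^i_j, eta^i_j) for one EMF component via SVD-based lstsq."""
    if w is not None:
        A, y = A * w[:, None], y * w
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x

x_fit = np.array([fit_component(A, dE[:, i], w) for i in range(3)])
print(x_fit.shape)    # (3, 6): six dynamo coefficients per component of Delta E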
§.§ Model validation
One measure of model accuracy is how well the model matches the original data. We compute this accuracy by the following measure:
χ_i^2 ≡(𝐲_i -𝐀_i·𝐱_i^ fit)^T
(𝐲_i -𝐀_i·𝐱_i^ fit)/𝐲^T𝐲 .
This is related to the usual chi-squared statistic but with a different normalization. More commonly, χ^2 is normalized by the expected error, so that its magnitude for a good fit is unity. In this case, we normalize by the data itself, so a very good fit would have χ^2≪ 1. This is probably too much to hope for, so we would settle for χ^2<0.5 or so.
We also measure the variance, which provides the uncertainty in the best fit values 𝐱_i^ fit. A variance estimate is provided by the SVD decomposition, valid for Gaussian statistics and uncorrelated data points. In fact, even for the staggering in space and time we use to mitigate correlations, data points remain significantly correlated, so Bendre <cit.> find (and we confirm) that the Gaussian variance significantly underestimates the uncertainty in 𝐱_i^ fit.
Instead, we estimate variance, similarly to Bendre , by computing 𝐱_i^ fit restricting the data used to different subsets of timesteps. In our case, we compute 𝐱_i^ fit for even timesteps (𝐱_ even), for odd timesteps (𝐱_ odd), and for all timesteps (𝐱_ all). Between two fits, 𝐱^A and 𝐱^B, define relative difference for each component i of 𝐱 as δ^A,B_i≡ (x^A_i - x^B_i)^2/(x^A_i + x^B_i)^2. For each i, we get an overall δ_i by taking the maximum of δ^ even, odd_i, δ^ even, all_i, and δ^ odd, all_i. For the fit to have some value, there must be at least one component of 𝐱^ fit that is consistent across fits (and this will probably be the one that contributes most to the fit), so for our overall variance measure, we minimize over components i:
var(𝐱) = min_i[max(δ^ even, odd_i, δ^ even, all_i, δ^ odd, all_i)] .
Some fits involve more than one point in space, so for Models 2 and 4 below we also compute variances by skipping even and odd points in r and (if it varies within the fit) θ. Then for each x_i component we maximize over relative differences for different time and space samples before extracting the best-constrained component.
This is a very generous measure, since we only require one dynamo coefficient to be well constrained. For the model parameters to be meaningful, we require var(𝐱)<0.3.
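The two quality measures can be evaluated as in the following sketch (our own illustration; the coefficient vectors below are placeholders for fits obtained from the even, odd, and full sets of timesteps).

import numpy as np

def chi2(A, y, x_fit):
    r = y - A @ x_fit
    return (r @ r) / (y @ y)

def rel_diff(xa, xb):
    return (xa - xb) ** 2 / (xa + xb) ** 2

def variance_measure(x_even, x_odd, x_all):
    """Max over subset pairs for each coefficient, then min over coefficients."""
    d = np.max([rel_diff(x_even, x_odd),
                rel_diff(x_even, x_all),
                rel_diff(x_odd, x_all)], axis=0)
    return d.min()

x_even = np.array([1.00, 0.50, -0.20, 0.10, 0.02, 0.30])
x_odd  = np.array([1.10, 0.40, -0.10, 0.30, 0.10, 0.20])
x_all  = np.array([1.05, 0.45, -0.15, 0.20, 0.06, 0.25])
print(variance_measure(x_even, x_odd, x_all))   # acceptance thresholds: chi^2 < 0.5, var < 0.3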
Note that the more global the model, the more data points are relevant per fitting parameter. This will tend to reduce variance, since one has very good statistics, but it is more difficult to have acceptable χ^2, since the model must work well for more points. Conversely, for more local models, χ^2 will likely become smaller, but one should not be too encouraged by this, because one can fit anything as the number of conditions drops toward the number of fitting parameters, and the tendency to over-fit will often be revealed in a high variance.
§.§ Pointwise models
We attempt two models for which α^i_j and η^i_j are taken to be functions of spatial coordinates but independent of time. In Model 1, they are taken to be functions of r and θ, i.e., there is an independent fitting procedure applied at each point. This is similar to the model construction of Dhang <cit.>. In Model 2, we take the dynamo coefficients to be functions of θ, i.e., they are presumed to be constant in both r and t. Thus, compared to Model 1, each fit in Model 2 involves more data (corresponding to multiple points along a radial ray), which will tend to result in better variance but worse χ^2. We perform fits both with and without weighting, meaning equations at data points are weighted either with the factor 1 or the factor w in eq:weighting.
Results for model validation are shown in Table <ref>. For presenting average variance and χ^2, we divide the space into two regions: “jet” for the region within π/3 of the axis, and “disk” for all other latitudes. As we see, none of the models are very good by either of our validation measures.
§.§ Global models
We now attempt to fit to a model for which α^ij and η^ij are not explicit functions of spatial coordinates.
For a covariant model, we would also have to build fitting formulas for these tensors out of physically relevant vectors and tensors, e.g., the metric γ^ij or the velocity v^i. Since α^ij is axial, we require at least one pseudoscalar or pseudovector ingredient. The construction of an azimuthally averaged system only makes sense if an approximate azimuthal Killing vector 𝐞_ϕ=∂/∂ϕ has been identified.
We can, thus, assume its availability and expect that it may appear in the formulas for dynamo tensors, both because of its physical relevance (as roughly the direction of mean velocity) and its relevance in constructing the 2D system (which breaks 3D rotational invariance). We will use the normalized vector
ê_ϕ≡ (g_ϕϕ)^-1/2∂/∂ϕ .
We get a pseudovector, corresponding to the direction of the symmetry axis, by taking a curl:
ζ^i ≡ϵ^ijk∇_j(e_ϕ)_k = ϵ^ijk∂_j(e_ϕ)_k .
We will use the normalized version ζ̂≡ζ/|ζ|, which resembles the Cartesian z direction.
Some dynamo models also use ∇ρ, the gradient of the density, or they use a gradient of the turbulent velocity (assuming some knowledge of this) <cit.>.
For a thin accretion disk, the direction of density gradient would be close to sign(z)ζ̂ except on the equator. To build the expected parity factor in α_ dyn (something like cosθ), we instead want a nearly radial third vector, which could be interpreted as the direction of gravity. Since we have already assumed an approximate timelike Killing vector is available to us, 𝐞_𝐭, we might as well use it to define this direction
𝐟 ≡ -∇(e_t· e_t) = -∇g_tt ,
𝐟̂ ≡ 𝐟/|f| .
This then provides the pseudoscalar μ≡f̂·ζ̂, which is close to cosθ. The resulting triad {ê_ϕ, ζ̂, 𝐟̂} provides a suitable vector basis inside the disk. For global models including the polar region, it is not usable because on the poles ζ̂ and 𝐟̂ are parallel or anti-parallel. We can alternatively use the azimuthal Killing vector to define
ϖ ≡ -∇(e_ϕ· e_ϕ) = -∇g_ϕϕ ,
ϖ̂ ≡ ϖ/|ϖ| .
Thus, 𝐟̂ resembles the spherical polar radial direction and is used to define the pseudoscalar μ, while ϖ̂ resembles the cylindrical polar radial direction and is used as a basis vector.
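As a quick orientation check (our own flat-space/Schwarzschild estimate; overall signs depend on orientation conventions), taking g_ϕϕ=r^2sin^2θ and g_tt=-(1-2M/r) gives ζ∝ẑ and 𝐟∝r̂, so that
μ≡𝐟̂·ζ̂≃r̂·ẑ = cosθ ,
consistent with the statement above, while ϖ = -∇ g_ϕϕ is (anti)parallel to the cylindrical radial direction.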
We then have a triad of normalized but not orthogonal vectors in ê_ϕ, ζ̂, and ϖ̂ (and an associated dual cobasis); we denote components in this basis by hatted capital Latin letters, e.g. B_Î=B· e_Î.
The global dynamo model is a tensor equation for constructing the residual EMF Δ 𝐄 in terms of the magnetic field 𝐁, an axial vector, and some polar vector 𝐙, which we choose to be either 𝐉≡∇×𝐁 or B^2𝐯_T, where 𝐯_T is the transport velocity of the fluid. For 𝐁, we report results using u𝐁 (and its curl for 𝐉), although we have also tried using the normal frame 𝐁 with similar results.
The tensor equation is
Δ E^Î = α^ÎK̂B_K̂ + η^ÎK̂Z_K̂ ,
α^ÎK̂(𝐱,t) = a^ÎK̂μ^n_a(Î,K̂) ψ(𝐱,t) ,
η^ÎK̂(𝐱,t) = e^ÎK̂μ^n_e(Î,K̂)ψ(𝐱,t) ,
where n_a(Î,K̂) and n_e(Î,K̂) are either 0 or 1, whichever is needed to obtain consistent parity for all terms.
The scalar function ψ(𝐱,t) can be any scalar function constructed from the dynamical variables, with variables expected to be correlated with Δ 𝐄 being attractive choices. Below, we consider several choices, including unity and the 2D spatial shear scalar σ=(σ_IJσ^IJ)^1/2. We have also tried the gradient of the angular velocity |∇Ω|, the sound speed c_s, and the fraction of the pressure that is gas or magnetic (i.e. P_G/(P_G+P_B) and P_B/(P_G+P_B)).
We also try the scalar RMS deviation of 𝐁 and of 𝐯_T from their azimuthal average, e.g.,
B_ rms = [\overline{|𝐁-\overline{𝐁}|^2}]^1/2 .
In an actual 2D simulation, these functions would not be available. Rather, new evolution variables would have to be introduced with evolution equations designed to suitably approximate them. However, designing such dynamics is only a worthwhile exercise if it is found that the actual RMS measures of non-axisymmetry provide useful ψ(𝐱,t) variables.
We also consider two alternatives for what constitutes the EMF residual: Δ𝐄 and Δ_m𝐄.
We produce two types of global models. For Model 3, we use data across all r, θ, and t to produce a single fit for the numbers α^ÎK̂, η^ÎK̂. For Model 4, we use the time average of the data and fit for the numbers α^ÎK̂, η^ÎK̂ using points across all r, θ.
We introduce a couple of variations. First, it is possible that the dynamics is very different in the jet and disk regions, so we create fits using data only within certain latitude regions. For this purpose, “disk” is defined as the region between π/3 < θ < 2π/3 and “jet” is defined as the regions θ<π/5 and θ>4π/5, so the intermediate coronal region is excluded from both. This division is slightly different from the “disk” and ”jet” designations for reporting Models 1 and 2 quality measures in Table <ref>, but the separation here plays a different role. It is used to determine which data is used for creating a given fit, rather than just which region is being averaged over for reporting a given norm after fitting is completed.
Also, one could argue that a dynamo model needn't capture small-scale features of Δ E^Î, so in some cases we try smoothing data before fitting. This effectively reduces the amount of independent data and can be expected to give higher χ^2. We convolve the data with a smoothing kernel, here chosen to be a triangle function in each direction centered on the point. Specifically, if the half-length of the triangle is N_ sm points, then variable f at point i, j (indexing location in r and θ, respectively) is smoothed as follows:
f_i,j → ∑_m=-N_ sm^N_ sm∑_n=-N_ sm^N_ smw_mn/𝒵 f_i+m,j+n ,
w_mn = (1-|m|/N_ sm+1)(1-|n|/N_ sm+1) ,
𝒵 = ∑_m=-N_ sm^N_ sm∑_n=-N_ sm^N_ sm w_mn .
Thus, data at each point is replaced by a weighted sum of its neighbors, and the amount of smoothing is determined by the half-length of the triangle, N_ sm.
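A minimal sketch of this smoothing operation follows (our own illustration; the boundary handling, which simply clips indices at the grid edges, is an assumption not specified in the text).

import numpy as np

def triangle_smooth(f, N_sm):
    """Smooth a 2D (r, theta) array with the normalized triangle kernel above."""
    nr, nth = f.shape
    out = np.zeros_like(f)
    offs = np.arange(-N_sm, N_sm + 1)
    w1d = 1.0 - np.abs(offs) / (N_sm + 1)
    w2d = np.outer(w1d, w1d)
    w2d /= w2d.sum()                        # the normalisation Z
    for i in range(nr):
        for j in range(nth):
            ii = np.clip(i + offs, 0, nr - 1)
            jj = np.clip(j + offs, 0, nth - 1)
            out[i, j] = np.sum(w2d * f[np.ix_(ii, jj)])
    return out

f = np.random.default_rng(3).normal(size=(64, 32))
print(np.allclose(triangle_smooth(f, 0), f))   # N_sm = 0 leaves the data unchanged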
Quality of fit measures are reported in Table <ref>.
Due to the large number of events used to fit a small number of global parameters, best-fit values to at least some dynamo coefficients can now be determined precisely. However, χ^2 reveals that none of these best fits are good matches for the data, at least not good enough according to our criteria for viability. We see many cases with χ^2 close to 1, which occurs because the functions 𝐁 and 𝐙 do not well overlap with 𝐄 (see Fig. <ref> and <ref>), and none of our chosen correlates ψ successfully corrects for this, so the best fit is obtained by making the model values 𝐀·𝐱^ fit small compared to the data 𝐲.
§ SUMMARY AND CONCLUSIONS
Modeling non-axisymmetric effects in accretion disks is challenging because, for some quantities of interest, the non-axisymmetric/fluctuating part dominates over the axisymmetric/mean part. Inside the disk, the mean field is often itself more like residual noise than a dominant field component, and it is not surprising that non-axisymmetric effects cannot be easily correlated to its local values. One crucial feedback of fluctuations onto the mean structure is the angular momentum transport, often captured via viscous hydrodynamics. Mean fields remain important in the polar region, and for the long-term maintenance and evolution of this region, and for the late-time oscillating toroidal mean field, 2D models will need to model the mean field while incorporating non-axisymmetric effects. Using results for a 3D Cartesian FMR GRMHD code, we have quantified the effective dynamo α_ dyn and η_ dyn. We then tested whether this model is a good fit to the residual EMF, considering a wide range of pointwise and global models. We have considered dynamo closures of growing complexity. 1) We considered models with Δ 𝐄 a function of 𝐁 and 𝐉, perhaps with added proportionality to some other function of variables available to the 2D mean system. 2) We generalized this to use proportionality to the RMS magnetic or kinetic energy, assuming that some evolution system involving only mean fields could be devised to roughly mimic these. 3) We generalized further and considered that only the high-m residual Δ_3 𝐄 need be fit by a closure involving mean fields. None of these provided satisfactory models (although, of course, evolving m=1 to m=3 modes of all variables would capture a significant fraction of the non-axisymmetric effects in itself even without a closure for Δ_3 𝐄, at the cost of being a low-azimuthal resolution 3D model rather than a 2D model). In particular, designing evolution equations for B_ RMS or v_ RMS does not seem promising because we have failed to find a way to use them effectively. More general and sophisticated models could still be attempted, e.g., providing evolution equations for
Δ 𝐄 itself, analogous to turbulence models that evolve the Reynolds stress.
On the other hand, this is a stricter test than what is required of viscous hydrodynamics, in that it is usually not considered necessary to show that the residual stress tensor is well-fitted by a viscous stress tensor with some kinematic viscosity that is either a constant or is some specified function of azimuthal mean variables. Rather, we insist that the viscosity capture one particular effect and calibrate a coefficient setting ν so that one effect has the appropriate magnitude. Similarly, if one identifies the key features needed for long-term evolution, e.g., reducing poloidal field strength in the disk while maintaining it in the poles, simple models can be calibrated to do this.
The main interest of this study is to have explored some new ways of quantifying dynamo effects in 3D simulations. The main limitation of this study is the narrowness of the data to which it has been applied. More could be learned by carrying out a similar analysis on a run that continues much longer for multiple dynamo cycles. It is also necessary to run with different accretion disk-black hole configurations: disks around lower-spinning black holes, different seed fields (e.g. toroidal), larger disks, disks that become magnetically arrested within a reasonable evolution time, radiatively efficient disks.
We acknowledge S. Bose, M. Forbes, F. Foucart, F. H. Nouri, and E. Most for useful conversations and encouragement, and we express deep gratitude to A. Tchekhovskoy for making the HARMPI code publicly available. M.D. gratefully acknowledges support from the NSF through grants PHY-2110287 and PHY-2407726 and support from NASA through grant 80NSSC22K0719. Z.B.E. gratefully acknowledges support from NSF awards AST-2227080, OAC-2227105, PHY-2110352, and PHY-2409654, as well as NASA awards ISFM-80NSSC21K1179 and TCAN-80NSSC24K0100.
B.J.K. gratefully acknowledges support from NASA LISA Preparatory Science award 80NSSC24K0360; this material is also based upon work supported by NASA under award number 80GSFC21M0002.
Computational resources for performing the GRMHD simulations were provided by the WVU Research Computing Thorny Flat HPC cluster, which is funded in part by NSF OAC-1726534.
Post-simulation analysis was performed in part on the Pleiades cluster at the Ames Research Center, with support provided by the NASA High-End Computing (HEC) Program.
|
http://arxiv.org/abs/2409.02156v1 | 20240903180000 | SymTFT Fans: The Symmetry Theory of 4d N=4 Super Yang-Mills on spaces with boundaries | [
"Iñaki García Etxebarria",
"Jesús Huertas",
"Angel M. Uranga"
] | hep-th | [
"hep-th"
] |
IFT-UAM/CSIC-24-126
Department of Mathematical Sciences, Durham University, Durham, DH1 3LE, United Kingdom
Instituto de Física Teórica IFT-UAM/CSIC, C/ Nicolás Cabrera 13-15, Campus de Cantoblanco, 28049 Madrid, Spain
The SymTFT construction is an efficient way of studying the
symmetries of Quantum Field Theories. In this paper we initiate the
study of the SymTFT construction for interacting theories on spaces
with boundaries, in the concrete example of d=4 𝒩=4 Super
Yang-Mills theory with Gaiotto-Witten boundary conditions. The
resulting SymTFT has a number of peculiarities, most prominently a
fan-like structure with a number of different SymTFT sectors joined
by gapless interfaces that merge at the Gaiotto-Witten boundary
SCFT.
SymTFT Fans
The Symmetry Theory of 4d 𝒩=4 Super Yang-Mills on spaces with boundaries
Iñaki García Etxebarria, Jesús Huertas, Angel M. Uranga
September 9, 2024
======================================================================================
§ INTRODUCTION
In the Euclidean formulation of Lorentz-invariant Quantum Field
Theories, internal symmetries give rise to topological operators of
codimension one. During the last few years many papers, starting with
<cit.>, have convincingly shown that it is very
illuminating to think of all the topological operators present
in any given Quantum Field Theory as symmetries, sometimes referred to
as categorical symmetries to keep in mind both that we are
significantly broadening the traditional definition of symmetry, and
that category theory is a natural tool for talking about the
topological QFTs describing the topological subsector of the
theory. Generic topological operators in QFT are not necessarily
codimension one, and the fusion rules for topological operators are
not necessarily group-like, so the resulting algebraic structures can
be very rich, and encode very subtle information about the Quantum
Field Theory at hand. We refer the reader to any of the excellent
recent reviews
<cit.>
for surveys of different aspects of this rapidly developing field.
We will focus on a recent proposal for capturing the categorical
symmetries of a theory known (somewhat imprecisely in our case, for
reasons we will explain momentarily) as the SymTFT construction
<cit.>. In
the simplest variant of this approach, the QFT
is realised in terms of the interval compactification of a
topological field theory (the
SymTFT), with gapped boundary conditions at one end, and
gapless boundary conditions at the other end. The SymTFT construction
arises very naturally from geometric engineering in string theory and
holography, and indeed we will use holography in this work to analyse
the SymTFT construction in a regime that has not been much studied
before: d-dimensional theories on manifolds with boundaries. As we
will see, in this context it is still possible to understand the
symmetries of the d-dimensional theory on a manifold with boundary
in terms of a (d+1)-dimensional construction on a space with
corners, where additional gapless degrees of freedom live.
The resulting bulk is not fully topological, so it would perhaps be
better to talk about a symmetry theory instead (as emphasised
in a related context in <cit.>, for instance), but in
the cases we study in this paper the non-topological nature of the
bulk ends up being fairly mild: the (d+1)-dimensional theory is
topological except on non-topological d-dimensional interfaces. So
by a small abuse of language we will still refer to the bulk as the
SymTFT.
The specific theories we will focus on are d=4 𝒩=4 SYM on a
space with boundary. We choose the boundary conditions preserving half
of the superconformal symmetry which have been studied by Gaiotto and
Witten in <cit.>, and which we briefly
review in section <ref>. Our basic tool for understanding the
SymTFT for such configurations is the holographic dual, constructed in
<cit.>,
and reviewed in section <ref>. We extract the SymTFT from
the holographic dual in section <ref>. The resulting SymTFT
is similar to the SymTree construction introduced in
<cit.>, although with a number of differences that we
describe in detail in section <ref>. A striking feature of
our setup is the various gapless domain walls connecting the different
SymTFTs are arranged in a fan configuration, with the
three-dimensional Gaiotto-Witten BCFT living at the tip of the
fan. See figure <ref> for a sketch of the
SymTFT.
A number of expected features of the SymTFT can be understood from
brane physics in the original string theory setup. We develop various
aspects of the relation between these two viewpoints in
section <ref>, in particular the effect of translating
topological operators between different sectors in the SymTFT
fan. Section <ref> goes further in this direction,
studying the SymTFT interpretation of bringing 7-branes from infinity
— an operation which is fairly natural and well understood from the
brane point of view, but leads to some puzzles (which we resolve)
when reinterpreted from the SymTFT point of view. Finally, in
<ref> we generalise the discussion in the previous
sections by allowing for orientifolds localised at the boundary.
In this work we exploit the supergravity duals of the QFT with
boundary to learn about its topological sector and extract the SymTFT,
but our results are also relevant in the reverse direction. Indeed,
gravitational solutions with End of the World (ETW) boundary
configurations, dubbed dynamical cobordisms
<cit.>[For
related ideas, see
<cit.>
for early references, and
<cit.>
for recent works.], provide dynamical realisations of the Swampland
Cobordism Conjecture <cit.>, generalising the concept
of bubbles of nothing <cit.> (see
<cit.>
for recent developments). In this spirit, the ETW configurations we
will consider were in fact described as dynamical cobordisms of
AdS_5×^5 in <cit.>. Hence, our in-depth
understanding of the topological structure of the gravitational bulk
theory and its interplay with the topology of the ETW configuration
are a first step towards understanding it for more general dynamical
cobordisms, providing a valuable tool for the study of topological
properties of general cobordism defects ending spacetime.
Note added: As we were finishing this paper we became aware of
a number of upcoming papers by other groups discussing various aspects
of SymTFTs for theories on spacetimes with boundaries
<cit.>. We are very
thankful to the authors of these papers for agreeing to coordinate
submission.
§ GAIOTTO-WITTEN CONFIGURATIONS
In this section we will quickly review — focusing on the ingredients
most relevant for the discussion of the SymTFT — the boundary
conditions studied in <cit.>, which
preserve half of the superconformal symmetry, namely OSp(4|4), of
d=4=4(N) SYM. These boundary conditions involve a 3d
SCFTs (BCFT_3) on the boundary of the 4d theory. In these works the
CFT_4/BCFT_3 systems were realised as systems of N semi-infinite
D3-brane ending on a set of NS5- and D5-branes (with finite segments
of D3-branes suspended between them), thus realising a quiver gauge
theory of the kind introduced in <cit.>, which under
suitable conditions flows in the IR to an interacting 3d =4
SCFT.
§.§ The brane construction and its field theory
We consider N D3-branes along the direction 012, of semi-infinite
extent in the direction 3 (namely x^3>0, without loss of
generality), and at the origin in the remaining directions. These
D3-branes end on stacks of NS5-branes, which span the directions 012
456, and at the origin in the direction 3 789, and stacks of D5-branes
spanning 012 789, and at the origin in 3 456. Actually, in order to
define the D3-brane boundary on the NS5- and D5-brane system, it is
more appropriate to consider the NS5- and D5-branes to be slightly
separated in the direction x^3, and to admit D3-branes suspended
between them. Once a specific set of constraints, to be reviewed
below, are satisfied, one can take the limit of coincident 5-branes,
which implements the flow to the strongly coupled IR BCFT_3 boundary
conditions for the CFT_4. Because the UV theory, defined by the
Hanany-Witten brane configuration, preserves 8 supercharges (3d
=4), the resulting BCFT preserves maximal OSp(4|4)
superconformal symmetry. Note that the R-symmetry subgroup
SO(3)× SO(3) of SO(6) is manifest as the rotational symmetry
in the 3-planes 456 and 789.
A key role in the brane configuration is played by the linking numbers
of the 5-branes. These can be defined as the total monopole charge on
the 5-brane worldvolume theory <cit.>, and can therefore
be measured at infinity of the 5-brane worldvolume. Hence they are
invariant under changes in the ordering of branes and the
corresponding brane creation effects. In practice, given an ordering
of the 5-branes, the linking number of a 5-brane is given by the net
number of D3-branes ending on the 5-brane from the right (because
D3-brane boundaries are (singular) monopoles on the 5-brane
worldvolume theory) plus the total number of 5-branes of the other
kind (i.e. NS5 vs D5) to the left of the 5-brane (because they can
lead to D3-branes ending from the right via Hanany-Witten brane
creation processes).
There are two conditions for the boundary brane configuration to
define a supersymmetric BCFT_3: (1) Any D5-brane on which a net
non-zero number of D3-branes ends from the right is located to the
right of all the NS5-branes. This constraint ensures that the brane
configurations have an interpretation in gauge theory, with the
NS5-branes with suspended D3-branes among them providing a quiver
gauge theory, and the D5-branes with non-positive net number of
D3-branes yielding flavours for some of the nodes. (2) For each kind
of 5-brane (NS5 or D5) the linking numbers are non-decreasing from
left to right. For D5-branes, this is automatic except for the
D5-brane to the right of all NS5-branes, and for these the condition
guarantees the absence of decoupled degrees of freedom in the limit of
coincident 5-branes; the condition for NS5-branes then follows from
S-duality.
Related to the above, linking numbers play a role in non-abelian IR enhancement of U(1) global (0-form) flavour symmetries of the 3d UV theory, which are realised as symmetries on the worldvolume of the 5-branes. In the strong coupling regime, n 5-branes of the same kind and with the same linking number lead to monopole operators in the field theory which enhance the symmetry to U(n). Hence linking numbers provide the invariant quantities characterising the BCFT_3.
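As a concrete illustration of the counting rule above, here is a small Python sketch (our own toy bookkeeping, with an assumed left-to-right ordering and an assumed sign convention for the net number of D3-branes; it is not code from the references).

def linking_numbers(branes):
    """branes: list of (kind, net_D3_from_right), kind in {'NS5', 'D5'},
    ordered from left to right along x^3.  Linking number = net D3-branes
    ending from the right + number of 5-branes of the other type to the left."""
    result = []
    for pos, (kind, net_d3) in enumerate(branes):
        other_to_left = sum(1 for k, _ in branes[:pos] if k != kind)
        result.append((kind, net_d3 + other_to_left))
    return result

# N D3-branes ending one-per-brane on N NS5-branes (no D5-branes): every
# NS5-brane gets linking number 1, as in the boundary condition discussed below.
print(linking_numbers([("NS5", 1), ("NS5", 1), ("NS5", 1)]))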
Let us provide an extra intuition of the interpretation of the linking number in defining the boundary conditions. Consider the case of a single kind of 5-brane, e.g. D5-branes (NS5-branes admit a similar description due to S-duality), organised in m stacks, labelled by an index b, with multiplicities m_b of D5-branes with equal linking number L_b. Since in this case there are no NS5-branes, the linking numbers are realised in terms of D3-branes ending on the D5-branes from the right. Each D5-brane in the b^th stack has L_b D3-branes ending on it, with boundary conditions associated to the L_b-dimensional irreducible representation R_L_b of SU(2) (namely, the SO(3) associated to 789), which is realised by the three (matrix-valued) complex scalars of the 4d =4(N) SYM theory, of the form [X^i,X^j]∼ϵ^ijk X^k. Hence, for the complete system of D5-brane stacks, the boundary condition is associated to a (reducible, in general) representation of SU(2)
R_N=⊕_b=1^m m_b R_L_b .
The SU(2) commutation relation can be regarded as describing the
D3-branes puffing up into non-commutative funnels corresponding to the
D5-branes, whose width is related to the number of D3-branes, i.e. the
linking numbers. The ordering of linking numbers mentioned above
ensures that narrow funnels fit inside wider funnels without
intersections which would introduce additional degrees of freedom.
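To make the above boundary condition concrete, here is a minimal numerical sketch (our own illustration; we pick an anti-hermitian normalisation in which [X^i,X^j]=ϵ^ijk X^k holds exactly, whereas the text fixes this relation only up to normalisation) that builds the L-dimensional irreducible representation R_L and checks the commutation relation defining the fuzzy funnel.
```python
import numpy as np

def su2_irrep(L):
    """Spin-(L-1)/2 generators J1, J2, J3 in the L-dimensional irrep R_L,
    satisfying [J_i, J_j] = i eps_{ijk} J_k."""
    s = (L - 1) / 2
    m = np.arange(s, -s - 1, -1)                      # weights s, s-1, ..., -s
    Jp = np.zeros((L, L), dtype=complex)              # raising operator J_+
    for a in range(L - 1):
        Jp[a, a + 1] = np.sqrt(s * (s + 1) - m[a + 1] * (m[a + 1] + 1))
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), np.diag(m).astype(complex)

L = 4                                   # e.g. four D3-branes ending on one D5-brane
X = [-1j * J for J in su2_irrep(L)]     # anti-hermitian funnel scalars
assert np.allclose(X[0] @ X[1] - X[1] @ X[0], X[2])   # [X^1, X^2] = X^3 (and cyclic)
```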
An efficient way to describe these systems is to focus on the 3d
=4 theories realised on the D3-branes suspended among the
5-branes, and to describe the semi-infinite D3-branes as the gauging
of a global symmetry of the BCFT_3. One prototypical example is that
of the T[SU(N)] theories <cit.>, which are defined as
the IR fixed point of the gauge theory on a system with N NS5-branes
and N D5-branes, realising the gauge theory
U(1) - U(2) - … - U(N-1) -[(N)] ,
with bifundamental hypermultiplets of consecutive gauge factors, and N hypermultiplets in the fundamental of the U(N-1) group, acted on by the (N) global symmetry. Namely, there are N NS5-branes separated by intervals of (an increasing number of) suspended D3-branes, and N D5-branes between the two rightmost NS5-branes, hence giving flavours to the U(N-1) factor.
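As a simple bookkeeping aid (our own sketch; the ranks and flavour assignments are just those listed above, and the bifundamental hypermultiplets between consecutive nodes are left implicit), the T[SU(N)] quiver data can be generated as follows.
```python
# Sketch: gauge ranks and fundamental flavours of the T[SU(N)] quiver
# U(1) - U(2) - ... - U(N-1) - [SU(N)], with N flavours on the U(N-1) node.
def t_su_n_quiver(N):
    ranks = list(range(1, N))              # U(1), U(2), ..., U(N-1)
    flavours = [0] * (N - 2) + [N]         # only the last node sees the N flavours
    return ranks, flavours

assert t_su_n_quiver(4) == ([1, 2, 3], [0, 0, 4])
```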
The gauging of the (N) symmetry is implemented by moving the D5-branes infinitely to the right, introducing N semi-infinite D3-branes in crossing the last NS5-brane. We thus get a semi-infinite stack of N D3-branes ending on a set of N NS5-branes, all with linking number 1 (hence with an enhanced U(N) flavour symmetry). A similar configuration, obtained by S-duality and the exchange of 456 and 789 (i.e. 3d mirror symmetry), exists for N D3-branes ending on a set of N D5-branes with linking number 1. Hence this provides a boundary condition of the kind (<ref>), with the N-dimensional representation being trivial (i.e. N copies of the trivial 1-dimensional representation).
Let us pause for an interesting observation nicely illustrated by this example. The symmetries of fairly general 3d =4 theories (including those arising from NS5- and D5-brane configurations) were considered in <cit.>, in particular the enhanced 0-form flavour symmetries just discussed, as well as the 1-form symmetries, so we may apply their results to our setup. In particular, considering the above T[SU(N)] theories, the 1-form symmetry obtained upon gauging the flavour SU(N) (meaning (N) with global structure of SU(N)) was identified in <cit.> to be _N. Hence, despite the fact that the D5-branes introduce flavours in the fundamental of the 4d SU(N), the electric 1-form symmetry is not fully broken, but rather there survives a diagonal combination of it with a _N in the enhanced flavour SU(N) on the D5-branes. Similarly, for other gaugings SU(N)/_p with N=pk, the surviving electric 1-form symmetry is _k <cit.>.
The generalisation of the T[SU(N)] theories to the general configurations of D5-branes described in (<ref>) is known as the T_ρ[SU(N)], where ρ is a partition of N given by
N= 1+…+1_m_1 times+2+…+2_m_2 times+…
i.e. with m_b parts equal to L_b, according to the decomposition of the representation (<ref>).
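For bookkeeping purposes, a small helper (illustrative only; the stack data at the end are hypothetical) assembling the partition ρ from the multiplicities and linking numbers, and checking that it indeed partitions N:
```python
# Sketch: the partition rho of N determined by D5-brane stacks {(m_b, L_b)}.
def partition_from_stacks(stacks):
    """stacks: list of (multiplicity m_b, linking number L_b)."""
    return sorted((L for m, L in stacks for _ in range(m)), reverse=True)

stacks = [(2, 3), (1, 2), (4, 1)]          # hypothetical: N = 2*3 + 1*2 + 4*1 = 12
rho = partition_from_stacks(stacks)
assert rho == [3, 3, 2, 1, 1, 1, 1] and sum(rho) == 12
```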
As a concrete example, we may consider the theory corresponding to the
trivial partition ρ of N into 1 part equal to N. We thus have
1 D5-brane with linking number N, namely the N D3-branes end on 1
D5-brane exploiting the N-dimensional irreducible representation of
SU(2). By extending the arguments in <cit.>, one
can check that upon gauging of SU(N) the electric 1-form symmetry of
the theory is _N. Analogously the 1-form symmetry for
SU(N)/_p gauging is _k with k=N/p. We will recover this
pattern from the SymTFT in section <ref>.
There is a generalisation of the above class to the case with both
NS5- and D5-branes. The starting point is to also introduce stacks of
n_a NS5-branes with linking numbers K_a, in addition to the
earlier stacks of D5-branes, obeying the rules provided above, to
define a 3d =4 theory with no semi-infinite D3-branes. Then one
can rearrange the configuration by locating all the D5-branes to the
left of the NS5-branes. We end up with a set of N D3-branes on a
segment, ending on D5-branes from the left (resp. NS5-branes from the
right) according to a partition ρ (resp. ρ̂) of N,
equivalently the multiplicities and linking numbers of the
5-branes. The data (ranks and fundamental flavours of all the unitary
gauge factors in the quiver) of the original gauge theories can be
obtained from the partitions. These gauge theories, under the
conditions already explained, flow to IR 3d SCFTs known as
T_ρ^ρ̂[SU(N)], and provide BCFT_3 boundary conditions
by gauging their (N) global symmetry.
Let us be more explicit on this last point. In addition to the above defined linking number, it is useful to introduce an equivalent set of linking numbers K̃_a, L̃_b as the net number of D3-branes ending on the 5-brane from the right minus the total number of 5-branes of the other kind to the right of the 5-brane. These new versions differ from the old ones just in an overall shift by the total numbers of D5- and NS5-branes, respectively. In the 3d T_ρ^ρ̂[SU(N)] theories, we have N D3-branes ending on the NS5-branes from the right and on the D5-branes on the left, hence the total sum of NS5-brane linking numbers ∑_a n_aK_a is equal to N, and similarly for the total sum of D5-brane linking numbers ∑ m_bL̃_b=-N. Upon gauging the (N) flavour symmetry, we have an extra set of N semi-infinite D3-branes ending on the 5-brane set from the right, hence the condition that the BCFT_3 provides a boundary condition for the 4d =4su(N) SYM theory is
N=∑_a n_aK_a + ∑_b m_bL̃_b .
These ingredients will be manifest in the gravitational dual description in the next section, and will be inherited as key ingredients in the SymTFT.
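As a quick sanity check of this sum rule (a sketch using the two boundary conditions discussed earlier in this section; the value of N is chosen arbitrarily for illustration):
```python
# Check of N = sum_a n_a K_a + sum_b m_b Ltilde_b once the SU(N) symmetry is gauged.
def total_d3(ns5_stacks, d5_stacks):
    """ns5_stacks: list of (n_a, K_a); d5_stacks: list of (m_b, Ltilde_b)."""
    return sum(n * K for n, K in ns5_stacks) + sum(m * L for m, L in d5_stacks)

N = 5
# Gauged T[SU(N)]: N NS5-branes of linking number 1, no D5-branes left over.
assert total_d3([(N, 1)], []) == N
# The single D5-brane with linking number N discussed above, with no NS5-branes.
assert total_d3([], [(1, N)]) == N
```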
Before that, we would like to mention that the Hanany-Witten brane configurations can be used not only to provide boundary conditions for 4d =4(N), but also more general configurations. These include configurations of NS5- and D5-branes separating two stacks of semi-infinite D3-branes (a particular case of Janus configurations <cit.>), which we will exploit in the discussion of orientifold theories in section <ref>. In addition, one can also construct theories in which there are several configurations of NS5- and D5-branes separated by large distances, with sets of D3-branes suspended between consecutive configurations. These have gravitational duals including multiple AdS_5×^5 asymptotic regions, and will be briefly mentioned in section <ref>.
§.§ The holographic dual: End of the World branes
In this section, we will briefly review the gravitational duals of the brane configurations described in the last section. They are given by a particular class of explicit 10d supergravity backgrounds studied in <cit.> (see also <cit.> for recent applications).
The most general supergravity solutions preserving 16 supersymmetries
and SO(2,3)× SO(3)× SO(3) symmetry were explicitly
constructed in <cit.>. They are
gravitational duals of Hanany-Witten brane configurations of NS5- and
D5-branes with stacks of D3-branes as in the previous section
<cit.>. The geometries in general have a “bagpipe”
structure <cit.> (see figure <ref>), namely
they look like AdS_4×^6, with ^6 a 6d compact manifold
(the “bag”), save for a number of AdS_5×^5 throats (the
“pipes”) sticking out of it. We will eventually focus on the
specific case with only one AdS_5×^5 ending on the bag,
which provides the gravity dual of 4d =4(N) SYM on a 4d
spacetime with boundary.
These solutions have the structure of a fibration of AdS_4×_1^2×_2^2 over an oriented Riemann surface Σ. The ansatz for the 10d metric is
ds^2= f_4^2 ds^2_AdS_4+f_1^2 ds^2__1^2+f_2^2 ds^2__2^2+ds^2_Σ .
Here f_1, f_2, f_4 are functions of a complex coordinate w of Σ, in terms of which
ds^2_Σ=4ρ^2 |dw|^2 ,
for some real function ρ. There are also non-trivial backgrounds for the NSNS and RR 2-forms and the RR 4-form; we will provide only the necessary results, referring the reader to the references for further details.
There are closed expressions for the different functions in the above metric, describing the BPS solution. As an intermediate step, we define the real functions
W ≡ ∂_w h_1∂_w̅h_2+∂_w h_2∂_w̅h_1 , N_1 ≡ 2h_1h_2|∂_w h_1|^2-h_1^2 W , N_2 ≡ 2h_1h_2|∂_w h_2|^2-h_2^2 W
in terms of the functions h_1,h_2 to be specified below. The dilaton is given by
e^2Φ=N_2/N_1 ,
and the functions are given by
ρ^2=e^-1/2Φ√(N_2|W|)/h_1h_2 , f_1^2=2e^1/2Φ h_1^2√(|W|/N_1) , f_2^2=2e^-1/2Φ h_2^2√(|W|/N_2) , f_4^2=2e^-1/2Φ√(N_2/|W|) .
In the following we focus on the solutions describing a single asymptotic AdS_5×^5 region, as befits the gravitational dual of a stack of semi-infinite D3-branes ending on a configuration of 5-branes, i.e. a 4d =4(N) SYM theory with a boundary on which it couples to a Gaiotto-Witten BCFT_3.
The Riemann surface Σ can be taken to correspond to a quadrant w=r e^iφ, with r∈ (0,∞) and φ∈[π/2,π]. At each point of the quadrant we have AdS_4×_1^2×_2^2, fibered such that ^2_1 shrinks to zero size over φ=π (negative real axis) and ^2_2 shrinks to zero size over φ=π/2 (positive imaginary axis).
Hence the boundaries of the quadrant Σ are actually not boundaries of the full geometry, which closes off smoothly over those edges. The general solution includes a number of 5-brane sources, describing the NS5- and D5-branes, and spikes corresponding to asymptotic regions AdS_5×^5, which describe 4d =4 SYM sectors on D3-branes.
All these features are encoded in the quantitative expressions for the functions h_1, h_2, which, for this class of solutions, have the structure
h_1 = 4 Im(w)+2∑_b=1^md̃_b log(|w+il_b|^2/|w-il_b|^2) ,
h_2 =-4 Re(w)-2∑_a=1^n d_a log(|w+k_a|^2/|w-k_a|^2) .
Here the supergravity solution includes 5-brane sources, which are
located at specific points in the two edges of the quadrant. Concretely the NS5-branes
are along the φ=π axis, with stacks of multiplicity n_a at
positions w=-k_a, and the D5-branes are along the φ=π/2
axis, with stacks of multiplicity m_b at positions w=il_b. The
multiplicities of 5-branes are related to the parameters d_a,
d̃_b via
n_a= 32π^2 d_a∈ℤ , m_b= 32π^2 d̃_b∈ℤ .
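The closed-form expressions above are straightforward to evaluate numerically. The following sketch (our own; the puncture data k_1=l_1=1 with single branes n_1=m_1=1 are hypothetical, and the holomorphic derivatives are taken by finite differences) evaluates h_1, h_2, the dilaton e^2Φ=N_2/N_1 and the AdS_4 warp factor f_4^2 at a sample point of the quadrant.
```python
import numpy as np

# Hypothetical data: one NS5 puncture at w = -k and one D5 puncture at w = i*l.
k, l = 1.0, 1.0
d, dt = 1.0 / (32 * np.pi**2), 1.0 / (32 * np.pi**2)       # n_1 = m_1 = 1

def h1(w):
    return 4 * w.imag + 2 * dt * np.log(abs(w + 1j * l)**2 / abs(w - 1j * l)**2)

def h2(w):
    return -4 * w.real - 2 * d * np.log(abs(w + k)**2 / abs(w - k)**2)

def dw(f, w, eps=1e-6):
    """Holomorphic derivative d/dw = (d/dx - i d/dy)/2 of a real function f."""
    fx = (f(w + eps) - f(w - eps)) / (2 * eps)
    fy = (f(w + 1j * eps) - f(w - 1j * eps)) / (2 * eps)
    return (fx - 1j * fy) / 2

w = -0.5 + 0.5j                # sample point in the quadrant pi/2 < arg(w) < pi
W = 2 * (dw(h1, w) * np.conj(dw(h2, w))).real        # d h1 dbar h2 + d h2 dbar h1
N1 = 2 * h1(w) * h2(w) * abs(dw(h1, w))**2 - h1(w)**2 * W
N2 = 2 * h1(w) * h2(w) * abs(dw(h2, w))**2 - h2(w)**2 * W
Phi = 0.5 * np.log(N2 / N1)
f4_sq = 2 * np.exp(-Phi / 2) * np.sqrt(N2 / abs(W))
print("e^{2 Phi} =", N2 / N1, "   f_4^2 =", f4_sq)
```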
Specifically, the supergravity solution near one of these punctures is locally of the form of an NS5-brane spanning AdS_4×^2_2 (respectively a D5-brane
spanning AdS_4×^2_1), with n_a units of NSNS 3-form flux
(respectively m_b units of RR 3-form flux) on the ^3
surrounding the 5-brane source. The latter is easily visualised, by
simply taking a segment in Σ forming a half-circle around the
5-brane puncture, and fibering over it the ^2 fibre shrinking at
the endpoints, so that we get a topological ^3, see figure
<ref>. Hence, near each 5-brane puncture, the metric factorises as
AdS_4×^2×^3 fibered along a local radial
coordinate r̃ parametrising the distance to the puncture.
Asymptotically, at r→∞ far away from the 5-branes, the two
^2's combine with the coordinate φ to form a ^5, so
in this limit we have AdS_5×^5, with N units of RR 5-form
flux. The parameter N will shortly be related to other quantities
in the solution. The ^5 is manifest in figure <ref>,
by fibering _1^2×_2^2 over an arc φ∈[π/2,π]
at fixed r>|k_a|,|l_b|, so that each of the two ^2 shrinks to
zero size at one of the endpoints of the arc, leading to a topological
^5.
The ^5 can be deformed to smaller values of r, and the 5-form
flux is preserved, except when the arc crosses one of the 5-brane
punctures. At this point, the 5-form flux can escape through the
5-brane and decreases, leaving less flux in the inner ^5 shell;
this is the gravitational manifestation that some number of D3-branes
ends on the 5-brane. Hence, the change in the 5-form flux encodes the
linking number of the 5-branes in the corresponding stack. Following
<cit.>, in the ^2×^3 geometry around the
stacks of n_a NS5-branes and of m_b D5-branes, there are
non-trivial integrals
∫_^2_2 C_2=K_a , ∫_^3_1 H_3=n_a ; ∫__1^2 B_2=L̃_b , ∫_^3_2 F_3=m_b .
Note that here we have used the usual linking numbers K_a for the NS5-branes, but the modified linking numbers L̃_b, introduced in section <ref> for the D5-branes. We recall that the linking numbers K_a, L_b (respectively K̃_a, L̃_b) of 5-branes are defined as the net number of D3-branes ending from the right on the 5-brane plus (respectively, minus) the number of 5-branes of opposite kind to the left (respectively, to the right) of the 5-brane.
With this proviso, the change in the flux
F̃_5=F_5+B_2F_3-C_2H_3 ,
with F_5=dC_4, upon crossing an NS5-brane (resp. D5-brane) is given by n_aK_a (resp. m_bL̃_b), with no sum over indices. Hence the stacks gather the 5-branes with same linking number, and their (n_a), (m_b) gauge symmetry is the holographic dual of the enhanced non-abelian flavour symmetries (or their duals) for the BCFT_3 mentioned in <ref>.
An important point about (<ref>) is that the integrals of NSNS and RR 2-form potentials on the ^2's are essentially integers, related to linking numbers of the brane configuration. This may seem striking, because periods of p-form potentials are not topological quantities, and can be changed continuously. Hence, we clarify that the values in (<ref>) correspond to a particular choice for B_2, C_2 and C_4, specifically leading to a vanishing F_5=dC_4 on the asymptotic ^3×^2's associated to the 5-branes, see <cit.>. Any topological deformation of the configuration, such that the periods of B_2, C_2 are no longer integer, implies a change in C_4 such that F_5 is no longer vanishing on ^3×^2, and compensates for the difference, so that we recover the same flux of F̃_5 in (<ref>). Thus, the invariant statement is that the 5-form flux jumps by the corresponding integer quantity upon crossing the 5-branes.
The flux over the ^5's decreases in such crossings until we reach
the region r<|k_a|,|l_b|, in which there is no leftover flux, so
that the ^5 shrinks and spacetime ends in a smooth way at r=0,
see figure <ref>. This implies that the relation between
the asymptotic 5-form flux N and the 5-brane parameters is
N=∑_a n_a K_a+∑_b m_bL̃_b ,
nicely dovetailing (<ref>).
A final point we had skipped is the relation between the location of the 5-brane punctures k_a, l_b and the linking numbers K_a, L̃_b, i.e. the 2-form fluxes (<ref>). The positions of the 5-brane punctures are determined by the expressions
K_a=32π(k_a +2∑_bd̃_b arctan(k_a/l_b)) , L̃_b=32π(l_b +2∑_a d_a arctan(k_a/l_b)) .
We refer the reader to the references for a derivation of this result and other related details.
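Taking the normalisations in the last expressions at face value, the puncture positions can be found numerically from the linking numbers; here is a minimal sketch (our own, for a single NS5-brane and a single D5-brane, n_1=m_1=1, with illustrative linking numbers K_1=L̃_1=1) using a standard root finder.
```python
import numpy as np
from scipy.optimize import fsolve

n, m = 1, 1                                # one NS5-brane and one D5-brane
K, Lt = 1, 1                               # target linking numbers K_1, Ltilde_1
d, dt = n / (32 * np.pi**2), m / (32 * np.pi**2)

def equations(x):
    k, l = x
    return [32 * np.pi * (k + 2 * dt * np.arctan(k / l)) - K,
            32 * np.pi * (l + 2 * d * np.arctan(k / l)) - Lt]

k_sol, l_sol = fsolve(equations, x0=[0.01, 0.01])
print("punctures at w = -", k_sol, " and w = i*", l_sol)
```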
As we had anticipated, and as emphasised in <cit.>, the above supergravity background describes a bordism to nothing of the asymptotic AdS_5×^5 via an End of the World (ETW) configuration <cit.>. We review this perspective in what follows, also as a recap of the fundamental parameters determining the structure of the solution, and their interplay with the field theory parameters.
Let us describe the asymptotic AdS_5(×^5) region in the Poincaré patch, so that the holographic boundary (which is a half-space semi-infinite in the direction 3) is manifest. The AdS_4 geometry in the supergravity solution corresponds to an AdS_4 slicing of AdS_5, where the slices of the foliation correspond to radial lines sticking out of the holographic 3d boundary of AdS_4, see figure <ref>. The slices of this foliation are labelled by the coordinate r=|w| in the Riemann surface Σ of the supergravity solution. The asymptotic region r→∞ in the 10d supergravity solution corresponds to the radial lines closer to the 4d holographic boundary, and corresponds to an AdS_5 (×^5) with N units of 5-form flux.
As one moves inward in the Riemann surface to values of r corresponding to the 5-brane puncture locations, the 5d geometry hits 5-brane sources spanning AdS_4 slices. In the Poincaré picture they correspond to lines sticking out radially from the 3d boundary of the 4d holographic boundary in a fan-like structure. The 5-brane sources actually correspond to local internal geometries ^2×^3, with background fields (<ref>), namely 3-form fluxes n_a or m_b (encoding the 5-brane multiplicity for NS5- and D5-branes, respectively), and 2-form backgrounds K_a, L̃_b. The latter determine the explicit angular locations of the 5-brane stacks in Poincaré coordinates (equivalently, the radial positions of the 5-brane punctures in Σ in the 10d solution) via (<ref>). The 5-form flux over the ^5 decreases upon crossing the 5-branes by amounts n_aK_a or m_bL̃_b, respectively. Eventually, the 5-form flux is completely peeled off and at a critical angle in Poincaré coordinates the ^5 shrinks and spacetime ends. Hence, the solution describes an End of the World (ETW) configuration.
A final word of caution about the above 5d picture. As is familiar,
the scales of the internal geometries are comparable to those of the
corresponding AdS geometries [This is known as `no scale
separation' in the swampland literature <cit.> (see
<cit.> for a review and further references).]. In
our setup this holds both for the asymptotic AdS_5×^5 and
the AdS_4×^6 “bag” dual of the BCFT_3. As usual in
holography, one may often still use a lower dimensional perspective,
by removing the internal space via a dimensional reduction, rather
than a physical effective theory below a decoupled KK scale. This is
extremely useful in cases where the dimensional reduction is a
consistent truncation, as in the case of the asymptotic
AdS_5×^5, where the 5d perspective has been usefully
exploited for over two decades. To some extent, as discussed in detail
in <cit.>, this does not apply to the ETW configuration, due to its reduced symmetry, and the
lower-dimensional perspective has a more limited applicability. This
will however not pose any problem for our analysis, which is focused
on topological aspects related to the generalised symmetries of the
QFT and its SymTFT as obtained from the topological sector of the
holographic realisation. We turn to this study in the next section.
§ THE SYMTFT
A natural way to obtain the SymTFT in d-dimensional quantum field
theories with a (d+1)-dimensional holographic dual is by extracting
the topological sector of the dual
<cit.>. (More
generally, it is possible to perform a similar analysis for
geometrically engineered QFTs without requiring the existence of a
tractable large N gravitational dual, see for instance
<cit.>
for works in this direction.)
In our present context of the 4d =4(N) SYM with
Gaiotto-Witten BCFT_3 boundary conditions, we may use this same
strategy by analysing the holographic dual reviewed in section
<ref>. Indeed, in the asymptotic AdS_5×^5
region the recipe amounts to reducing the 10d Chern-Simons coupling of
type IIB string theory on an ^5 with N units of RR 5-form flux,
which, as studied originally in <cit.>, leads to the 5d
SymTFT[In this paper it will be sufficient for us to think of
B_2 and C_2 as differential forms, as opposed to cochains in
differential cohomology or K-theory.]
S_5d=N/2π∫__5 B_2 dC_2 ,
where we have allowed for a general 5d manifold _5, and we work in
the convention where the supergravity fields B_2, C_2 are
2π-periodic. (In a number of places below we will just highlight
the parametric dependence on N or similar parameters fixing the
order of the discrete symmetries.)
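For later reference, let us recall the basic operator content of this BF-type action (a standard statement in the conventions above, included here for convenience). The fundamental topological operators are the holonomies
U_B(γ_2)=e^i∮_γ_2 B_2 , U_C(γ'_2)=e^i∮_γ'_2 C_2 ,
on 2-cycles γ_2, γ'_2 of _5. The equations of motion N dB_2=0=N dC_2 render these holonomies ℤ_N-valued, and linked insertions braid as
⟨ U_B(γ_2) U_C(γ'_2) ⟩ = e^2π i Link(γ_2,γ'_2)/N ,
up to an orientation-dependent sign in the exponent; this is the origin of the e^2π i/N phases by which charged line operators transform in later sections.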
In the case of the holographic dual of the 4d SCFT with BCFT_3
boundary conditions, we encounter a crucial new structure, which we
dub the SymTFT Fan, as we approach the ETW configuration. Indeed, the
holographic 5d bulk is split into several pieces by the presence of
the 5-branes, which moreover carry non-topological field theories on
their worldvolume. Each piece of the 5d bulk is still associated to a
compactification of the 10d theory on ^5, albeit with decreasing
number of units of RR 5-form flux, until the flux is eventually peeled
off and the ^5 can safely shrink, see figure <ref>,
in the spirit of the cobordism to nothing explained in the previous
section. Hence, we expect that the dimensional reduction on ^5,
and the SymTFT (<ref>) for different effective values of N
still play an important role, but new structures are required to
describe the 5-brane theories and how they glue the SymTFTs.
Since our Symmetry Theory at hand has several boundaries, we fix some
terminology for them to avoid confusions, see figure
<ref>: we refer to the holographic 4d half-space as
holographic boundary or physical boundary, as in standard SymTFTs,
because it supports the local degrees of freedom of the d=4=4
theory; the same applies to the holographic 3d space supporting the
BCFT_3, which we will also refer to as the 3d boundary of the 4d
physical theory. The topological boundary is that at which topological
boundary conditions are set for the SymTFTs. Finally, the boundary of
the 5d bulk spacetime, at which the ^5 shrinks and spacetime
ends, is referred to as ETW boundary. We also sometimes use ETW brane
or ETW configuration for the complete system of 5-branes and ETW
boundary providing the bordism to nothing for the asymptotic 5d bulk,
in practice the AdS_5(×^5) region.
§.§ The SymTFT Fan: Gluing SymTFTs via U(1)s
In this section we discuss the structures underlying the gluing of the
different SymTFT wedges via the 5-brane theories into the SymTFT
Fan. This structure of the Symmetry Theory connects very nicely with
several recently introduced tools in the generalisation of the concept
of SymTFTs.
§.§.§ Relation to SymTrees
We now address the key point about how the SymTFTs in the different ^5 regions of the geometry are related. The crucial observation is that, as explained in section <ref>, in the full supergravity description, the regions near the 5-branes correspond to spikes locally corresponding to AdS_4×^2 times a cone over ^3, as befits the local geometry around a backreacted stack of 5-branes. For simplicity, in what follows we focus on a single stack of D5-branes (NS5-branes work similarly due to S-duality), and we denote by n the number of 5-branes in the stack and P its linking number.
This implies that there are n units of RR 3-form flux over the ^3 (NSNS 3-form flux for NS5-branes), and there is a non-trivial background of the NSNS 2-form B_2 over the ^2 (RR 2-form for NS5-branes)
∫_^2B_2=P .
This implies that when the solution approaches a 5-brane, the geometry
corresponding to the compactification on ^5 with N units of RR
5-form flux splits into a region corresponding to ^5 with N-nP
units of RR 5-form flux (the 5d region “across” the 5-brane in
figure <ref>), and a region corresponding to a
compactification on ^2×^3 with nP units of 5-form flux.
The latter arises from
∫_^2×^3F̃_5=∫_^2×^3(F_5+B_2F_3)=nP ,
where we have used the fact that there are no D3-brane sources and hence no F_5 flux in the ^2×^3 regions. In other words, the ^2×^3 is homologically the difference of the two ^5's in both sides of the 5-brane, see figure <ref>, hence the total flux of F̃_5 must be conserved.
The splitting of the geometry implies a splitting of the SymTFT, as follows. As suggested above, the two ^5 regions (on both sides of the 5-brane) lead to two SymTFTs of the kind (<ref>) with coefficients N and M ≡ N-nP, which we denote by SymTFT_N and SymTFT_M, respectively. On the other hand, out of their meeting locus there sticks out another SymTFT (which we denote by SymTFT_^2×^3), obtained from the reduction of the topological sector of the 10d theory on the compact ^2×^3 geometry with the above-mentioned fluxes and background fields. The structure of this SymTFT_^2×^3 will be discussed later on, but we anticipate it contains a coupling (<ref>) with coefficient nP. The three SymTFTs meet at a junction, which supports non-topological degrees of freedom, to be discussed later on. The complete picture of the Symmetry Theory is hence a modification of figure <ref> in which each 5-brane is replaced by a branch SymTFT, as depicted in figure <ref>.
This local structure (local in the sense that it is away from the physical and topological boundaries of the SymTFTs) is identical to the SymTrees introduced in <cit.>, in which a SymTFT can split into multiple decoupled SymTFTs at a (non-topological) junction theory. However, it is crucial to notice that there are the following two important (and related) differences between our setup and the SymTree constructions in <cit.>:
∙ The first is that in the SymTrees in <cit.> the splitting of the `trunk' SymTFT into the branch SymTFTs happens as one moves from the topological boundary to the physical boundary of the complete Symmetry Theory; in our present setup, the splitting of the SymTFTs happens along a direction parallel to the boundaries of the complete Symmetry Theory, namely each of the SymTFTs in the tree (as well as the junction theory) extend across the Symmetry Theory sandwich between the topological and the physical boundaries. On the topological boundary, this implies that each of the SymTFTs after the splitting has its own set of topological boundary conditions, rather than being determined by the junction theory as in <cit.>. On the physical boundary, the parent SymTFT_N ends up on the 4d SCFT on the half-space, whereas interestingly all the SymTFTs after the splitting end up on the same 3d BCFT. This is related to the next point.
∙ The second main difference, related to the previous one, is that in <cit.> the splitting of the SymTFT is a consequence of the existence of an energy scale below which the full theory is separated into different decoupled sectors. In our context, both the 4d physical theory and its 3d boundary theory enjoy superconformal symmetry, so there are no scales in the system, hence the splitting of the SymTFT is not controlled by the criteria explained in <cit.>. The difference is clearly displayed in the holographic dual, where, in the SymTrees in <cit.>, the splitting occurs along the holographic direction, hence is controlled by a physical energy scale, whereas in our setup the splitting occurs along a direction parallel to the holographic boundary and takes place at all scales. In fact, in the supergravity solution the amount of separation between the branches after splitting increases with the distance to the holographic boundary in a characteristic fan-like structure; this nicely dovetails the behaviour in a setup with conformal invariance, where the physical separation is related to the energy scale as dictated by dimensional analysis, due to the absence of fundamental energy scales.
From the above argument, we expect on general grounds that the fan-like structure will occur for the holographic dual of any CFT with BCFT boundary conditions. This is a particularly useful observation since explicit supergravity solutions of gravitational duals of CFTs on spacetimes with boundaries are scarce. We thus expect that our observation of the role of SymTFT Fans in 4d =4 SYM with boundaries extends to more general classes of CFTs with boundaries, and is useful to uncover their Symmetry Theory structures, even when the supergravity solution is not known.
§.§.§ The 5-brane SymTFT
Let us now go into more detail on the SymTFT_^2×^3 corresponding to the reduction of the topological sector of the 10d theory on ^2×^3. The first observation is that, in a sense that will be made more precise later on in the context of nesting, it corresponds to the 5d SymTFT of the 4d physical theory of the stack of n 5-branes compactified on ^2 (and propagating on AdS_4 or whatever replaces it in more general contexts). This has a (n)≃(n)⊕(1) gauge symmetry. Leaving the (1) factor for later on, we focus on the non-abelian factor, which leads to a 4d supersymmetric (n) pure SYM theory. The configuration is actually closely related to the realisation in <cit.> of this gauge theory in the context of the Klebanov-Strassler holographic dual <cit.>. In fact, this holographic dual is based on a cone over T^1,1≃^2×^3 with n units of RR 3-form flux over the ^3 and nP units of 5-form flux over ^2×^3. Therefore the derivation of the SymTFT of the 5-brane theory essentially follows <cit.>. We will not need its detailed structure and simply note that it includes a coupling (<ref>) with coefficient nP due to (<ref>); we refer the reader to appendix A in <cit.> for additional details.
§.§.§ The junction theory
In addition to the (n) theory, there is the (1) degree of freedom. As explained for SymTrees in <cit.>, the (1) is actually not localised on the 5-brane physical theory, but rather belongs to the junction theory. In fact, from our earlier discussion about the general structure of the splitting of SymTFTs and the matching of the B_2,C_2 sectors with action (<ref>) with coefficients N, M, nP, the local structure of the junction is identical to that in the SymTree corresponding to an adjoint Higgsing (N)→ [ (N_1)⊕(N_2) ], with N_1=nP and N_2=M in our case. Following <cit.>, the (1) dynamics can be isolated as the decomposition (N)→(N_1)⊕(N_2)⊕(1) inherited from SU(N)→ [SU(N_1)× SU(N_2) × U(1)]/_L with L= lcm (N_1,N_2) the least common multiple of N_1, N_2.
Let us provide a simple heuristic argument that there is a (1) theory supported at the junction theory describing the gluing of two ^5's and a ^2×^3 geometry (with 5-form fluxes related by their homology relation). The argument is in the spirit of the SymTree structure in <cit.> for a Higgsing, but, as explained, the local structure of the SymTrees applies analogously to our case. Consider a stack of m D3-branes located at a conifold singularity, and split it into two stacks of m_1, m_2 D3-branes (with m_1+m_2=m) located at slightly different positions away from the singular point. We may take the distance between the two points much smaller than the (already small) distance from them to the singular point. This corresponds to performing a Higgsing of the (m)⊕(m) Klebanov-Witten theory <cit.> to the diagonal =4(m) SYM theory, which is subsequently Higgsed down to two decoupled =4(m_1) and (m_2) SYM theories. At the level of the Symmetry Theory, the above system is described by a SymTree, where one branch corresponds to a SymTFT_^2×^3 (i.e. reduction over the base space of the conifold singularity T^1,1≃^2×^3), which turns into a SymTFT_m (i.e. reduction on the ^5 around the joint stack of m D3-branes) to reproduce the Higgsing to the diagonal SU(m), which then splits in a junction into a SymTFT_m_1 and SymTFT_m_2 (corresponding to the reductions on the ^5's around the individual stacks of m_1 and m_2 D3-branes). Overall, we have a SymTree with asymptotic branches given by a SymTFT_^2×^3 and the two ^5 theories, SymTFT_m_1 and SymTFT_m_2. The junction theory is obtained by smashing together the transition SymTFT_^2×^3 → SymTFT_m and the junction of the SymTFT_m, SymTFT_m_1 and SymTFT_m_2. The latter junction was studied explicitly in <cit.>, where it was shown that it supports a (1) gauge theory, arising from the group theory analysis of the Higgsing
(m)→[(m_1)⊕(m_2)]=(m_1)⊕(m_2)⊕(1)_L ,
with L= lcm (m_1,m_2). On the other hand, the former transition, associated to the Higgsing (m)⊕(m)→(m) does not support any (1)'s, because the Higgsing by the bifundamental matter lowers the rank; hence we do not expect additional non-trivial degrees of freedom. The complete junction theory is therefore a 4d (1) gauge theory identical to that associated to the Higgsing (<ref>).
As explained, this applies to the local structure of the trivalent SymTree describing the crossing of the 5-brane in the Symmetry Theory of the 4d =4(N) SYM on a half-space coupled to BCFT_3 degrees of freedom. We may even follow the analysis in <cit.> to write down the junction conditions between the background fields of the 1-form symmetries in the different SymTFTs. In our case, for a SymTree corresponding to a D5-brane SymTFT_^2×^3 branch, focusing on the coupling of the junction theory to the RR 2-form background field C_2, at the location of the junction we have
.m_1/gC_2^(m_1)|_ jnct.=.m_2/gC_2^(m_2)|_ jnct.=.m_1/gC_2^(m)|_ jnct. .
Here the relation is in _g, with g= gcd(m_1,m_2), and in
our case[Note that in our system, we have m_2+m=m_1,
i.e. m=m_1-m_2 as opposed to the above Higgsing, in which
m=m_1+m_2. The sign flip is however irrelevant for the formal
arguments to derive the presence of the (1) in the junction
theory and the derivation of the junction conditions. It is formally
equivalent to considering that m_2 describes a stack of
anti-D3-branes.]m_1=N, m_2=nP, m=M. The superindex in the
fields C_2 corresponds to its value in each of the SymTFTs,
evaluated at the location of the junction. There is a similar
gluing condition for B_2.
An interesting implication of the above junction conditions is that there is a local 1-form symmetry _g. In fact, this reproduces the electric 1-form symmetries studied in particular cases in section <ref>. For instance, for the boundary conditions corresponding to the T[SU(N)] theory, the Symmetry Theory contains one stack of N D5-branes, each with linking number 1, joining the SymTFT_N of the asymptotic 5d theory, and a trivial SymTFT corresponding to the region with no 5-form flux near the ETW boundary. The jump in 5-form flux is hence from N units directly to zero, and the junction conditions above lead to an unbroken _N symmetry, matching the expected electric 1-form symmetry. Actually, the specific symmetry realised in the physical theory is determined by the choice of boundary conditions; this can be exploited to reproduce the electric 1-form symmetries for other gaugings SU(N)/_p, although we leave the details as an exercise for the interested reader.
Similarly, for the configuration corresponding to a single D5-brane with linking number N, the junction conditions indicate an unbroken _N, in agreement with the field theory result in section <ref>. We will provide a different argument for the _g symmetry in the general case in section <ref>.
In the following section we provide a rederivation of the above results from a different perspective, which exploits the worldvolume fields and couplings in explicit D5-branes.
§.§.§ Retraction of the 5-brane SymTFT
An interesting operation introduced in <cit.> for
SymTrees is the retraction of one or a subset of the branches, which
in their context corresponded to smashing together the physical
boundary of those branches with the junction theory. In our present
setup, we can also consider the retraction of branches in the SymTree,
and in particular it is interesting to consider the retraction of the
branches corresponding to the 5-branes. This now corresponds to simply
collapsing the corresponding SymTFT_^2×^3, bringing the
(n) physical theory on top of the junction theory. In terms of
figure <ref>, we collapse the blue/green lines of each
physical 5-brane theory with the blue/green line of its corresponding
junction theory. The result of this retraction should precisely
correspond to figure <ref>, with a fan of SymTFTs of the
kind (<ref>) separated by the non-topological theories on
the worldvolume of explicit 5-branes.
In fact it is possible to recover the main properties derived from the geometric realisation of the SymTree junction by using the properties of explicit 5-branes in the system, as follows. Since the interesting structure is related to the center of mass U(1), we simply take a single D5-brane, i.e. n=1 (general n simply introduces some multiplicities in the coefficients; also NS5-branes lead to similar results up to S-duality). Let us consider the topological couplings arising from the D5-brane worldvolume, in particular
∫_D5 ( C_4 ℱ_D5+ C_2 ℱ_D5 ℱ_D5 ) .
where
ℱ_D5=B_2 + F_D5 .
Here B_2 is the pullback of the bulk B_2 on the D5-brane worldvolume, and F_D5 is the field strength for the D5-brane worldvolume U(1) 1-form gauge connection F_D5=dA_D5. In (<ref>) and similar forthcoming expressions we will skip numerical factors in the coefficients, and just highlight the parametric dependence on the flux or brane multiplicities.
We now use the fact that the linking number is given by the total worldvolume monopole charge, namely
∫_^2 F_D5=P .
This is the D5-brane probe version of the supergravity background (<ref>), because the NSNS 2-form B_2 in the 5-brane backreacted geometry encodes the 2-form B_2 in the probe approximation together with the 5-brane worldvolume gauge field A_D5 (which is just a trivialisation of the latter). Hence we get the 4d couplings
P∫_4d ( C_4 + C_2 ℱ_D5 ) .
The first term in (<ref>) describes the coupling of the D5-brane to the RR 4-form, and implies that the RR 5-form flux on ^5 jumps by P units as one crosses the D5-brane. Regarding the second term, the contribution of B_2 in ℱ_D5 leads to a coupling
P∫_4d C_2 ∧ B_2 .
This nicely reproduces the topological coupling of the 5-brane branch SymTFT_^2×^3 in the former SymTree before the retraction. It accounts for the jump in the coefficient of the trunk SymTFT (<ref>) by P as one crosses the 5-brane, nicely dovetailing the change in RR 5-form flux discussed in the previous paragraph.
Finally, the remaining term has the structure
P∫_4d C_2 ∧ F_D5 .
This should be interpreted as the coupling of the U(1) in the junction theory with the background field C_2, as anticipated in the previous section. Note that in the case n=1, the (n) piece of the 5-brane theory is trivial, so the 5-brane theory is just the (1) junction theory.
Note that for a D5-brane, the junction theory couples to the background field C_2, ultimately related to the 1-form symmetry of the holographic dual 4d SCFT. Clearly NS5-branes lead to similar couplings for B_2, the background field ultimately related to the dual 1-form symmetry of the holographic dual 4d SCFT. In later sections this will become manifest in the behaviour of diverse charged operators upon crossing the 5-branes.
§.§.§ A remark on Nesting of the 5-brane SymTFT
We would now like to remark on an interesting point. We have emphasised that the supergravity solution for the ETW configuration with the 5-brane sources in section <ref> is the gravity dual of the 3d Gaiotto-Witten BCFT_3. It is thus reasonable to expect that at the topological level the structure of the Symmetry Theory in the ETW configuration, with the SymTree-like structure in figure <ref> corresponds to the Symmetry Theory of the 3d BCFT.
This however leads to a seemingly striking feature: The 3d BCFT is actually coupled to a set of non-topological 4d gauge theories, i.e. the physical 4d theory on the 5-brane worldvolumes. Each of the latter provides the physical boundary of a 5d SymTFT_^2×^3, while the 3d BCFT is the physical boundary of a fan of SymTFTs separated by junction theories.
In other words, the 3d BCFT lies in a corner of a 5d SymTFT (or rather, a set of them) bounded on one side by a 4d physical theory describing the gauged version of the flavour symmetry (or a set of them), and on the other side by a SymTFT (or rather, a junction theory at the intersection of several SymTFTs).
Although this structure may seem exotic, it falls in the recent generalised structure of Nesting of Symmetry Theories introduced in <cit.>. Specifically, this structure provides the Symmetry Theory for d-dimensional physical theories whose flavour symmetries are described in terms of a (d+1)-dimensional physical theory sticking out of the original one. This whole configuration is regarded as the physical boundary of a Symmetry Theory, given by the (d+2)-dimensional SymTFT of the (d+1)-dimensional flavour physical theory, bounded by the (d+1)-dimensional SymTFT of the original d-dimensional physical theory. In other words, the d-dimensional physical theory sits at a corner of a (d+2) SymTFT, at the intersection of a boundary given by the (d+1) flavour physical theory and another boundary given by the (d+1)-dimensional SymTFT.
In our case, we have a similar (but even more complex) setup. The BCFT_3 is sitting at the corner of the 5d SymTFT_^2×^3, at the intersection of a 4d boundary given by the physical 5-brane theory and a 4d boundary given by the junction theory of the SymTFT_^2×^3 with the ^5 SymTFTs. In fact, the situation is even more complex because there is one such structure per 5-brane, so the BCFT_3 is sitting at the 3d corner of several such nestings.
Although the details of this picture are not essential for our purposes, we will refer to this setup at various points in later sections.
§.§ Topological operators and crossing the SymTFT Fan
We have discussed the behaviour of the different topological fields in the Symmetry Theory of the 4d =4(N) theory on spacetime with a boundary coupled to a Gaiotto-Witten BCFT_3. In this section we discuss the behaviour of various topological operators, in particular as they move across 5-branes between different SymTFT wedges.
§.§.§ Generalities
Since our SymTFT Fans have a local SymTree structure, studying the motion of operators across 5-branes in our setup is similar to studying the motion of topological operators among the trunk and the branch SymTFTs in a SymTree in <cit.>. The only differences arise from the different orientation of the SymTree with respect to the topological and physical boundaries of the SymTFTs.
For instance, consider topological operators stretching between the two boundaries of a SymTFT in the SymTree in our setup (i.e. describing charged operators), and localised on the SymTree branches; they behave similarly to symmetry operators in <cit.>, i.e. topological operators in the bulk of the SymTFT not ending on its boundaries. Hence, in our context, charged operators in the trunk SymTFT of the SymTree can be moved across the junction theory and become charged operators of one of the branch SymTFTs, or of several of them simultaneously, with the possibility that they leave some operator of the junction theory along the way, see figure <ref>.
In section <ref> we had introduced the complementary representation of our Symmetry Theories by retracting the SymTFT branches corresponding to the 5-branes, as a set of wedges of SymTFTs (<ref>) with different coefficients, separated by explicit D5- and NS5-branes carrying induced D3-brane charge to ensure conservation of the 5-form flux.
It is easy to follow this retraction in the configurations in figure <ref>, which basically amounts to smashing together the two blue lines corresponding to the junction theory and the physical boundary of the 5-brane branch SymTFT, and stacking the possibly present defects of the latter (labelled V^(2)) on top of the possibly present defect of the junction theory. The set of possibilities thus reduces, as cases (ii) and (iv) become of the same kind, and so do the cases (v) and (vi).
In the discussion of topological operators below we will find that the same crossing transition admits different brane interpretations in terms of the full SymTree configuration or its retraction. This will be discussed in the explicit examples below.
Regarding topological symmetry operators, they are characterised by the fact that they link the charged defect operators above, hence, in our setup they are localised in the interval between the physical and the topological boundaries. A first kind is those which are localised also in the SymTree. When moved across the 5-branes, their behaviour is similar to that of topological symmetry operators in the SymTree in <cit.>. For illustration of several possibilities, one may take those in figure <ref>, but considering them localised, rather than stretched, in the SymTFT interval. Their behaviour upon retraction of the 5-brane SymTFTs also follows that indicated above. A second class of topological symmetry operator is obtained when they stretch in the direction parallel to the physical and topological boundary of the Symmetry Theory, and hence hit the junction theory (or several of them). The possible behaviours of such operators at the junction are similar to the behaviour of topological defect operators in <cit.>, and some particular examples are shown in figure <ref>.
We will study some particular examples of operators and their behaviour with respect to the junctions in the coming sections.
§.§.§ Line operators, F1s and D1s
We start the discussion in the SymTFT_N which is dual to the bulk of the 4d =4(N) SYM theory. As is familiar since <cit.>, the basic set of topological operators are realised in string theory in terms of F1 or D1 strings stretching from the topological to the physical boundary (representing Wilson and 't Hooft line operators, respectively), and D1 or F1 string worldsheets linking them (representing the symmetry generating operators, respectively).
Consider now the SymTFT_M theory at the other side of e.g. a D5-brane (with M=N-nP in earlier notation). In this SymTFT we have similar sets of topological operators realised in terms of F1 and D1 strings. This seemingly leads to the following puzzle. Consider for instance the F1 stretched between the topological and physical boundaries in the SymTFT_N, and act on it with a D1 linking it, so it transforms by a phase e^2π i/N. We can consider the same kind of configuration, but in the SymTFT_M, in which case the charged operator transforms by a phase e^2π i /M. The puzzle arises because seemingly both the F1 and the D1 can be moved from the SymTFT_N to the SymTFT_M without any discontinuity, i.e. without crossing the D5-brane, by simply avoiding it in the internal geometry. Yet the phase transformation seems to jump discontinuously in the transition.
An equivalent way to phrase the above puzzle is to consider the homological difference between the F1s in the two SymTFTs; this corresponds to an F1 which extends from the SymTFT_N to the SymTFT_M across the 5-brane (although it may avoid it in the internal space, so no actual crossing occurs). Considering a D1 which links this F1, it seemingly acts by introducing different phases in the different SymTFTs.
This puzzle was already addressed in <cit.>, in the SymTree which describes the adjoint Higgsing (N)→(M) ⊕(N-M)⊕(1). The configuration of an F1 starting in the SymTFT_N, crossing the junction and entering the SymTFT_M (equivalently the SymTFT_N-M) describes the relation between the Wilson line operators in the different theories. As explained in <cit.>, when translating the stringy realisation in terms of a F1 into field theory operators, the F1 in the SymTFT_M corresponds to an (M) Wilson line, but with the extra dressing of a Wilson line of the (1)⊂(M), which ultimately is recast as an operator of the (1) junction theory (and similarly for the SymTFT_N-M). This accounts for the extra phases required to compensate for the jump in the phase when one acts on the object with a D1 linking it.
Clearly, the same explanation holds in our context, with the SymTFT of
the 5-brane (with its induced N-M units of D3-brane charge) playing
the role of the SymTFT_N-M. In terms of the diagrams in figure
<ref>, the topological operators crossing the junction
are dressed by a junction theory operator, as in diagram (iii). Coming
back to the original picture with the two F1s describing charged
operators in the SymTFT_N and SymTFT_M, the above argument means
that the same microscopic object (the F1) corresponds to two different
operators in the two theories, and when transforming one into the
other, there arises an additional dressing by an operator of the
U(1) junction theory. In terms of the diagrams in figure
<ref>, the topological operator picks up a junction theory
operator upon crossing the 5-brane, as in diagram (ii). Clearly a
similar discussion holds for D1 line operators.
A complementary microscopic viewpoint on the above discussion is as
follows. From the string theory perspective, both the F1 and the D1
charges are initially -valued. But the presence of e.g. N units
of 5-form flux in the compact space implies there are physical
processes which violate their charge in N units. These are the
baryonic vertex given by D5-brane wrapped on the 5d compact space,
which emits N F1 strings <cit.>, and its S-dual
wrapped NS5-brane, which emits N D1 strings. This allows one to recast
the 5d action (<ref>) in terms of _N-valued fields in a
5d topological _N theory. Our statements mean that these discrete
fields for the different SymTFTs are to be identified only up to an
overall normalisation, which in fact corresponds to the junction
conditions (<ref>).
§.§.§ The baryon vertex
The change in the order of the discrete symmetry between the SymTFTs is hence encoded in the existence of the baryon vertex and the dual baryon vertex, and how they behave when moved across the 5-branes, as we study in this section.
In short, as we will see in explicit examples below, in the full SymTree configuration the crossing transitions can be understood in terms of geometric relations and Freed-Witten consistency conditions [As usual we use this term to refer to incompatibilities of branes and fluxes in general for non-torsion classes <cit.>, i.e. beyond the original Freed-Witten study of the torsion case <cit.>.]. On the other hand, in the Symmetry Theory after the retraction, the crossing transitions can be understood in terms of Hanany-Witten brane creation effects <cit.> involving the explicit 5-branes. This is expected given the general relation between Freed-Witten consistency conditions and Hanany-Witten effects (see <cit.> for discussion).
For simplicity, we focus on the setup of a single D5-brane stack, as has been implicitly done in the previous pictures, with a single D5-brane with linking number P. Hence, we consider a SymTree with a trivalent junction in which the SymTFT_N (corresponding to reduction on ^5 with N units of 5-form flux) splits into a SymTFT_M (with M units of flux on ^5) and SymTFT_^2×^3 with N-M=P units of 5-form flux (from 1 unit of RR 3-form flux over ^3 and a NSNS 2-form period P on the ^2). Equivalently, after the retraction, we have the SymTFT_N and SymTFT_M separated by one explicit D5-brane wrapped on a maximal ^2⊂^5 and carrying P units of induced D3-brane charge due to the non-trivial integral (<ref>) of F_D5 over ^2.
Consider the baryonic vertex of the SymTFT_N, namely a D5-brane wrapped on the ^5 and emitting N fundamental strings <cit.> ending in the physical boundary corresponding to the 4d QFT on the half-space. As explained above, although the baryonic vertex corresponds to the identity operator in the SymTFT_N, it has a non-trivial meaning in the microscopic string theory realisation, as encoding the actual reduction of -valued charges to _N.
Let us consider the picture of the retracted SymTree, and move this baryonic D5-brane (and its N attached F1s) across the 4d subspace spanned by the D5-brane wrapped on ^2 (which we refer to as 4d D5-branes to avoid confusion). In the process of crossing to the SymTFT_M, there is a Hanany-Witten effect between the baryonic D5-brane and the P units of induced D3-brane charge on the 4d D5-branes, resulting in the creation of P fundamental strings stretching between them. One can now recombine them with P of the original N F1 strings and detach them from the baryonic D5-brane, leaving a D5-brane on ^5 with just M F1 strings, see figure <ref>. This is precisely the baryonic vertex appropriate to represent the identity in the SymTFT_M.
Note however that we still have the P detached strings, which now
stretch from the 4d D5-brane to the physical boundary of the
BCFT_3. In holographic language, they correspond to P Wilson
line operators, associated to the dynamical quarks localised at
the 3d boundary of the 4d CFT. In the language of the Symmetry Theory,
they define an operator of the junction theory left over after the
crossing, as explained above.
Let us rederive the above crossing of the baryonic operator but in terms of the full SymTree configuration, rather than its retraction. In this picture, the baryonic operator in the SymTFT_N, namely the D5-brane on the ^5 with N units of 5-form flux (hence emitting N F1 strings), is homologically equivalent to the baryonic operator in the SymTFT_M, i.e. a D5-brane on the ^5 with M units of flux (hence emitting M F1 strings) and a D5-brane on the ^2×^3 of the SymTFT_^2×^3 emitting P F1 strings due to the 5-form flux on this branch. It is easy to check that the latter are due to the induced D3-brane charge, and connect directly with the Hanany-Witten picture above: the D5-brane on ^2×^3 has P units of induced D3-brane charge due to the NSNS 2-form background (<ref>); the P induced D3-branes wrap on the ^3, which has 1 unit of RR 3-form flux (encoding the 4d D5-brane charge), hence have a Freed-Witten anomaly and must emit P F1 strings. The picture of emission of F1 strings due to 5-form flux or RR 3-form flux on the induced D3-branes is just a reflection of the fact that the 5-form flux in this branch arises from the Chern-Simons couplings, c.f. (<ref>).
We have thus rederived the crossing process in a simple geometric
fashion, and recovered the appearance of the operator on the SymTFT of
the 5-brane theory, see figure <ref> (analogous to the
transition (i)→ (ii) in figure <ref>, but for symmetry
operators). We note that, although the geometric picture of the
crossing would seem to suggest that there is no junction operator
created in the process, the (1) degree of freedom is delocalised,
hence we depict the presence of a junction operator, as discussed with
the explicit 4d D5-brane in the Hanany-Witten description. We also
emphasise that, although the operators we are discussing correspond to
the identity operator in each of the SymTFTs, there is non-trivial
information about how they are related upon crossing the
junction. This is the reflection of the junction conditions of the
underlying fields.
We finish by emphasising that the baryon vertices of the different SymTFTs can be used to derive the local _g 1-form symmetry unbroken by the junction conditions. Near the junction of the SymTFT_N, the SymTFT_M and the SymTFT_^2×^3, we can consider processes annihilating F1 strings in sets of N, or in sets of M (or equivalently, of N-M) or integer linear combinations thereof. So by Bezout's identity, F1 strings are conserved mod g with g= gcd(N,M). For instance, in the particular case of the SymTFT of the T[SU(N)] boundary conditions, we have one stack of N D5-branes with linking number 1, so M=0, and hence we get a _N 1-form symmetry, as discussed in the field theory setup in section <ref>, and in section <ref> from the viewpoint of the junction conditions.
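The mod-g conservation can be made explicit in a short sketch (ours; the numbers are illustrative, with the flux jumping from N to M=N-nP across a stack of n D5-branes of linking number P):
```python
from math import gcd

# Surviving 1-form symmetry Z_g at a junction where the 5-form flux jumps N -> M = N - n*P.
def surviving_1form(N, n, P):
    return gcd(N, N - n * P)        # gcd(N, 0) = N, so M = 0 leaves the full Z_N

# T[SU(N)] boundary conditions: N D5-branes of linking number 1, hence M = 0 and Z_N.
assert surviving_1form(5, 5, 1) == 5
# Single D5-brane with linking number N: again M = 0 and Z_N survives.
assert surviving_1form(7, 1, 7) == 7
```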
§.§.§ The dual baryon vertex
Let us now consider the dual baryon vertex, which is given by an NS5-brane wrapped on the ^5, emitting N D1 strings, ending on the boundary of the physical 4d theory on the half-space. In the absence of boundary for the 4d QFT, S-duality of 4d =4(N) SYM relates this to the baryonic vertex in the previous section; however, in the presence of boundaries for the 4d QFT, the Symmetry Theory contains explicit 4d D5- or NS5-branes (or their SymTFTs in the SymTree picture), which are not invariant under S-duality. Hence the dual baryon operator must be studied on its own.
We start with a dual baryon vertex of the SymTFT_N, given by one NS5-brane wrapped on the ^5 and emitting N D1 strings due to the 5-form flux. Let us first consider the picture of the retracted SymTree, namely with an explicit 4d D5-brane with P units of induced D3-brane charge, separating the SymTFT_N and SymTFT_M regions. If we now move the NS5-brane across the 4d D5-brane, there is a Hanany-Witten effect between the 4d D5-brane and the NS5-brane, leading to the creation of one D3-brane wrapped on the ^2 and stretched between the NS5- and the D5-brane (for a stack of a number n of 4d D5-branes, there would be n D3-branes created). In addition, there is a non-trivial Hanany-Witten effect of the NS5-brane with the P induced D3-brane charge on the 4d D5-brane, which creates P D1 strings stretched between them. Note that they can be equivalently regarded as P units of D1-brane charge induced on the previous D3-brane due to the non-trivial NSNS 2-form background on the ^2; for clarity we prefer to keep the discussion of the created D3- and D1-branes separate.
As in the discussion above for the F1 strings of the baryonic D5-brane, one can recombine the newly created P D1 strings with P of the D1 strings emitted by the NS5-branes, to obtain an NS5-brane emitting M D1 strings, precisely as required by the change in the 5-form flux on the ^5. In the process we are also left with P D1 strings stretching from the D5-brane to the boundary corresponding to the physical theory of the BCFT_3.
In addition, we have the single D3-brane, wrapped on the ^2 and stretched between the NS5-brane and the 4d D5-brane. An important point is now that the ^2 is topologically trivial on the ^5, but not on the worldvolume of the D5-brane (as it wraps precisely this ^2). This means that the boundary of the D3-brane on the NS5-brane is actually trivial, and the D3-brane can unwind and snap away. On the other hand, the boundary of the D3-brane on the D5-brane is non-trivial, so the D3-brane cannot unwind completely. The result is that the D3-brane wraps a 3-chain B_3 whose boundary is the ^2 wrapped by the D5-brane, ∂ B_3=^2. A minimal-volume representative of this 3-chain is a half-^3 in the ^5 at the location of the 4d D5-brane. In the picture of the geometry as an ^2×^2 fibration over a Riemann surface (the quadrant), the process and the structure of the 3-chain is shown in figure <ref>. In analogy with the previous section, we expect the P D1 strings and the D3-brane stretching out should correspond to an operator in the D5-brane theory. This can be made more clear using the full SymTree picture, as we do next.
Consider the crossing of the NS5-brane in the full SymTree, i.e. growing back the SymTFT_^2×^3 associated to the 5-brane theory. We start with the dual baryon vertex of the SymTFT_N, namely the NS5-brane wrapped on the ^5 with N units of 5-form flux and emitting N D1 strings. To move it across the junction, we simply use the homology relation among the ^5's and ^2×^3. We obtain an NS5-brane wrapped on the ^5 with M units of 5-form flux, emitting M D1 strings, and an NS5-brane wrapped on ^2×^3, with P units of 5-form flux, emitting P D1 strings. The former is simply the dual baryon vertex of the SymTFT_M, as expected. The NS5-brane on ^2×^3, in addition to the Freed-Witten anomaly enforcing the emission of the P D1 strings, has a Freed-Witten anomaly due to the single unit of RR 3-form flux on ^3, which enforces the emission of a D3-brane wrapped on ^2 (as above, we keep their discussion of the emitted D3- and D1-branes separate). The situation is very similar to the above Hanany-Witten derivation of the crossing, but with an interesting novelty. The ^2 wrapped by the emitted D3-brane is non-trivial in the ^2×^3 geometry of the SymTFT_^2×^3, although it is trivial in the global geometry. Namely, the D3-brane can unwind but only in the region of the junction, in which the ^2×^3 geometry blends with the ^5 in which the ^2 is trivial. This means that the 3-chain B_3 wrapped by the D3-brane stretches from the 4d D5-brane on the physical boundary of the SymTFT_^2×^3 to the junction theory, and ends there.
In formal terms, the dual baryon operator in the
SymTFT_^2×^3 emitting P D1 strings is non-genuine, so
it must be dressed by an operator which stretches to the junction
theory, as sketched in figure <ref>.
It is satisfying to recover the by now familiar phenomenon that in
SymTFTs arising from holography subtle effects involving topological
operators in the SymTFT description follow easily from long-understood
string theory physics. We will encounter similar phenomena in even
richer setups in the next sections.
§ INCLUSION OF 7-BRANES
In the previous section we have emphasised that the Symmetry Theory of our system has the local structure of a SymTree, albeit with a different location, as compared with <cit.>, of the topological and physical boundaries. This implies that the junction theory reaches the “topological” boundary, and requires the introduction of physical, non-topological, boundary conditions there. In a string theory context, a natural choice is to use string theory objects to provide boundary conditions. In this section we explore the introduction of 7-branes on which the 4d 5-branes end.
§.§ Generalities
We will phrase our discussion in terms of the retracted SymTree picture, with explicit 5-branes in the configuration. This facilitates the discussion of 5-branes ending on 7-branes. We note that in order to carry out a similar analysis in the full SymTree picture, one should introduce a SymTFT for the 7-brane theory and regard it as the boundary of the SymTFT_^2×^3, in the spirit of the nesting of SymTFTs in <cit.>.
§.§.§ The 7-brane boundary conditions
As in previous sections, we focus on the case of 4d D5-branes separating the SymTFT_N and SymTFT_M (by S-duality, similar considerations follow for NS5-branes ending on the S-dual 7-branes). We also focus on the case of a single D5-brane, with P=N-M units of induced D3-brane charge. As is familiar (already from <cit.> in a T-dual setup), D5-branes can end on NS5-branes or on D7-branes. The two choices differ in what boundary condition they impose on the D5-brane worldvolume gauge fields. The choice of D7-branes is natural in our setup since it allows us to nicely translate the flavour symmetries realised on the D5-branes to the D7-brane worldvolume theory.
Hence, we consider D7-branes wrapped on ^5, and spanning a 3d Minkowski subspace on the AdS_4 slice, initially at a location pushed onto the topological boundary, so that they provide boundary conditions for the D5-branes ending on them. Since the D7-branes are located at a boundary, there are no closed paths in the Symmetry Theory which go around the D7-branes, and the axion C_0 can be defined globally, i.e. there is no need to specify a branch cut (equivalently, it is sticking out outwards, away from the Symmetry Theory).
It is interesting to notice that the D7-branes just introduced are of
the kind considered in <cit.>. Following it, we can
consider moving the D7-branes into the 5d bulk of the Symmetry
Theory. In doing that, it is necessary to introduce the branch cut
implementing the SL(2,) monodromy of the 7-brane. From our above
discussion, it is clear it stretches from the location of the 7-brane
in the bulk to the topological boundary. Hence, the endpoint of the
branch cut at the topological boundary still encodes the relevant
information about the boundary condition of the D5-brane (or junction)
theory. From <cit.>, this choice of branch cut
orientation corresponds to the 7-branes creating a duality
interface. Namely, the topological boundary conditions of the
SymTFT_N and SymTFT_M are related by a non-trivial SL(2,)
duality transformation due to crossing of the branch cut. As
emphasized in <cit.>, although the particular shape of
a D7-brane branch cut is not physically observable, its endpoint at
the topological boundary is, and leads to observable effects.
Hence the topological boundary of the SymTFT Fan is a set of gapped boundary conditions for the different SymTFTs, separated by 7-branes on which the 5-branes are ending, see figure <ref>a. If the 7-branes are pushed into the bulk, to make their effects more manifest, the SymTFT gapped boundary conditions are separated by the endpoints of the 7-brane branch cuts, see figure <ref>b.
Since we are focusing on the case of a single D5-brane, it is clear that it suffices to consider a single D7-brane on which it ends. We would like however to make a comment regarding the general case of a stack of n>1 D5-branes. Recall that the worldvolume gauge fields on the D5-brane have Neumann boundary condition on the physical boundary, so as to couple to the currents of the (n) enhanced global symmetry of the BCFT_3, as required by the holographic interpretation. On the other hand, they have Dirichlet boundary conditions on the D7-brane. This implies that there is a version of the s-rule in <cit.>, which requires that each D7-brane can be the endpoint of at most one D5-brane, so that one needs n D7-branes. A complementary heuristic explanation is that the (n) global symmetry of the physical theory is morally realised as the worldvolume (n) gauge symmetry on a stack of n D7-branes (more precisely, on the bound system of the stack of n D5-branes ending on n D7-branes)[This is similar to the realisation of flavour symmetries in Hanany-Witten setups when the flavour branes are moved off the interval between the NS5-branes, a process which creates new D-branes ending on the flavour branes.].
§.§.§ The 5-form flux and the need for explicit D3-branes
We return to the setup with a single D5-brane separating the SymTFT_N and the SymTFT_M with P=N-M, and ending on a D7-brane in the bulk of the Symmetry Theory, with a branch cut stretching to the topological boundary.
As explained earlier, the jump in flux across the D5-brane, implicit in the separation between the SymTFT_N and the SymTFT_M, is accounted for by the P units of induced D3-brane charge. Since the D5-brane ends on the D7-brane, which is localised in the 5d bulk, it would now be possible to cross from the SymTFT_N to the SymTFT_M without encountering any discontinuity, other than crossing the D7-brane branch cut. But the D7-brane branch cut simply implements the SL(2,) transformation, which does not act on the 5-form flux. Hence, we are forced to introduce an explicit stack of P D3-branes stretching from the D7-brane to the topological boundary of the SymTFT, see figure <ref>.
The D3-brane charge does not actually end on the 7-brane, but rather continues along the corresponding 5-brane in the form of induced
D3-brane charge. This is as it should be, since D3-branes cannot end
on 7-branes. This is also clear when looking at the 4d D3-brane
worldvolume coupling
P∫_4d C_2 B_2 ,
which is the continuation of the D5-brane coupling (<ref>) and explains the jump
in the coefficient of the 5d topological coupling B_2F_3 from the SymTFT_N to the SymTFT_M. Hence the structure of the SymTFT Fan is maintained even when the 5-branes end on the 7-branes.
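As a rough consistency sketch of this statement (ours, with signs, orientations and normalisations suppressed, and assuming the only boundary contribution arises at the 4d locus of the D3-branes): writing F_3=dC_2 and performing a gauge transformation B_2→ B_2+dλ_1, the bulk couplings vary by a localised term,
δ[ N∫_SymTFT_N B_2 dC_2 + M∫_SymTFT_M B_2 dC_2 ] ∼ (N-M)∫_4d λ_1 dC_2 = P∫_4d λ_1 dC_2 ,
which is compensated, up to the suppressed signs and total derivatives, by the variation P∫_4d C_2 dλ_1∼ -P∫_4d λ_1 dC_2 of the worldvolume coupling above.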
Although we will not use it, we would like to remark on the corresponding picture in the full SymTree. As explained before, the 5d SymTFT_^2×^3 is now bounded by the 4d Symmetry Theory of the 3d 7-brane theory, but which should now correspond to a junction theory beyond which we have the SymTFT_P of the stack of P D3-branes, see figure <ref>. Hence we again have a combination of SymTrees and Nesting of SymTFTs.
We conclude by noting that the effect of the D3-brane charge was crucial for the Hanany-Witten effect for the baryonic D5- and NS5-branes across the 4d D5-brane in sections <ref> and <ref>. In the configuration where 4d D5-brane ends on a D7-brane, the extra P explicit D3-branes from the D7-brane to the topological boundary guarantee that the same effect takes place if the baryon vertex is moved from the SymTFT_N to the SymTFT_M across the D7-brane branch cut.
In the following section we discuss in more detail the effects related to moving objects between the SymTFTs across the branch cut. As expected from <cit.>, this implements the SL(2,) monodromies corresponding to a duality interface in the theory.
§.§ The SL(2,) monodromy
In this section we discuss the effects experienced in the Symmetry
Theory and its topological operators as one crosses from the
SymTFT_N to the SymTFT_M across the D7-brane branch cut (and the
P explicit D3-branes). This is a particular case of the phenomena
discussed in <cit.>, and implements the SL(2,)
monodromies corresponding to a duality interface in the theory.
As above, we focus on the simple setup of a single D5-brane ending on a single D7-brane. In our discussions below, the effect of the D3-brane charge (either explicit or induced on the 4d D5-brane), which has already been mentioned, will be included implicitly, except for a few explicit mentions.
§.§.§ Stacking of TQFT
We now derive the effect of the D7-brane branch cut on the Symmetry Theory. Recall that the physical effect of the D7-brane branch cut in string theory is that the 10d RR axion shifts C_0→ C_0+1. From the perspective of the holographic dual 4d =4(N) SYM theory, this corresponds to a shift of the θ angle. There is a mixed anomaly between the shift of the θ angle and the electric 1-form symmetry, encoded in the 5d TFT
S=2π i (N-1)/N∫_Y dθ/2π∪ P( B)/2 ,
where B is the background coupling to the electric 1-form symmetry and P is the Pontryagin square.
For the geometries of our interest (5d spaces foliated with 4d slices given by spin manifolds over which θ is constant) P( B) is even, so the only relevant piece is the 1/N term. We will also be sensitive only to the de Rham version of P∼ B^2.
We now follow the derivation in <cit.> of this 5d TQFT from the holographic setup. We use the well-defined 3-form field strength
F̃_3= dC_2-C_0dB_2 ,
to rewrite the 5d topological coupling (<ref>) as
N∫_5d B_2F_3=N∫_5dB_2 F̃_3-N∫_5d dC_0 B_2 B_2 .
The last term reproduces (with the usual redefinitions B_2∼ B/N) the 1/N term in (<ref>).
Hence, if we take an interval I across the D7-brane branch cut (so that C_0→ C_0+1 along it), and integrate the 5d coupling, its variation is given by
Δ∫_5d(- NdC_0 B_2 B_2 ) =
-N∫_4d B_2B_2 .
This is precisely the kind of variation required, corresponding to the stacking of a TQFT in the SymTFT (i.e. of a counterterm in the dual QFT), as befits the shift of the θ angle. One may worry that, on the two sides of the D5/D7 system, the 5-form flux changes by P units, and this requires the coefficient of the above coupling to jump as N→ M. Actually, this is solved because of the additional crossing of the P explicit D3-branes, which have a worldvolume coupling
P∫_4dC_0 B_2B_2 .
This amounts to P units of a boundary contribution of the coupling ∫_5d dC_0 B_2B_2, so that on the SymTFT_M side the coefficient is effectively shifted from M to N.
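Schematically (a hedged bookkeeping check, with overall signs and orientations suppressed), the two contributions on the SymTFT_M side combine as
M∫_4d B_2 B_2 + P∫_4d B_2 B_2 = (M+P)∫_4d B_2 B_2 = N∫_4d B_2 B_2 ,
reproducing the same stacking as in the variation above on the SymTFT_N side, since P=N-M.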
In short, the effect of the D7-brane branch cut is the stacking of a TQFT, according to the effect of the SL(2,) as a shift of the θ angle corresponding to the duality interface.
§.§.§ Line operators and (p,q) string webs
Let us consider the fate of line operators described in terms of F1 and D1 strings in the SymTFT_N as they cross the branch cut. As explored in section <ref>, this can be efficiently discussed by considering F1 or D1 strings which stretch between the SymTFT_N and the SymTFT_M across the junction theory. The crossing over the P explicit D3-branes leads to the effects already discussed in section <ref>, which account for the change in the order of the discrete valuedness of the background fields, and hence of the topological operators[Incidentally, note that in this context, the D3-branes separating the SymTFT_N and the SymTFT_M is the retracted version of a SymTree corresponding precisely to adjoint Higgsing. Namely, the SymTFT corresponding to the D3-branes is a SymTFT_P associated to reduction on ^5 with P units of 5-form flux.]. Hence we only need to consider their transformation across the branch cut of the 7-brane, i.e. the SL(2,) transformation [ 1 1; 0 1 ]. The F1 is a (1,0) string, so it is invariant under this action. On the other hand, the D1 is a (0,1) string, and hence turns into a (1,1) string, as shown in figure <ref>a. For charged line operators, this is just the manifestation of the transformation of monopoles into dyons upon a shift of the θ angle.
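Explicitly, acting with the monodromy matrix quoted above on the (p,q) charge vectors (a standard convention; possible orientation-dependent signs are suppressed),
[ 1 1; 0 1 ](1,0)^T=(1,0)^T , [ 1 1; 0 1 ](0,1)^T=(1,1)^T ,
so the F1 line is inert while the D1 line is mapped to a (1,1) line, as stated.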
It is interesting to consider the above transformation of the D1 strings when they are moved across the D7-brane, so that they cross over the D5-brane, see figure <ref>b. In this crossing there is a Hanany-Witten effect creating an F1 string stretching from the D7-brane to the D1 string, and turning the latter into a (1,1) string due to charge conservation in the resulting string junction. Hence we recover the same overall result, even though the D1 string has not crossed over the D7-brane branch cut.
Hence we recover the interpretation of D7-branes (with branch cuts stretched to the topological boundary) as duality interfaces in <cit.>, via their action on F1 and D1 strings.
§.§.§ Baryon and dual baryon vertices
In the discussion in the previous section we have briefly mentioned the jump in the order of topological operators when they cross from the SymTFT_N to the SymTFT_M. In order to get a more direct insight we quickly derive the transformation of baryon and dual baryon vertices as they move across the D7-brane branch cut (and the P D3-branes), extending the discussion in sections <ref> and <ref>.
We start with the baryon vertex in the SymTFT_N region, namely a D5-brane wrapped on the ^5 and emitting N F1 strings. Both the baryonic D5-brane and the F1 strings are left invariant when moved across the D7-brane branch cut. On the other hand, when moved across the P D3-branes, there are P newly created F1 strings. They combine with P of the original F1 strings emitted by the baryonic D5-brane, leaving the latter with M F1 strings. This corresponds to the baryon vertex of the SymTFT_M region. In addition, there remain P F1 strings being emitted by the D3-branes. In the full SymTree picture, these would correspond to the baryon vertex of the SymTFT_P region, as well as an operator of the (1) junction theory. The result is as in section <ref>, with the explicit D3-branes here playing the role of the induced D3-brane charge there.
Consider now the dual baryon vertex in the SymTFT_N region, namely an NS5-brane wrapped on the ^5 and emitting N D1s, see figure <ref>a. When moved across the P D3-branes, there is a Hanany-Witten creation of P suspended D1 strings, see figure <ref>b. Upon recombining, we have an NS5-brane emitting M D1 strings, and P D1 strings stretching from the P D3-branes, see figure <ref>c. These P D1s will be pretty much spectators in what follows, so we do not mention them explicitly, and focus on the NS5-brane. We now move the NS5-brane across the D7-brane branch cut, dragging its M D1 strings with it. Upon crossing, the NS5-brane turns into a (1,1) 5-brane, which emits M(1,1) strings, which turn into D1s when they cross the branch cut, see figure <ref>d. This reproduces the expected effect of the SL(2,) monodromy on the dual baryonic vertex.
It is interesting to continue manipulating the above object in order to relate it to a simple dual baryonic vertex of the SymTFT_M. To do so, we move across the 4d D5-brane the M D1 strings, which necessarily cross the D7-brane without possibly avoiding it. This crossing produces P F1 strings which combine with the (1,1) and D1 strings forming a (p,q) string web, see figure <ref>e, in analogy with section <ref>. At the topological level, we can split the (1,1) 5-brane, and the string web including its M attached (1,1) strings, as 1 NS5-brane emitting M D1 strings, and 1 D5-brane emitting M F1 strings which end on the D7-brane, see figure <ref>f.
The NS5-brane emitting M D1 strings is just the dual baryon vertex of the SymTFT_M. On the other hand, the baryonic D5-brane emitting M F1s ending on the D7-brane can be pushed onto the latter and regarded as a defect associated to the junction theory.
We hope these examples suffice to illustrate the behaviour of different objects in the configuration, and their interplay with the SL(2,) duality interface introduced by the D7-branes.
§ ORIENTIFOLD AND ORBIFOLD BOUNDARIES
§.§ Generalities on orientifold and orbifold 5-planes
In this section we discuss a generalisation of the above boundary
configuration by allowing for the introduction of orientifold planes,
and orbifolds related to them. We will focus on orientifold/orbifolds
which are localised at the boundary in the direction 3. Namely, we are
not considering for instance the well-studied case of O3-planes
parallel to D3-branes, leading to =4(n) or (n)
theories (see
<cit.>
for discussions of the holographic duals of such systems). Moreover,
we are also interested in configurations that preserve the same
symmetries and supersymmetries preserved by D5- and NS5-brane boundary
conditions, so as to stay in the set of boundary conditions studied in
<cit.>. This essentially restricts us
to consider the introduction of O5-planes in the same directions as
the D5-brane, namely 012 789, or orbifold 5-planes in the same
directions as the NS5-branes, namely 012 456.
Orientifold 5-planes (O5-planes for short) are defined by quotienting
by the orientifold action Ω R, where Ω is worldsheet
parity and R is a geometric _2 involution acting as a
coordinate flip in the directions 3456,
(x^3,x^4,x^5,x^6)→ (-x^3,-x^4,-x^5,-x^6). As discussed, this
preserves the same supersymmetries as a D5-brane along 012789. The 6d
plane fixed under R is an O5-plane, and there are several discrete
choices for its properties (in analogy with O3-planes in
<cit.>), classified by discrete RR and NSNS backgrounds
on the _3 geometry around it <cit.>. The
O5^--plane and O5^+-plane differ in the value of the NSNS 2-form
on an _2⊂_3, carry RR charge ∓ 2 (as measured
in D5-units in the double cover), and project the symmetry (n) on
a stack of D5-branes on top of them down to (n) or (n),
respectively.
There are variants of these, dubbed O5^±, which
arise when the non-trivial _2 RR background C_0 is turned on at
the O5-plane location. The O5^- has RR charge -1
and can be described as the O5^- with one stuck D5-brane on top, so
it leads to a gauge symmetry (n+1) when n additional D5-branes
are located on top of it. The O5^+ is an exotic
version of the O5^+-plane, has RR charge +2 and also leads to a
group (n) when n D5-branes (counting before orientifolding,
here n must be even) are located on top of it.
Let us briefly discuss orbifold 5-planes. They are obtained by
quotienting by the orbifold action R'(-1)^F_L, where R' is a
geometric _2 involution flipping the coordinates
(x^3,x^7,x^8,x^9)→(-x^3,-x^7,-x^8,-x^9). As discussed, this
preserves the same supersymmetries as an NS5-brane along 012456. The
6d plane fixed under R' is the orbifold 5-plane, and there are also
several variants <cit.>. Interestingly, the orbifold
5-planes are related by S-duality to O5-planes <cit.>; in
particular the perturbative one, which has a localised twisted sector
spectrum given by the 6d (1,1)U(1) vector multiplet, is S-dual to
the O5^- with 2 D5-branes on top, which realises the same localised
field content as an (2) vector multiplet in the open string
sector. Because of this kind of S-duality relations, we will focus our
discussion on the introduction of O5-planes, with additional NS5- and
D5-branes, and will not consider orbifold 5-planes any further.
§.§ Brane configurations
In order to describe Hanany-Witten brane configurations for systems of D3-branes ending on a set of NS5- and D5-branes in the presence of O5-planes, it is useful to consider the configuration in the covering space. There we have a _2 invariant system, with one stack of N semi-infinite D3-branes in x^3→∞ and its image stack in x^3→ -∞, ending on a (_2-invariant) `middle' configuration of NS5-, D5-branes, and D3-branes suspended among them, with the O5-plane sitting at x^3=0.
Thus the configuration is morally a back-to-back double copy of the kind of systems considered in previous sections. One key difference is that at the ETW boundary, which is given by the O5-plane (and possible additional 5-branes on top of it), the number of D3-branes need not vanish. This will be nicely reproduced in the gravity dual picture, as discussed later on. In general the number of D3-branes at the O5-plane location is different from the asymptotic value N, so we denote it by n in the following.
Given the above, the NS5- and D5-branes away from the O5-plane behave locally just like the configurations of NS5- and D5-branes in the previous sections. The only position at which the presence of the orientifold quotient is felt locally is precisely the O5-plane. Hence, we next describe the different possible configurations of 5-branes on top of the O5-plane. Each of them can then be used to construct a general class of 3d theories defining boundary conditions, by simply sprinkling additional NS5- and D5-branes and their _2 images.
§.§.§ Boundary conditions for O5-planes with no NS5-branes
The simplest possibility is that the O5-plane does not have additional
NS5-branes on top. This possibility corresponds to the boundary
conditions reducing the gauge symmetry considered in
<cit.>. Hence, we locally have a stack
of n D3-branes in the double cover, intersected by an O5-plane, and
we can then use standard orientifold rules to read out the local
breaking of the (n) symmetry due to the orientifold
projection. In this respect, recall that the orientifold projection of
the O5-plane on the D3-branes is of the opposite kind (namely
vs ) as compared with the action on D5-branes, because the
corresponding mixed open string sector has 4 DN directions
<cit.>.
For instance, the boundary conditions defined by an O5^+-plane reduce the symmetry (n) to (n), which corresponds to Class I in <cit.>. In more detail, splitting the 4d =4 vector multiplet in terms of a local 3d =4 vector multiplet and a 3d =4 hypermultiplet in the adjoint of (n), the (n) part of the vector multiplet is even under the O5^+-plane action (and the remaining generators are odd), and the part (plus a singlet) of the hypermultiplet is even (while the part is odd). This provides the definition of Dirichlet or Neumann boundary conditions for the different fields.
Similarly, the O5^--plane boundary condition project the (n)
gauge symmetry down to (n) (recall that n is necessarily even
for this orientifold projection), which corresponds to Class II in
<cit.>. In more detail, in the covering space the
(n) part of the 3d =4 vector multiplet is even (and the
rest is odd), while the part of the hypermultiplet is even
(and the + 1 is odd).
For the O5^--plane, the configuration is
equivalent to the O5^--plane with one stuck D5-brane on top. Hence,
to the above boundary condition we must add one localised
half-hypermultiplet flavour (in the fundamental of the local (n)
symmetry and charged under the _2≃ O(1) on the stuck
D5-brane) arising from the D3-D5 open string sector. Finally, the
O5^+-plane behaves similarly to the O5^+-plane,
with minor modifications due to the non-trivial value of the RR axion
C_0=1/2 on top.
Clearly, the above configurations can be enriched by adding additional D5-brane pairs on top of the O5-plane. At the level of the gauge theory, the main effect is the addition of extra localised flavours of the D3-brane gauge theory. We will not discuss these possibilities explicitly, but they are implicitly included in our analysis. For instance, as already mentioned, by allowing for a pair of D5-branes on top of an O5^--plane we reproduce the physics of the S-dual orbifold 5-planes. In fact, such orbifolds provide a realisation of Class III in <cit.>; in terms of the O5^--plane with two D5-branes stacked on top of it, one splits the stack of D3-branes as n=p+q, with the two stacks of p, q D3-branes ending on the two possible stuck D5-branes (equivalently, carrying opposite charges under the D5-brane (2)≃(1) worldvolume symmetry).
In addition, recall that the configurations can be completed by adding general configurations of NS5- and D5-branes obeying the rules of the configurations in previous sections (and their _2 images), to define general classes of quiver gauge theories providing Gaiotto-Witten BCFT_3 orientifold boundary conditions.
§.§.§ Boundary conditions for O5-planes with one stuck NS5-brane
A second possible local configuration corresponds to including one stuck NS5-brane on top of the O5-plane[This is possible for the O5^±-planes and the O5^--plane, but not for the O5^+-plane, because it requires C_0=1/2, and the orientifold image of an NS5-brane is not just an NS5-brane, but a (1,1) 5-brane.] (this was introduced in brane configurations in <cit.>, see <cit.> for review).
In the covering space, we have two _2 image D3-brane stacks ending on the NS5-brane from opposite sides in x^3, and the O5-plane crossing through their intersection. The D3-brane gauge symmetry in the covering space is (n)⊕(n) and it projects down to (n) after the orientifold. Hence, interestingly, in the quotient the (n) gauge symmetry is not reduced by the boundary conditions. We again can consider the different possible kinds of O5-plane in turn. They differ in how the orientifold acts on the 3d =4 hypermultiplet in the bifundamental (,) of the parent theory.
For the O5^+-plane, the bifundamental matter is projected down to a 2-index symmetric representation (plus a singlet) of the (n) symmetry. The simplest way to check this is to realise that giving a vev to this field corresponds to recombining the half D3-branes and moving the recombined infinite D3-brane away from the NS5-brane and along the O5-plane (i.e. the directions 789); for fields in the , the vev breaks the symmetry down to (n), in agreement with the expected symmetry on D3-branes intersecting an O5^+ with no NS5-brane at the intersection, see section <ref>.
Similarly, for the O5^--plane the bifundamental matter is projected down to a . For the O5^- (equivalently an O5^--plane with one stuck D5-brane), in addition to the we get an additional full hypermultiplet flavour in the fundamental. We get a full, rather than a half, hypermultiplet because in this case the D3-branes are split in two halves by the NS5-branes, and the orientifold swaps the D5-D3 open strings for the two halves, in contrast with the previous section, where there was a single D3-D5 open string sector which was mapped to itself under the quotient, enforcing the orientifold projection down to a half-hypermultiplet. Of course this matches the group theoretical fact that the fundamental of (n), even when charged under the D5-brane O(1)≃_2, is not a pseudoreal representation, so it is not possible to have half-hypermultiplets. For the O5^+ we get a variant of the result of the O5^+, namely matter in the plus a singlet.
In conclusion, the brane configurations are therefore very similar to those considered in previous sections, with the only novelty of the new object in the game. We now turn to a quick discussion of their role in the 1-form symmetries of the system, and then turn to the discussion of the supergravity dual, and the SymTFT picture it produces.
§.§.§ The 1-form symmetries
In this section we quickly carry out the discussion of the impact of the O5-plane orientifold quotient on the 1-form symmetries of the system. Recall that the configurations with one stuck NS5-brane in section <ref> or without it in section <ref> are related by a Higgs mechanism by giving vevs to the localised hypermultiplets in the former. Thus, since 1-form symmetries are insensitive to vevs for local operators, the results are essentially identical and will depend mainly on the kind of O5-plane in the configuration. We can then study the 1-form symmetries in either of the two pictures, or carry out the discussion simultaneously.
The O5^--plane configurations and 1-form symmetries
Let us start discussing the configurations with an O5^--plane. In
the configuration with no stuck NS5-brane, we have seen in section
<ref> that the n D3-branes intersecting the
O5^--plane have boundary condition projecting the vector multiplets
down to (n) and the hypermultiplets down to the
. Considering the possible global structures of an (n)
theory[This can be made more precise by considering the
semi-infinite D3-branes to end on some faraway NS5-branes, so that
it describes a 3d genuine (n) theory. We will implicitly work
similarly in the discussion of 1-form symmetry in forthcoming
examples.]<cit.>, we can have the (n) or
((n)/_2)_± theories. The former admits Wilson lines in the
fundamental representation, while the latter do not (and admit
magnetic or dyonic lines, respectively for the ± choices). In
short, we have potential _2 electric or magnetic (or the dyonic
diagonal combination) 1-form symmetries, which are jointly described
by _2-valued 2-form background fields C_2, B_2. We will
recover this picture in the SymTFT arising from the holographic dual
in section <ref>.
We recover the same result for the configuration with one stuck NS5-brane, in which we have an (n) theory with a localised hypermultiplet in the , recall section <ref>. For simplicity, we start with the case of even n. The presence of the dynamical hypermultiplet implies that the potentially present electric 1-form symmetry of the (n) theory is broken to at most _2. Therefore the global structure of the gauge theory is either SU(n) with a _2 electric 1-form symmetry, admitting _2-valued Wilson lines, or (SU(n)/_n/2)_0,1 in two variants, admitting magnetic or dyonic line operators instead. This matches the picture discussed above.
With one stuck NS5-brane, it is possible to have odd n. This is compatible with the Higgsing realised by moving the NS5-brane off the D3-brane stack because one of the D3-branes is dragged along with the NS5-brane; equivalently, the Higgsing with a hypermultiplet in the for odd n necessarily has one vev entry equal to zero. Hence, only in the limit of infinite separation do we really connect with the case with no stuck NS5-brane and even n.
For finite separation, the presence of this extra D3-brane may modify the structure of symmetries in the theory. In fact, it is possible to tensor the dynamical matter in the to build dynamical objects in the fundamental representation, so that no leftover 1-form symmetry is present.
The O5^+-plane configurations and the non-BPS spinor
Let us now consider the configurations with the O5^+-plane. In principle one could expect this case to be fairly similar to the above, but there are two main (and related) differences. The first is that the orientifold projection on a set of n D3-branes intersecting the O5^+-plane leads to an (n) symmetry, whose pattern of possible global structures is more involved than for (n), due to the role of line operators in the spinor representations. The second is that, as we discuss next, on top of matter in two-index tensor representations, the O5^+-plane supports new dynamical degrees of freedom, precisely in the spinor representation. In the following we discuss the appearance of these spinor states.
Building on the characterisation in <cit.> of D-brane charges via K-theory, <cit.> proposed the
classification of D-brane charges localised on Op-planes based on equivariant K-theory on ^p+1× (^9-p/Ω I_9-p), where I_9-p flips the coordinates of ^9-p. In particular, localised (d+1)-dimensional branes on the (p+1)-dimensional Op^±-planes are classified by the real or symplectic K-theory groups KR(^9-p, ^p-d) and KH(^9-p, ^p-d). These are easily obtained by the isomorphisms
KR(^9-p, ^p-d)≃ KO(^2p-d-1) , KH(^9-p, ^p-d)≃ KO(^2p-d+3) .
In particular, upon using mod-8 Bott periodicity of KO groups, for an O5^+-plane the relevant K-theory groups are KO(^5-d). The non-trivial groups are
KO(^0)=KO(^4)=ℤ , KO(^1)= KO(^2)=ℤ_2 ,
which describe the BPS D5- and D1-brane, and a _2-charged non-BPS 4-brane and a 3-brane.
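For concreteness, a brief tally of the dimensions (our bookkeeping, using the groups KO(^5-d) quoted above for a localised (d+1)-dimensional brane):
d=5: KO(^0)=ℤ (the BPS D5-brane) , d=1: KO(^4)=ℤ (the BPS D1-brane) ,
d=4: KO(^1)=ℤ_2 (the non-BPS 4-brane) , d=3: KO(^2)=ℤ_2 (the non-BPS 3-brane of interest below) .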
We are interested in the 3-brane, for which we now provide a
microscopic construction. To describe a _2 charged 3-brane in the
directions 0 789 (hence along the O5^+-plane), we consider a
D7-brane along 03 456 789 and at the origin in the directions 12, and
its orientifold image, namely an anti-D7-brane (which we denote by
D7) spanning 03 456 789 and also at the origin in the
directions 12. Away from the O5^+-plane they can both annihilate via
tachyon condensation, but the tachyon is odd under the projection and
must vanish at the O5^+-plane location, leading to a localised
3-brane charge. A simple way to check the above picture is to
(formally) perform T-duality along the directions 3456, so that the
O5^+-plane maps to an O9^+-plane, and the D7-D7 pair
maps into a D3-D3 pair, whose tachyon is projected out,
so it provides a stable non-BPS state. This non-BPS D3-brane in the
non-supersymmetric (32) theory in <cit.> was
identified in <cit.> with the class KSp(^6)=_2
and built explicitly with precisely the above microscopic
construction.
Let us now introduce a stack of n D3-branes intersecting the O5^+-plane. Using the above microscopic construction, it is easy to see that the open strings between the D7-D7 and the n D3-branes have 8 DN directions, and lead to fermion zero modes in the fundamental representation, so the resulting state transforms in the spinor representation of the (n) in the n D3-branes. We also observe that for even n one in fact gets spinors of both chiralities; this is in contrast with other instances of non-BPS spinor states (such as the type D0-brane) for which a worldvolume _2 gauge symmetry enforces a projection eliminating one of the chiralities <cit.>.
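A hedged sketch of the standard zero-mode counting underlying this statement (conventions ours): each D3-brane contributes one real fermionic zero mode λ^i, i=1,…,n, from the 8 DN open strings, with anticommutators
{λ^i,λ^j}=δ^ij ,
so quantising them yields a Clifford-algebra module of dimension 2^⌊ n/2⌋, i.e. the ground states furnish the spinor representation of the (n) symmetry on the D3-branes; in the absence of an extra worldvolume projection both chiralities appear for even n, as noted above.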
One may worry that the state is higher-dimensional, as it additionally extends in the directions 456 789, and would seem to play no role in the discussion of symmetries of the 4d theory. However the relevant point is that it has a 1d intersection with the D3-branes, so it effectively defines a line operator in the spinor representation for most purposes[In the holographic dual in section <ref> and its SymTFT sector in section <ref> this will be more manifest, as the additional dimensions are actually compact in the near horizon geometry of the D3-branes.]. In particular, the global structure of the symmetry group must be compatible with the existence of objects transforming in the spinor representation. In addition, one may consider configurations where the non-BPS brane is compactified in the extra directions 789, and form the analogue of dynamical finite energy brane-antibrane pairs, which can be nucleated and break line operators in the spinor representation. It is in this sense that we refer to them as spinor states.
The presence of these spinor states will be key in our identification below of the 1-form symmetries of this system when regarded as a boundary configuration of the 4d theory. Incidentally, we point out that such spinor states are absent for the O5^- configurations, so that our identification of the 1-form symmetry structure in the earlier discussion is not modified.
The O5^+-plane configurations and 1-form symmetries: odd n
Consider now the configurations with the O5^+-plane, for which n may be even or odd. We start with the case of n odd and look first at the case with no stuck NS5-brane. As shown in section <ref>, the n D3-branes intersecting the O5^+-plane have boundary conditions projecting the vector multiplets down to (n) and the hypermultiplets down to . For odd n, the center of (n) is _2, with the _2 charge carried by the spinor representation. The possible global structures are Spin(n), admitting spinor Wilson line operators, and SO(n)_±, without them but admitting monopole or dyon line operators, respectively. Because of the existence of the non-BPS spinor discussed above, we are led to a Spin(n) global structure. This would seemingly lead to a _2 electric 1-form symmetry, which would be problematic, because the bulk (n) has no _2 subgroup of its center _n for odd n. Happily, the fact that the spinor is dynamical (i.e. can be used to break line operators in the spinor representation) saves the day, because it breaks the _2 1-form symmetry. Hence, there is no left-over 1-form symmetry in this case.
If there is one stuck NS5-brane, recalling section <ref>, the boundary conditions preserve an (n) symmetry but there is a localised hypermultiplet in the . Hence, it would seem that there is a potential _2 electric 1-form symmetry, with the _2 charge carried by line operators in the fundamental representation. This would however be at odds with the Higgsing to the (n) theory, where the _2 1-form symmetry is ultimately broken by the spinor state. However, since the latter is a stable state, it must survive in the unHiggsing to the (n) theory and be present even when the NS5-brane sits on top of the O5-plane. In this situation, although the presence of the NS5-brane prevents a fully microscopic description, it is possible to guess its (n) quantum numbers. In particular, two of them can be combined into a fundamental representation of (n) (just as two spinors can combine into a vector in the (n) subgroup), hence the _2 1-form symmetry is broken, matching the above result.
The O5^+-plane configurations and 1-form symmetries:
even n
We now quickly consider the case of even n, starting with the configuration with no stuck NS5-brane. As shown in section <ref>, the n D3-branes intersecting the O5^+-plane have boundary condition projecting the vector multiplets down to (n) and the hypermultiplets down to . The center C of (n) is _4 for n=4k+2 or _2×_2 for n=4k, so the global structures are either Spin(n) or Spin(n)/H with H a non-trivial subgroup of C. The former is the only structure admitting spinors of both chiralities, hence is the appropriate global structure in our case. The _4 or _2×_2 electric 1-form symmetry is however broken by the dynamical presence of the spinors themselves, so no non-trivial 1-form symmetry remains.
In the configuration with a stuck NS5-brane, as discussed in section <ref>, we have (n) boundary conditions with a localised hypermultiplet in the . As in the case of odd n, the would-be present _2 1-form symmetry is broken by the _2-charged non-BPS states, so no 1-form symmetry is preserved in this case.
The O5^±-plane configurations
Let us now consider the configurations with the O5^--plane, which can be regarded as the previous ones with the addition of one stuck D5-brane on top of the O5-plane. Clearly, in the presence of the extra fundamental flavours from the D3-D5 open string sector, the electric 1-form symmetry is completely broken. For the case of the O5^+-plane, we have a situation similar to the O5^+-plane, and we will not discuss it further.
§.§ Holographic dual
In this section we describe the gravitational dual of the configurations of 4d =4(N) SYM on half-space with orientifold boundary conditions of the kind considered in section <ref>. The discussion of the corresponding supergravity solutions requires a slight generalisation beyond those in section <ref>, as we explain next.
§.§.§ Supergravity solutions with multiple AdS_5×^5 asymptotic regions
The supergravity solutions considered in <cit.> (see also <cit.> for further works) are actually the most general compatible with SO(2,3)× SO(3)× SO(3) symmetry and 16 supersymmetries. In fact, they describe the near horizon solution of stacks of D3-branes with general Hanany-Witten configurations of NS5- and D5-branes. These include configurations defining BCFT_3 boundary conditions for 4d =4(N) SYM, but also configurations in which several 4d =4(N_i) SYM sectors are separated by configurations of NS5- and D5-branes (with extra D3-branes suspended among them). We quickly review their basic structure, emphasising only the points necessary for the generalisation we need, and refer the reader to the literature for further details.
The corresponding gravity duals are given by
AdS_4×^2_1×^2_2 fibered over a Riemann surface,
which can be conveniently described as a disk. Its boundary is divided
into segments of two kinds, in which either ^2_1 or ^2_2
shrinks. Segments of different kinds are separated by punctures,
which in general can correspond to asymptotic AdS_5×^5
regions, where the ^5 is realised by fibering
_1^2×_2^2 over an arc in the Riemann surface with one
endpoint in each of the two boundary segments, as we discussed in the
example in figure <ref>. We denote by N_i the RR 5-form
flux on the ^5 in the i^th puncture. In case this flux is
zero, the geometry at this point is actually smooth, and describes the
shrinking of ^5 to zero size, as in the case of the origin in
figure <ref>.
Within each segment, there may be additional punctures, which describe NS5- or D5-branes, according to the kind of segment on which it is located. These punctures describe local regions with geometry AdS_4×^2×^3, where the ^2×^3 is obtained by fibering _1^2×_2^2 over an arc in the Riemann surface with the two endpoints on both sides of the puncture in the boundary segment, as we discussed in the example in figure <ref>. There is an NSNS or RR 3-form flux over ^3 encoding the 5-brane charge, and an integral of the 2-form field (of the opposite kind) over ^2, encoding the 5-brane linking number (or worldvolume monopole charge). We note that the argument below (<ref>) applies also here, essentially relating the integer value of the 2-form field background to the integer value of the RR 5-form F̃_5 flux on ^2×^3.
A particular class is that of solutions with only one asymptotic AdS_5×^5 region, which we have studied in earlier sections. In this case, in our discussion the disk of the Riemann surface was distorted into the quadrant in figure <ref>, with the boundary corresponding to the horizontal and vertical semi-infinite lines, plus the point at infinity. The two axes correspond to segments of the two kinds, and they are separated by a puncture at the origin, with no 5-form flux on the ^5, hence describing a smooth point, and by a puncture at the point at infinity, with N units of 5-form flux, describing the asymptotic AdS_5×^5 dual to 4d (N) SYM theory on half-space. Along each segment/axis, there are 5-brane punctures which describe the BCFT_3 and which have been the focus of most of our study.
The previous paragraphs show that most of the ingredients of the supergravity solution in the general case are essentially already present in the case of an ETW configuration with a single AdS_5×^5 asymptotic region. Hence, most of our discussion of the SymTFT structure of the topological sector of the gravitational background extend to the general case.
The general structure is that of a collection of SymTrees, analogous to those introduced in <cit.> to discuss compactifications with several sectors associated to isolated singular points. In our case, we have a set of SymTFT_N_i's associated to the AdS_5×^5 asymptotic regions, which are separated by U(1) junction theories connecting them with the different SymTFT_^2×^3's associated to the 5-branes. We also have a retracted SymTree picture in which the SymTFT_^2×^3's are collapsed onto the junction theories to yield the explicit 5-brane probe theories. The coefficients of the topological couplings (<ref>) jump across the junctions, as dictated by the induced D3-brane charge on the 5-branes in the retracted SymTree picture, or equivalently by the coefficient of the sector (<ref>) of the corresponding SymTFT_^2×^3 in the full SymTree picture. We will not consider this general setup any further, but simply adapt it to the class of solutions necessary to describe the gravity dual of the orientifold boundary conditions in section <ref>, and to extract their Symmetry Theories.
§.§.§ Orientifold ETW configurations
The general class of solutions in the previous section includes the gravitational duals of the orientifolded brane configurations in section <ref>, when described in the double cover, as we explain in this section.
The appropriate class of solutions is that with two identical asymptotic AdS_5×^5 regions, with the same number N of RR 5-form flux units (and same value of other fields, e.g. the 10d axio-dilaton), and an intermediate region with a similarly _2 symmetric distribution of NS5- and D5-branes. For convenience we depict the Riemann surface Σ as a strip, parametrised by a coordinate z with Im z∈ [0,π/2], with the two points at infinity Re z→±∞ corresponding to the two asymptotic AdS_5×^5 regions, and with NS5-brane punctures (resp. D5-brane punctures) in the lower (resp. upper) boundary[The quadrant in the ETW configurations in the previous sections can be described as a strip by simply taking w=-e^-z, see (<ref>) later. The point w=0, i.e. z→∞, corresponds to a closed off puncture, with vanishing 5-form flux, hence no actual asymptotic AdS_5×^5 region at that point <cit.>.]. This strip corresponds to the disk mentioned in section <ref>, with the two boundaries of the strip corresponding to the two kinds of segment in the disk boundary, and the two points at infinity corresponding to punctures describing two asymptotic AdS_5×^5.
The geometry is of the kind explained in section <ref>, with metric (<ref>) and functions (<ref>) defined in terms of h_1,h_2 given by (see e.g. <cit.> for explicit expressions of this form)
h_1 = [ -iα̃sinh(z-β̃) - ∑_b γ̃_b log(tanh( iπ/4- (z-δ̃_b)/2) )] + c.c.
h_2 = [ αcosh(z-β) - ∑_a γ_a log(tanh((z-δ_a)/2) )] + c.c.
These are analogous to (<ref>), but in terms of the strip coordinate z instead of the quadrant coordinate w, and with a slightly different asymptotic behaviour to include the second AdS_5×^5 region. Specifically, the expressions are related by
w=-e^-z , γ_a≡ 2 d_a , γ̃_b≡ 2d̃_b , k_a≡ e^-δ_a , l_b≡ e^-δ̃_b
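As a quick hedged check of this map (orientation conventions ours), writing z=x+iy with y∈[0,π/2]:
y=0: w=-e^-x (negative real semi-axis) , y=π/2: w=ie^-x (positive imaginary semi-axis) ,
Re z→ +∞ ↔ w→ 0 , Re z→ -∞ ↔ |w|→∞ ,
so the two strip boundaries map to the two semi-infinite boundary lines of the quadrant, the closed-off puncture of the footnote above sits at w=0, and the 5-brane punctures at z=δ_a and z=δ̃_b+iπ/2 sit at distances |w|=k_a and |w|=l_b from the origin.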
The configuration is quotiented by Ω R, with R being inherited[This description of the orientifolding ignores the backreaction of the O5-plane on the geometry. The latter could be accounted for (at least slightly far away from the O5-plane location) by introducing an additional D5-brane source in the expressions of h_1,h_2. In our discussion we will ignore this backreaction, but its topological content will be included in our discussion of the SymTFT in section <ref>.] from its flat space avatar R:(x^3,x^4,x^5,x^6)→ (-x^3,-x^4,-x^5,-x^6). It can be expressed in terms of the ^5 in the internal space, by using its embedding in ^6 as
(x^4)^2+(x^5)^2+(x^6)^2+(x^7)^2+(x^8)^2+(x^9)^2=r^2 .
The fixed point set is hence AdS_4×_1^2. In the AdS_4×_1^2×_2^2 fibration over the strip, it acts as antipodal identification on _2^2 together with a reflection z→ -z̄ on the complex coordinate on the strip. The fixed point set in the strip corresponds to the segment in the imaginary axis, but the only fixed point set in the whole geometry corresponds to the _1^2 sitting at the upper endpoint of this segment (times AdS_4). We thus have an O5-plane on AdS_4×_1^2, and located at z=iπ/2 in the strip; this is precisely the geometry required to preserve the same supersymmetries as the D5-branes.
In order to admit this orientifold quotient, the 5-brane configurations have to be distributed on the corresponding boundaries of Σ symmetrically under this reflection with respect to the imaginary axis, see figure <ref>. For notational convenience, let us label the 5-brane stacks with indices a and b taking both positive and negative values, with the a^th and b^th stacks being orientifold images of the -a^th and -b^th (in case there are 5-branes on top of the O5-plane, for instance for the O5^--plane, we simply include them by using a label a=0 or b=0). Hence, for (<ref>) to be symmetric under z→ -z̄, we must have
δ_a=-δ_-a , δ̃_b=-δ̃_-b , γ_a=γ_-a , γ̃_b=γ̃_-b ,
as well as β=0, β̃=0. Namely, the positions and multiplicities of 5-branes on the different stacks must respect the symmetry. In particular, in our ETW brane notation in previous sections, n_-a=n_a, m_-b=m_b.
This also ensures, although we will not be explicit about it, that the NSNS 2-form and RR 2- and 4-form backgrounds also respect the symmetry, implying that the assignment of 5-form flux on the asymptotic ^5's and the ^2×^3 on the 5-brane throats are _2 symmetric. Finally, noting that the NSNS 2-form is intrinsically odd under the orientifold action, and recalling (<ref>) we have L̃_-b=-L̃_b; on the other hand, the RR 2-form is intrinsically even under the orientifold, but the orientation of _1^2 flips, so recalling (<ref>) we have K_-a=-K_a. In case there are 5-branes on top of the O5-plane, we simply have K_0=L_0=0, in agreement with the fact that there is no flux jump due to the _2 symmetry.
From the perspective of the configuration after the quotient, we have a configuration that away from the location of the orientifold plane has the same structure as the solutions in section <ref>. The main new ingredient is thus the presence of the orientifold plane and its action on local fields and localised degrees of freedom. This will also be the only main new ingredient in our study of the SymTFT.
§.§ The SymTFT
§.§.§ General structure
The most efficient way to extract the structure of the Symmetry Theory in this class of models is to describe the supergravity solutions in the previous section in Poincaré coordinates of the 5d solution. As shown in figure <ref>, in the covering space we have a full 4d Poincaré holographic boundary, with a _2 symmetric fan of wedges separated by 5-branes spanning AdS_4 slices, and the O5-plane (possibly with 5-branes on top of it) sitting in the AdS_4 orthogonal to the 4d holographic boundary. As in previous sections, the Symmetry Theory picture can be extracted by taking the 4d holographic boundary to describe the 4d physical boundary, and the asymptotic infinity in the Poincaré radial direction to describe the topological boundary. Also, although we have described the configuration in the covering space, we can make the orientifold identification manifest, by simply folding the configuration along the vertical axis, and reach a configuration exhibiting an ETW boundary, which corresponds to the O5-plane (possibly with 5-branes).
The picture clearly shows that the structure of the Symmetry Theory is a variant of the SymTFT Fan encountered in previous sections. In fact, away from the O5-plane it is locally exactly of that kind. Namely, we have a SymTFT Fan built by putting together wedges containing branches of SymTFTs of the kind (<ref>) for different values of their coefficients, joined via junction theories to SymTFT_^2×^3's corresponding to the 5-branes, with fluxes dictated by flux conservation.
New features are however encountered in the local region around the O5-plane, i.e. the ETW boundary. The first is that the 5-form flux does not go to zero at the location of the O5-plane; rather, we have a non-zero number n of flux quanta, and correspondingly the ^5 does not shrink to zero size. Hence, spacetime ends due to the presence of the O5-plane; this is similar to how type IIA theory can end on O8-planes, as at each boundary of type I' theory. The second is the presence of a new object, the O5-plane itself, whose role we discuss in the following.
In order to understand the role of the O5-plane in the Symmetry Theory, the presence of additional 5-branes in the SymTFT Fan away from it is not relevant, so we may simply focus on the configuration in a small neighbourhood around the O5-plane. This sector of the Symmetry Theory in the covering space is therefore a 5d SymTFT_n spanning the whole non-compact 4d spacetime, times the interval between the physical and topological boundaries, but cut in two halves by an O5-plane (possibly with 5-branes on top). The integer n is the amount of flux remaining after the asymptotic value N has been peeled off by the 5-branes away from the O5-plane.
The above picture describes the retracted version of the full Symmetry Theory. A more detailed picture is obtained by activating the full SymTree structure, considering the two orientifold images of the SymTFT_n joining at a junction theory with the SymTFT_^2×^3's associated to the O5-plane (and the possible D5-branes on top of it). One may think that in the presence of an NS5-brane stuck on the O5-plane there should be an additional SymTFT_^2×^3 branch in the SymTree. However, the linking number of such an NS5-brane is zero, as discussed above; equivalently, the integral of the RR 2-form over the ^2 vanishes, and so does the total 5-form flux over the corresponding ^2×^3. Therefore this would-be SymTFT_^2×^3 is actually trivial. This is the SymTFT manifestation of the fact that the theories with a stuck NS5-brane can be Higgsed to those without it, with no modification of their 1-form symmetry structure.
These ingredients will be important in the discussion of the 1-form symmetries, to which we turn next.
§.§.§ The 1-form symmetries in the SymTFT
In this section we carry out the discussion of the 1-form symmetries in the SymTFT of the different O5-plane configurations. The discussion will parallel, and match, the discussion in section <ref>. For simplicity we restrict to configurations without stuck NS5-brane, since they simply add a topologically trivial sector to the SymTFT structure, as we have just explained.
The O5^--plane configuration and 1-form symmetries
Consider the configuration with an O5^--plane, hence we need n even. In the configuration with no stuck NS5-brane, we have two SymTFT_n theories joining at a junction with the O5^--plane. The latter may be regarded as an extra branch corresponding to a SymTFT_^2×^3 with -2 units of 5-form flux. The relevant observation is that, because gcd(n,2)=2, there is an unbroken _2 in the full theory. This is precisely the structure required to match the unbroken _2 1-form symmetry discussed in section <ref>.
One may argue that the factor of 2 arises because we are working in
the covering space, and wonder about a possible intrinsic description
in the orientifold quotient. Indeed, such a description is obtained by
simply realising that the orientifold makes some of the 2-form fields
_2-valued. For B_2, this simply follows from the fact that it
is odd under the orientifold action, hence it is projected down to a
_2-valued gauge field. For C_2, this can be seen by noticing
that two stuck D1 strings can become orientifold images of each other
and move away from the orientifold plane, showing the field under
which they are charged is _2-valued.
The O5^+-plane configuration and 1-form symmetries: odd n
Consider now the O5^+-plane configuration, starting with the case of an odd number n of RR 5-form flux units in the bulk SymTFT. In this case the SymTree connects two SymTFT_n theories with the SymTFT_^2×^3 associated to the O5-plane, which contains a term of the form (<ref>) with coefficient +2 in the covering space. Since now gcd(n,2)=1, there is no left-over 1-form symmetry, in agreement with the field theory analysis in section <ref>.
From the perspective of the theory in the quotient, the action of the
O5-plane on the 2-form fields is as in the case of the O5^--plane
above, hence it would seem that there is a _2 symmetry. However,
for odd n this is broken by the baryonic vertex of the SymTFT_n
theories. Alternatively, there is a more intrinsic way, from the
perspective of the orientifold configuration, to explain the breaking of the _2
symmetry. This is given by a vertex corresponding to the gravitational
realisation of the (n) spinor introduced in section
<ref>. The microscopic construction is the
near-horizon version of the flat-space construction mentioned there,
i.e. we consider a D7-D7 pair spanning a 3d subspace
along the O5-plane volume and wrapped on the RP^3×^2
around the O5-plane (with a nontrivial _2 Wilson line exchanging the
two objects along the non-trivial 1-cycle H_1( RP^3,)=_2, to
account for the orientifold action). This describes a spinor state in
the theory, which breaks the above _2 symmetry as explained in
the field theory approach.
The O5^+-plane configuration and 1-form symmetries: even n
Consider now the case of an O5^+-plane with an even number n of RR 5-form flux units in the bulk SymTFT. In this case we have a SymTree with an unbroken _2 symmetry because gcd(n,2)=2. This is however broken by the presence of the spinor mentioned above, so no non-trivial 1-form symmetry remains, in agreement with the field theory analysis.
The Õ5^±-plane configurations
Let us quickly consider the Õ5^±-plane configurations. The Õ5^--plane corresponds to the O5^--plane with an extra D5-brane on top, hence its charge is +1 in the covering space. This means that, even if n is even, there is no _2 symmetry preserved in the SymTree because gcd(n,1)=1. This agrees with the field theory description. Equivalently, from the perspective of the orientifold quotient theory, the projection on 2-form fields is as in the O5^--plane, so one might think that there is a _2 symmetry. However, the electric 1-form symmetry is broken by the presence of explicit flavours in the fundamental due to the stuck D5-brane, in agreement with the above result. Either way, there is no non-trivial 1-form symmetry in this case.
For the case of the Õ5^+-plane, we have a situation similar to the O5^+-plane, and we will not discuss it further.
§ CONCLUSIONS
In this work we have disclosed the structure of the 5d SymTFT of 4d
𝒩=4 su(N) SYM on a space with boundary coupled to a
Gaiotto-Witten BCFT_3, by extracting the key topological information
from the gravitational dual of configurations of D3-branes ending on a
system of NS5- and D5-branes. The Symmetry Theory turns out to display
an extremely rich structure, consisting of a fan of SymTFTs
(<ref>) with different levels, coupled via (1) junction
theories to the SymTFTs of the 5-branes, in a SymTree-like
structure. The SymTFT Fan realises 0-form flavour symmetries and their
enhancement, and allows for the identification of unbroken 1-form
symmetries of the SYM theory with boundaries.
We have performed a systematic characterisation of the physical
effects undergone by different topological operators when moved across
different SymTFTs of the Symmetry Theory, by using the holographic
string theory realisation, truncated at the topological level. Subtle
field theory effects, such as the variation of the order of the
discrete symmetries, are very simply realised in terms of familiar
brane dynamics, such as Hanany-Witten brane creation effects and
Freed-Witten consistency conditions.
We have further discussed the introduction of 7-branes to provide
boundary conditions for the 5-brane theories, and studied their role
as duality interfaces of the physical gauge theory, implementing
SL(2,ℤ) duality transformations on topological operators moved
across the 7-brane branch cuts.
We have carried out a similar analysis for orientifold boundary conditions. By extracting the topological information of configurations including the different kinds of O5-planes in the gravitational duals, we have shown that the corresponding Symmetry Theories are obtained by enriching the SymTFT Fan with the local physics of the O5-plane neighbourhood. We have used string theory techniques to characterise the structure of 1-form symmetries for different kinds of orientifold boundary conditions.
Our work opens several interesting directions, some of which are:
* The use of 5-brane configurations to define boundary conditions allows one to construct holographic dual pairs for boundaries of 4d theories with less supersymmetry. We expect our techniques to be helpful in extracting the corresponding SymTFTs for such constructions. For instance, as discussed in <cit.>, the holographic duals of 𝒩=3 S-fold <cit.> or 𝒩=2 orbifold theories with boundaries can be obtained by considering quotients of the parent ETW configurations with symmetric sets of 5-branes. It is straightforward to apply our techniques to formulate the SymTFT Fans for these configurations, and it would be interesting to explore possible new features arising from the quotients.
* It is possible to construct 5-brane configurations with smaller supersymmetry but still providing BCFT_3 boundary conditions for the 4d su(N) SYM theories, for instance by using rotated configurations of NS5- and D5-branes preserving 4 supersymmetries <cit.>. Such BCFT_3 5-brane configurations were considered in <cit.> and share many features of those in <cit.>. Hence, although they do not have a known supergravity dual, our techniques may well allow for the construction of their corresponding SymTFTs, hopefully uncovering new classes of SymTFT Fans, thus enlarging the classification of symmetry structures for supersymmetric boundary conditions of 4d su(N) SYM theories.
* Several features of the Symmetry Theories we have considered seem related to the fact that we are dealing with conformal boundary conditions for a conformal field theory. In particular, the fan-like structure of junction theories in the holographic dual is directly related to conformal invariance. This suggests that the Symmetry Theories of general conformal field theories with conformal boundary conditions are given in terms of a suitable SymTFT Fan combining wedges of bulk 5d SymTFTs and flavour SymTFTs into a local SymTree structure. It would be interesting to explore a general formulation of such SymTFT Fans for other classes of CFTs.
* The characterisation of the topological sector of supergravity backgrounds with ETW branes is also relevant from the perspective of the Cobordism Conjecture <cit.>, as it provides a direct link between the topological sector in the bulk and that on the ETW branes. This provides a powerful tool in the characterisation of cobordism defects in general theories of Quantum Gravity. Our work is an important first step in explicitly developing this program in the context of holographic dual pairs, and we expect it to seed further developments relevant to swampland studies.
We hope to come back to these and other interesting questions in the future.
We thank Roberta Angius, Andrés Collinucci, Matilda Delgado, Miguel
Montero and Xingyang Yu for helpful discussions. I.G.E. and
J.H. thank Harvard University for hospitality during part of the
development of this work. I.G.E. is partially supported by STFC grants
ST/T000708/1 and ST/X000591/1 and by the Simons Foundation
collaboration grant 888990 on Global Categorical Symmetries. The work
by J.H. and A.U. is supported through the grants CEX2020-001007-S and
PID2021-123017NB-I00, funded by MCIN/AEI/10.13039/501100011033 and by
ERDF A way of making Europe. The work by J. H. is also supported by
the FPU grant FPU20/01495 from the Spanish Ministry of Education and
Universities.
Data access statement. There is no additional research
data associated with this work.
JHEP
|
http://arxiv.org/abs/2409.03422v1 | 20240905110902 | Retrieving stellar parameters and dynamics of AGB stars with Gaia parallax measurements and CO5BOLD RHD simulations | [
"E. Béguin",
"A. Chiavassa",
"A. Ahmad",
"B. Freytag",
"S. Uttenthaler"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Lagrange, CS 34229, Nice, France
[email protected]
Theoretical Astrophysics, Department of Physics and Astronomy, Uppsala University, Box 516, SE-751 20 Uppsala, Sweden
Institute of Applied Physics, TU Wien, Wiedner Hauptstraße 8-10, 1040 Vienna, Austria
The complex dynamics of asymptotic giant branch (AGB) stars and the resulting stellar winds have a significant impact on the measurements of stellar parameters and amplify their uncertainties. Three-dimensional (3D) radiative hydrodynamic (RHD) simulations of convection suggest that convection-related structures at the surface of AGB stars affect the photocentre displacement and the parallax uncertainty measured by Gaia.
We explore the impact of the convection on the photocentre variability and aim to establish analytical laws between the photocentre displacement and stellar parameters to retrieve such parameters from the parallax uncertainty.
We used a selection of 31 RHD simulations with CO^5BOLD and the post-processing radiative transfer code Optim3D to compute intensity maps in the Gaia G band [320–1050 nm]. From these maps, we calculated the photocentre position and temporal fluctuations. We then compared the synthetic standard deviation to the parallax uncertainty of a sample of 53 Mira stars observed with Gaia.
The simulations show a displacement of the photocentre across the surface ranging from 4 to 13 % of the corresponding stellar radius, in agreement with previous studies. We provide an analytical law relating the pulsation period of the simulations and the photocentre displacement as well as the pulsation period and stellar parameters. By combining these laws, we retrieve the surface gravity, the effective temperature, and the radius for the stars in our sample.
Our analysis highlights an original procedure to retrieve stellar parameters by using both state-of-the-art 3D numerical simulations of AGB stellar convection and parallax observations of AGB stars. This will help us refine our understanding of these giants.
Gaia parallax uncertainties and RHD simulations
Retrieving stellar parameters and dynamics of AGB stars
with Gaia parallax measurements and CO^5BOLD RHD simulations
E. Béguin
1
A. Chiavassa1
A. Ahmad2
B. Freytag2
S. Uttenthaler3
Received April 4, 2024; accepted July 16, 2024
======================================================================================================================================================
§ INTRODUCTION
Low- to intermediate-mass stars (0.8-8 M_⊙) evolve into the asymptotic giant branch (AGB), in which they undergo complex dynamics characterised by several processes including convection, pulsations, and shockwaves. These processes trigger strong stellar winds <cit.> that significantly enrich the interstellar medium with various chemical elements <cit.>. These processes and stellar winds also amplify uncertainties of stellar parameter determinations with spectro-photometric techniques, like the effective temperature, which in turn impacts the determination of mass-loss rates <cit.>. In particular, Mira stars are peculiar AGB stars, showing extreme magnitude variability (larger than 2.5 mag in the visible) due to pulsations over periods of 100 to 1000 days <cit.>.
In <cit.>, 3D radiative hydrodynamics (RHD) simulations of convection computed with CO^5BOLD <cit.> reveal the AGB photosphere morphology to be made of a few large-scale, long-lived convective cells and some short-lived and small-scale structures that cause temporal fluctuations on the emerging intensity in the Gaia G band [320–1050 nm]. The authors suggest that the temporal convective-related photocentre variability should substantially impact the photometric measurements of Gaia, and thus the parallax uncertainty. In this work, we use 31 recent simulations to establish analytical laws between the photocentre displacement and the pulsation period and then between the pulsation period and stellar parameters. We combine these laws and apply them to a sample of 53 Mira stars from <cit.> to retrieve their effective stellar gravity, effective temperature, and radius thanks to their parallax uncertainty from Gaia Data Release 3[GDR3 website: https://www.cosmos.esa.int/web/gaia/data-release-3] (GDR3) <cit.>.
§ OVERVIEW OF THE RADIATIVE HYDRODYNAMICS SIMULATIONS
In this section, we present the simulations and the theoretical relations between the stellar parameters and the pulsation period. We also present how we compute the standard deviation of the photocentre displacement and its correlation with the pulsation period.
§.§ Methods
We used RHD simulations of AGB stars computed with the code CO^5BOLD <cit.>. It solves the coupled non-linear equations of compressible hydrodynamics and non-local radiative energy transfer, assuming solar abundances, which is appropriate for M-type AGB stars. The configuration is ‘star-in-a-box’, which takes into account the dynamics of the outer convective envelope and the inner atmosphere. Convection and pulsations in the stellar interior trigger shocks in the outer atmosphere, giving a direct insight into the stellar stratification. Material can levitate towards layers where it can condense into dust grains <cit.>. However, the models used in this work do not include dust.
We then post-processed a set of temporal snapshots from the RHD simulations using the radiative transfer code Optim3D <cit.>, which takes into account the Doppler shifts, partly due to convection, in order to compute intensity maps integrated over the Gaia G band [320–1050 nm]. The radiative transfer is computed using pre-tabulated extinction coefficients from MARCS models <cit.> and solar abundance tables <cit.>.
§.§ Characterising the AGB stellar grid
We used a selection of simulations from <cit.>, <cit.> [abbreviation: F17+C18], <cit.> [abbreviation: A23] and some new models [abbreviation: This work], in order to cover the 2000–10000 L_⊙ range. The updated simulation parameters are reported in Table <ref>.
In particular, 24 simulations have a stellar mass equal to 1.0 M_⊙, and seven simulations have one equal to 1.5 M_⊙. In the rest of this work, we denote by a 1.0 subscript the laws or the results obtained from the analysis of the 1.0 M_⊙ simulations; and by a 1.5 subscript those obtained from the analysis of the 1.5 M_⊙ simulations.
The pressure scale height is defined as H_p = k_B T_eff/(μ g), with k_B the Boltzmann constant, T_eff the effective surface temperature, μ the mean molecular mass, and g the local surface gravity. The lower the surface gravity is, the larger the pressure scale height becomes, and so the larger the convective cells can grow (see Fig. <ref> and <cit.>).
Moreover, <cit.> estimated that the characteristic granule size scales linearly with H_P. Interplay between large-scale convection and radial pulsations results in the formation of giant and bright convective cells at the surface. The resulting intensity asymmetries directly cause temporal and spatial fluctuations of the photocentre. The larger the cells, the larger the photocentre displacement. This results in a linear relation between the photocentre displacement and the pressure scale height <cit.>.
However, <cit.> found that this linear relation no longer holds for H_p greater than 2.24 × 10^10 cm, both for interferometric observations of red supergiant stars and for 3D simulations, suggesting that the dependence on H_p is more complex for evolved stars (Fig. 18 in the aforementioned article).
Concerning the pulsation period, <cit.> performed a fast Fourier transform on spherically averaged mass flows of the CO^5BOLD snapshots to derive the radial pulsation periods. The derived bolometric luminosity–period relation suggests good agreement between the pulsation periods obtained from the RHD simulations and from available observations. They are reported in columns 9 and 10 in Table <ref>.
<cit.> found a correlation between the pulsation period and the surface gravity (g∝M_⋆/R_⋆^2, see Eq. (4) in the aforementioned article). Thus, the pulsation period increases when the surface gravity decreases. In agreement with this study, we found a linear relation between log(P_puls) and log(g) (see Fig. <ref>). By minimising the sum of the squared residuals between the data and a linear function, as in a non-linear least-squares problem, we computed the best-fitting parameters of the linear law. We also computed the reduced χ̅^2 for the 1.0 M_⊙ simulations and for the 1.5 M_⊙ simulations: χ̅_1.0^2 = 2.2 and χ̅_1.5^2 = 0.4. The linear law found in each case is expressed as follows:
log(P_puls) = -0.84 ·log(g) + 2.14, for M_⋆ = 1.0 M_⊙
log(P_puls) = -0.78 ·log(g) + 2.20, for M_⋆ = 1.5 M_⊙
.
It is important to note that the χ̅_1.5^2 value is lower than χ̅_1.0^2 because there are only seven points to fit (i.e. seven RHD simulations at 1.5 M_⊙) and they are less scattered.
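For reproducibility, the fitting step can be written compactly as a least-squares fit of a straight line in log–log space followed by a reduced χ̅^2. The sketch below (in Python) is illustrative only: `log_x` stands for the simulated log(g) (or log(T_eff), log(R_⋆)) values, `log_p` for log(P_puls), and the treatment of the uncertainties is an assumption rather than the exact procedure used here.

import numpy as np
from scipy.optimize import curve_fit

def linear(x, a, b):
    return a * x + b

def fit_log_law(log_x, log_p, sigma=None):
    """Fit log(P_puls) = a*log(x) + b and return (a, b) and the reduced chi-square.
    If no uncertainties are supplied, unit errors are assumed for the chi-square."""
    log_x, log_p = np.asarray(log_x), np.asarray(log_p)
    popt, _ = curve_fit(linear, log_x, log_p, sigma=sigma)
    residuals = log_p - linear(log_x, *popt)
    errors = np.asarray(sigma) if sigma is not None else 1.0
    chi2_red = np.sum((residuals / errors) ** 2) / (log_p.size - 2)
    return popt, chi2_red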
We compared the pulsation period, P_puls, with the effective temperature, T_eff. <cit.> showed that the pulsation period decreases when the temperature increases. A linear correlation is confirmed with our simulations, as is displayed in Fig. <ref>. However, we do not see any clear differentiation between the law found from the 1.0 M_⊙ simulations and the law from the 1.5 M_⊙ simulations so we chose to use all simulations to infer a law between P_puls and T_eff (Eq. (<ref>) and the black curve in Fig. <ref>). We computed the reduced χ̅^2 = 77 and the parameters of the linear law are expressed as follows:
log(P_puls) = -5.92 ·log(T_eff) + 23.02
.
<cit.> found a correlation between the pulsation period and the inverse square root of the stellar mean-density (M_⋆/R_⋆^3, see Eq. (4) in the aforementioned article). In agreement with this study, we found a linear correlation between log(P_puls) and log(R_⋆), with R_⋆ the stellar radius (Fig. <ref>). We computed with the least-squares method χ̅_1.0^2 = 2.0 and χ̅_1.5^2 = 0.4 and the parameters of the linear law are expressed as follows:
log(P_puls) = 1.68 ·log(R_⋆) - 1.59, for M_⋆ = 1.0 M_⊙
log(P_puls) = 1.54 ·log(R_⋆) - 1.36, for M_⋆ = 1.5 M_⊙
.
We compared our results with the relation found by <cit.> — log(P_puls) = -2.07 + 1.94 log(R_⋆/R_⊙) - 0.9 log(M_⋆/M_⊙) — and with the fundamental mode of long-period variables, Eq. (12), found by <cit.> (see Figs. <ref> and <ref>). We used the solar metallicity and helium mass fraction from <cit.> and the reference carbon-to-oxygen ratio from <cit.>. Overall, the trends agree, but we notice that the laws we obtained are less steep than the relation from <cit.> and the fundamental mode from <cit.>, meaning that the pulsation periods of our simulations are shorter than expected.
§.§ Photocentre variability of the RHD simulations in the Gaia G band
For each intensity map computed (for example, Fig. <ref>), we calculated the position of the photocentre as the intensity-weighted mean of the x-y positions of all emitting points tiling the visible stellar surface according to
P_x = ∑ ^N _i=1∑ ^N _j=1 I(i,j) · x(i,j)/∑ ^N _i=1∑ ^N _j=1 I(i,j)
P_y = ∑ ^N _i=1∑ ^N _j=1 I(i,j) · y(i,j)/∑ ^N _i=1∑ ^N _j=1 I(i,j)
,
where I(i,j) is the emerging intensity for the grid point (i,j) with co-ordinates x(i,j),y(i,j) and N the number of points in each co-ordinate of the simulated box.
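For illustration, a minimal NumPy sketch of this computation is given below. It assumes each snapshot is available as a 2D intensity array together with coordinate grids x and y in AU; the definition of σ_P as the standard deviation of the radial photocentre position is our reading of the text, and the variable names are illustrative.

import numpy as np

def photocentre(intensity, x, y):
    """Intensity-weighted mean position (P_x, P_y) of one synthetic map."""
    total = intensity.sum()
    return (intensity * x).sum() / total, (intensity * y).sum() / total

def photocentre_statistics(maps, x, y, r_star):
    """Time-averaged photocentre <P> and its standard deviation sigma_P over a
    sequence of snapshots, in AU and as a percentage of the stellar radius."""
    pos = np.array([photocentre(m, x, y) for m in maps])   # shape (n_snapshots, 2)
    mean_px, mean_py = pos.mean(axis=0)
    mean_p = np.hypot(mean_px, mean_py)                    # <P> = sqrt(<Px>^2 + <Py>^2)
    sigma_p = np.hypot(pos[:, 0], pos[:, 1]).std()         # scatter of the radial position
    return mean_p, sigma_p, 100.0 * sigma_p / r_star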
Large-scale convective cells drag hot plasma from the core towards the surface, where it cools down and sinks <cit.>. Coupled with pulsations, this causes optical depth and brightness temporal and spatial variability, moving the photocentre position <cit.>. Thus, in the presence of brightness asymmetries, the photocentre will not coincide with the barycentre of the star. Figure <ref> displays the time variability of the photocentre position (blue star) for three snapshots of the simulation st28gm05n028. The dashed lines intersect at the geometric centre of the image.
We computed the time-averaged photocentre position, ⟨ P_x ⟩ and ⟨ P_y ⟩, for each Cartesian co-ordinate in astronomical units, [AU], the time-averaged radial photocentre position, ⟨ P ⟩ as ⟨ P ⟩ = (⟨ P_x ⟩ ^2+⟨ P_y ⟩^2)^1/2, and its standard deviation, σ_P, in AU and as a percentage of the corresponding stellar radius, R_⋆ [% of R_⋆] (see Table <ref>).
Figure <ref> displays the photocentre displacement over the total duration for the simulation st28gm05n028, with the average position as the red dot and σ_P as the red circle radius (see additional simulations in Figs. B1 to B21, available on Zenodo[https://zenodo.org/records/12802110]).
We also computed the histogram of the radial position of the photocentre for every snapshot available of every simulation (Fig. <ref>). The radial position is defined as P=(P_x^2+P_y^2)^1/2 in % of R_⋆. We notice that the photocentre is mainly situated between 0.05 and 0.15 of the stellar radius.
§.§ Correlation between the photocentre displacement and the stellar parameters
Having established the correlations between P_puls and the stellar parameters (log(g), T_eff, and R_⋆), we studied the correlation between P_puls and the photocentre displacement, σ_P, displayed in Fig. <ref>. The pulsation period increases when the surface gravity decreases (Fig. <ref>), i.e. when the stellar radius increases, and the photocentre is then displaced across larger distances <cit.>. We notice a correlation for each sub-group, which we approximated with a power law whose parameters were determined with a non-linear least-squares method (see Eqs. (<ref>) and (<ref>)):
log(P_puls) = 3.38 · x^0.12, with x = σ_P or x = σ_ϖ
log(P_puls) = 3.44 · x^0.13, with x = σ_P or x = σ_ϖ
.
The resulting reduced χ̅^2 are χ̅_1.0^2∼ 161 and χ̅_1.5^2∼ 2. As before, we note a stark difference because there are fewer 1.5 M_⊙ simulations and they are less scattered.
§ COMPARISON WITH OBSERVATIONS FROM THE GAIA MISSION
From the RHD simulations, we found analytical laws between the pulsation period and stellar parameters and between the pulsation period and the photocentre displacement, which were used to estimate the uncertainties of the results. We used these laws with the parallax uncertainty, σ_ϖ, measured by Gaia to derive the stellar parameters of observed stars.
§.§ Selection of the sample
To compare the analytical laws with observational data, the parameters of the observed stars need to match the parameters of the simulations. <cit.> investigated the interplay between mass-loss and third dredge-up (3DUP) of a sample of variable stars in the solar neighbourhood, which we further constrained to select suitable stars for our analysis by following these conditions: (i) Mira stars with an assumed solar metallicity, (ii) a luminosity, L_⋆, lower than 10000 L_⊙, and σ_L_⋆ / L_⋆ < 50 %, σ_L_⋆ being the uncertainty on the luminosity, and (iii) the GDR3 parallax uncertainty, σ_ϖ, lower than 0.14 mas. We also selected stars whose (iv) renormalised unit weight error (RUWE) is lower than 1.4 <cit.>. This operation resulted in a sample of 53 Mira stars (Table <ref>).
The pulsation periods were taken mainly from <cit.> where available, or were also collected from VizieR. Preference was given to sources with available light curves that allowed for a critical evaluation of the period, such as the All Sky Automated Survey <cit.>. Since some Miras have pulsation periods that change significantly in time <cit.>, we also analysed visual photometry from the AAVSO[https://www.aavso.org/] database and determined present-day periods with the program Period04[http://period04.net/] <cit.>.
The analysis of <cit.> provided a period variability of the order of 2.4 % of the respective pulsation period for Miras in the solar neighbourhood.
The luminosities were determined from a numerical integration under the photometric spectral energy distribution between the B-band at the short end and the IRAS 60 μ m band or, if available, the Akari 90 μ m band. A linear extrapolation to λ=0 and ν=0 was taken into account. The photometry was corrected for interstellar extinction using the map of <cit.>. We adopted the GEDR3 parallaxes and applied the average zero-point offset of quasars found by <cit.>: -21 μas.
Two main sources of uncertainty on the luminosity are the parallax uncertainty and the intrinsic variability of the stars. The uncertainty on interstellar extinction and on the parallax zero-point were neglected.
The RUWE[https://gea.esac.esa.int/archive/documentation/GDR2/, Part V, Chapter 14.1.2] is expected to be around 1.0 for sources where the single-star model provides a good fit to the astrometric observations. Following <cit.>, we rejected stars whose RUWE is above 1.4 as their astrometric solution is expected to be poorly reliable and may indicate unresolved binaries.
Figure <ref> displays the location of stars in a pulsation period-luminosity diagram with the colour scale indicating the number of good along-scan observations — that is, astrometric_n_good_obs_al data from GDR3 (top panel) — or indicating the RUWE (bottom panel).
We investigated whether the number of times a star is observed, N_obs, or the number of observed periods, N_per, is correlated with RUWE. We do not see improvements of the astrometric solution when more observations are used to compute it.
Information about the third dredge-up activity of the stars is available from spectroscopic observations of the absorption lines of technetium <cit.>. Tc-rich stars have undergone a third dredge-up event and are thus more evolved and/or more massive than Tc-poor stars. Also, since ^12C is dredged up along with Tc, the C/O ratio could be somewhat enhanced in the Tc-rich compared to the Tc-poor stars. However, as our subsequent analysis showed, we do not find significant differences between Tc-poor and Tc-rich stars with respect to their astrometric characteristics; hence, this does not impact our results.
§.§ Origins of the parallax uncertainty
The uncertainty in Gaia parallax measurements has multiple origins: (i) instrumental <cit.>, (ii) distance <cit.>, and (iii) convection-related <cit.>. This makes the error budget difficult to estimate. In particular, only with time-dependent parallaxes with Gaia Data Release 4 will the convection-related part be definitively characterised <cit.>.
Concerning the distance and instrumental parts, <cit.> investigated the bias of the parallax versus magnitude, colour, and position and developed an analytic method to correct the parallax of these biases. For comparison, the parallax and the corrected parallax are displayed in Table <ref>, columns 8 and 9, respectively.
In this work, we assume that convective-related variability is the main contributor to the parallax uncertainty budget, which is already hinted at in observations; thus, σ_P is equivalent to σ_ϖ.
Indeed, optical interferometric observations of an AGB star showed the presence of large convective cells that affected the photocentre position. It has been shown for the same star that the convection-related variability accounts for a substantial part of the Gaia Data Release 2 parallax error <cit.>.
§.§ Retrieval of the surface gravity based on the analytical laws
In Section <ref>, we established an analytical law, Eq. (<ref>), between σ_P and P_puls with the 1.0 M_⊙ simulations. We used it to calculate the pulsation period, P_1.0, of the stars from observed σ_ϖ. We defined Δ P_1.0, the relative difference between the observed pulsation period, P_obs, and our results as Δ P_1.0 = (P_obs - P_1.0)/P_obs. This intermediate step gives an estimation of the error when computing the pulsation period and comparing it with observations. This error can then be used as a guideline to estimate the uncertainties of the final results; in other words, of the effective temperature, the surface gravity, and the radius of the stars in our sample.
The top panel of Figure <ref> displays P_obs versus σ_ϖ as dots and P_1.0 versus σ_ϖ as the dashed red curve.
The bottom panel displays the histogram of the relative difference, Δ P_1.0, and the cumulative percentage of the observed sample. The same colour scale is used in both panels: for example, light yellow represents a relative difference between P_1.0 and P_obs of less than 5 %, while purple represents a relative difference greater than 60 % (column 3 in Table <ref>).
Δ P_1.0 ranges from 0.4 % to 72 %, with a median of 16 % (i.e. from 1 to 68 days' difference). For 85 % of the sample stars, Δ P_1.0 is ≤ 30 %, and for 57 %, it is ≤ 20 %, suggesting we statistically have a good agreement between our model results and the observations.
We then combined Eqs. (<ref>) and (<ref>) to derive the surface gravity (log(g_1.0)) directly from σ_ϖ (column 6 in Table <ref>). The top left panel of Figure <ref> displays log(P_obs) versus log(g_1.0) and Eq. (<ref>) as the dashed red curve. We notice that the calculated log(g_1.0) values follow the same trend as log(g) from the simulations and are the most accurate when closest to the line.
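For illustration, the full retrieval chain for the 1.0 M_⊙ laws can be written in a few lines: Eq. (<ref>) maps σ_ϖ to log(P_puls), and the laws relating log(P_puls) to log(g), R_⋆, and T_eff are then inverted. The numerical coefficients are those quoted above; it is assumed that σ_ϖ is expressed in the same units as in Eq. (<ref>), and the function name is illustrative.

import numpy as np

# Coefficients of the 1.0 Msun analytical laws quoted in the text.
A_SIG, B_SIG = 3.38, 0.12    # log(P_puls) = A_SIG * sigma**B_SIG
A_G,   B_G   = -0.84, 2.14   # log(P_puls) = A_G * log(g) + B_G
A_R,   B_R   = 1.68, -1.59   # log(P_puls) = A_R * log(R) + B_R
A_T,   B_T   = -5.92, 23.02  # log(P_puls) = A_T * log(Teff) + B_T

def parameters_from_parallax_uncertainty(sigma_pi):
    """Chain the 1.0 Msun laws: sigma_pi -> P_puls [d], log g, R [Rsun], Teff [K]."""
    log_p = A_SIG * sigma_pi ** B_SIG
    log_g = (log_p - B_G) / A_G
    log_r = (log_p - B_R) / A_R
    log_t = (log_p - B_T) / A_T
    return 10.0 ** log_p, log_g, 10.0 ** log_r, 10.0 ** log_t

# Illustration: sigma_pi = 0.10 gives P ~ 366 d, log g ~ -0.50, R ~ 297 Rsun, Teff ~ 2850 K.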
We performed the same analysis with the analytical laws from the 1.5 M_⊙ simulations. The top panel of Figure <ref> displays the calculated P_1.5 versus σ_ϖ and the bottom panel displays a histogram of the relative difference between P_1.5 and P_obs defined as Δ P_1.5 (column 5 in Table <ref>). The same colour code is used in both panels. We combined Eqs. (<ref>) and (<ref>) to compute log(g_1.5). Fig. <ref> displays log(P_obs) from the simulations versus log(g_1.5).
For the 1.5 M_⊙ simulations, Δ P_1.5 ranges from 0.6 % to 62 %, with a median of 16 % (i.e. from 1 to 76 days' difference). For 81 % of the sample stars, Δ P_1.5 is ≤ 30 %, and for 70 %, it is ≤ 20 %.
Qualitatively, we observe two different analytical laws depending on the mass of the models used to infer the laws. However, a larger grid of simulations, covering the mass range of AGB stars, would help to confirm this trend, tailor the analytical laws to specific stars, and predict more precise stellar parameters.
§.§ Retrieval of the other stellar parameters
We repeated the same procedure to retrieve the stellar radius, R_1.0, and the effective temperature, T_1.0 (columns 8 and 10 in Table <ref>). Combining Eqs. (<ref>) and (<ref>), we computed the radius, R_1.0. The top central panel of Figure <ref> displays log(P_obs) versus log(R_1.0).
Combining the Eqs. (<ref>) and (<ref>), we computed the effective temperature, T_1.0 (Fig. <ref>, top right panel).
As in Section <ref> for log(g_1.0), the calculated log(R_1.0) values follow Eq. (<ref>) derived from the simulations, and the log(T_1.0) values follow Eq. (<ref>).
We repeated the same procedure for the 1.5 M_⊙ simulations to retrieve R_1.5 and T_1.5 (columns 9 and 11 in Table <ref>). The results are displayed in the bottom central and right panels of Fig. <ref>.
For comparison, R Peg has been observed with the interferometric instrument GRAVITY/VLTI in 2017, which provided a direct estimation of its radius: R_Ross = 351^+38_-31 R_⊙ <cit.>. From our study, R_1.0,RPeg = 321^+5_-15 R_⊙ and R_1.5,RPeg = 373^+5_-14 R_⊙, which is in good agreement with the results of <cit.>. With future interferometric observations, we shall be able to further validate our results.
§ SUMMARY AND CONCLUSIONS
We computed intensity maps in the Gaia band from the snapshots of 31 RHD simulations of AGB stars computed with CO^5BOLD. The standard deviation of the photocentre displacement, σ_P, due to the presence of large convective cells on the surface, ranges from about 4 % to 13 % of the corresponding stellar radius, which is consistent with previous studies and is non-negligible in photometric data analysis; it becomes the main contributor to the Gaia parallax uncertainty (σ_ϖ) budget. The dynamics and winds of AGB stars also affect the determination of stellar parameters and amplify their uncertainties. It is therefore worth exploring the correlations between all these aspects in order to eventually retrieve such parameters from the parallax uncertainty.
We provided correlations between the photocentre displacement and the pulsation period as well as between the pulsation period and stellar parameters: the effective surface gravity, log(g), the effective temperature, T_eff, and the radius, R_⋆. We separated the simulations into two sub-groups based on whether their mass is equal to 1.0 or 1.5 M_⊙. Indeed, the laws we provided, and the final results, are sensitive to the mass. A grid of simulations covering a larger range of masses, with a meaningful number of simulations for each, would help confirm this observation and establish laws that are suitable for varied stars. This will be done in the future.
We then applied these laws to a sample of 53 Mira stars matching the simulations' parameters. We first compared the pulsation period with the literature: we obtained a relative error of less than 30 % for 85 % of the stars in the sample for the first case and 81 % for the second, which indicates reasonable results from a statistical point of view. This error can then be used as a guideline to estimate the uncertainties of the final results. We then computed log(g), T_eff and R_⋆ by combining the analytical laws (Table <ref>).
While mass loss from red giant branch stars should be mainly independent of metallicity, it has been suggested that this is less true for AGB stars <cit.>. Photocentre displacement and stellar parameters may be dependent on metallicity and this question needs to be further studied.
We argue that the method used for Mira stars presented in this article, based on RHD simulations, can be generalised to any AGB stars whose luminosity is in the 2000–10000 L_⊙ range and that have a Gaia parallax uncertainty below 0.14 mas. Figure <ref> sums up the analytical laws found in this work that can be used to calculate the stellar parameters.
Overall, we have demonstrated the feasibility of retrieving stellar parameters for AGB stars using their uncertainty on the parallax, thanks to the employment of state-of-the-art 3D RHD simulations of stellar convection. The future Gaia Data Release 4 will provide time-dependent parallax measurements, allowing one to quantitatively determine the photocentre-related impact on the parallax error budget and to directly compare the convection cycle, refining our understanding of AGB dynamics.
This work is funded by the French National Research Agency (ANR) project PEPPER (ANR-20-CE31-0002). BF acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme Grant agreement No. 883867, project EXWINGS. The computations were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC). This work was granted access to the HPC resources of Observatoire de la Côte d’Azur - Mésocentre SIGAMM.
aa
§ TABLES
§ APPENDIX C: M DWARFS VERSUS AGB STARS PARALLAX UNCERTAINTY
Our key assumption is that the parallax uncertainty budget in the Mira sample is dominated by the photocentre shift due to the huge AGB convection cells. To test this assumption, one would need a comparison sample of stars with similar properties such as apparent G magnitude, distance, G_BP-G_RP colour, etc., but ideally without surface brightness inhomogeneities. M-type dwarfs could be useful for a comparison because they have a similar G_BP-G_RP colour to our Miras. Therefore, we searched the SIMBAD database for M5 dwarfs with G<15 mag, which yielded a sample of 240 objects. The list was cross-matched with the Gaia DR3 catalogue. Obvious misidentifications between SIMBAD and Gaia with G>15 were culled from the list. A Hertzsprung-Russell diagram based on M_G vs. G_BP-G_RP revealed that the sample still contained several misclassified M-type giant stars. Removing them left a sample of 99 dwarf stars that have comparable G_BP-G_RP colour to the Miras sample. However, as M dwarfs are intrinsically much fainter than Miras, the dwarf stars are much closer to the sun than the Miras: their distances vary between ∼7 and 160 pc, whereas our Mira sample stars are located between 300 and over 5000 pc from the sun. Furthermore, we noticed that a significant fraction of the dwarfs have surprisingly large parallax uncertainties. These could be related to strong magnetic fields on the surfaces of these dwarfs that are the cause of bright flares or large, dark spots, creating surface brightness variations similar to those expected in the AGB stars. A detailed investigation into the reasons for their large parallax uncertainties is beyond the scope of this paper. We therefore decided not to carry out the comparison with the M dwarfs.
Fortunately, the contaminating, misclassified (normal) M giants in the SIMBAD search turn out to be a much better comparison sample. They overlap with the Mira stars in G magnitude and are at fairly similar distances, between ∼260 and 1700 pc. The only drawbacks are that the normal M giants are somewhat bluer in G_BP-G_RP colour than the Miras, and that we found only ten suitable M giants in our limited search. Fig. <ref> illustrates the location of the M giants together with the Mira sample and the M dwarfs in an HR diagram.
Importantly, we note that the parallax uncertainties of the M giants are all smaller than those of the Miras. This is shown in Fig. <ref>, where the logarithmic value of the parallax uncertainty is plotted as a function of the logarithm of the distance (here simply taken as the inverse of the parallax). On average, the M giants have parallax uncertainties that are smaller by a factor of 3.5 than those of the Miras. As the M giants are more compact than the Miras and have smaller pressure scale heights, it is plausible that the larger parallax uncertainties of the Miras indeed result from their surface convection cells. We therefore conclude that our key assumption is correct.
|
http://arxiv.org/abs/2409.03218v1 | 20240905033239 | Application Research On Real-Time Perception Of Device Performance Status | [
"Zhe Wang",
"Zhen Wang",
"Jianwen Wu",
"Wangzhong Xiao",
"Yidong Chen",
"Zihua Feng",
"Dian Yang",
"Hongchen Liu",
"Bo Liang",
"Jiaojiao Fu"
] | cs.PF | [
"cs.PF",
"cs.LG"
] |
FairQuant: Certifying and Quantifying Fairness of Deep Neural Networks
Brian Hyeongseok Kim
University of Southern California
Los Angeles, USA
Jingbo Wang
Purdue University
West Lafayette, USA
Chao Wang
University of Southern California
Los Angeles, USA
Received 16 July 2024; accepted 04 September 2024
==============================================================================================================================================================================================================================================================
§ ABSTRACT
In order to accurately identify the performance status of mobile devices and finely adjust the user experience, a real-time performance perception evaluation method based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) combined with the entropy weighting method and time series modeling was studied. After collecting the performance characteristics of various mobile devices, the device performance profile was fitted using PCA (principal component analysis) dimensionality reduction and feature engineering methods such as descriptive time series analysis. The ability of performance features and profiles to describe the real-time performance status of devices was studied by applying the TOPSIS method with multi-level weighting. A time series model was constructed for the objectively weighted feature set, providing performance status perception at multiple sensitivities (real-time, short-term, long-term) and yielding both real-time performance evaluation data and long-term, stable performance prediction data. Finally, by configuring dynamic A/B experiments and overlaying fine-grained power-reduction strategies, the usability of the method was verified, and the accuracy of device performance status identification and prediction was compared across the method's components, including the profile features built with dimensionality reduction and time series modeling, the TOPSIS method combined with the entropy weighting method, subjective weighting, and the HMA method. The results show that accurate real-time performance perception can greatly enhance business value, demonstrating the practical effectiveness and forward-looking significance of this research.
§ INTRODUCTION
The efficient and stable operation of mobile devices plays a crucial role in the continuity and effectiveness of production and services in today's highly industrialized and automated era. The dynamic performance state of the device directly affects its work efficiency, reliability, and product quality. Therefore, accurately sensing the real-time performance state of the device and predicting its performance change trend has become a key factor in ensuring the normal operation of the device, optimizing experience strategies, and enhancing business value and production efficiency.
Because mobile devices are highly precise and intelligent, limitations such as computing resources, battery life, environmental temperature, and network conditions continue to pose a major challenge to achieving high-quality, smooth, and stable operation. Existing device performance evaluations treat all devices of the same model, with identical hardware configurations, equally. However, in differentiated usage scenarios the degree of device performance degradation varies, which limits the room for reducing the maintenance cost of device performance stability, fine-tuning performance experience strategies, and predicting performance change trends. For example, a phone purchased one year ago and used continuously for eight hours will show significantly different performance data in operation compared to a brand-new phone of the same brand, model, and production period that has just been purchased and put into use.
In order to improve the user experience of mobile devices, a new type of real-time performance perception model has been researched and developed. It is based on real-time performance indicators on mobile devices and uses feature engineering methods such as dimensionality reduction and correlation analysis to construct an objectively weighted multi-level evaluation model and a time series model to evaluate and predict device performance status. The evaluation results can serve as a basis for adjusting production and business strategies, reasonably intervening in or optimizing device performance, and achieving the best balance between the device usage experience and the real-time system performance status.
The research on the real-time performance status perception model mainly includes three parts: the extraction and construction of performance features, a weighted multi-level evaluation model, and a time series-based prediction model.
During feature construction, directly feeding in all features that affect the performance state of mobile devices tends to cause overfitting during model training, owing to the diversity and complex attribution of these widely distributed features. Conversely, brute-force reduction of multiple features can cause defects such as reduced robustness to interference, decreased sensitivity, and loss of important information. Therefore, in addition to conducting feature correlation analysis and selecting appropriate features with principal component analysis, it is also necessary to build descriptive time-series models of the performance features with temporal characteristics, and to aggregate and map them into device-profile-like features that improve the model input. In the multi-level evaluation model, the combined behaviour of the feature values of each performance module, together with the influencing factors within each module, describes the real-time performance status more comprehensively; a hierarchical evaluation model is therefore used to evaluate each performance module progressively and obtain a comprehensive performance status result. In addition, objectively determining the degree of influence of each feature on device performance, and assigning the corresponding weights in the evaluation model, is the main difficulty of real-time performance perception algorithms. The optimal and worst solutions are determined using the superiority-inferiority distance method (TOPSIS)<cit.> and ranked by distance, while the entropy of the features is calculated<cit.> to assign objective weights based on the relative degree of variation of each feature, which resolves the difficulty of multi-level performance state evaluation modeling. In the final time series prediction model, because the performance state perceived at each instant is affected by real-time changes in the feature values, the scores aggregate into a non-stationary sequence of discrete values. Accurate prediction of the device's next performance state requires smoothing the noise and short-term fluctuations in this sequence while increasing sensitivity and reducing lag. Compared with an ordinary weighted moving average, the Hull moving average (HMA)<cit.> is fast and smooth: it eliminates lag while providing a smooth curve that reflects the trend of performance state changes.
§ ARCHITECTURE
The overall framework for real-time evaluation of device performance status is implemented on the device client and is accessed through a common library by the host application used in this study, namely the video playback software Douyin. The framework and model can be reused in any application software. The main functional modules are shown in Figure <ref>.
The main functional modules include three parts:
* Encapsulation management module: provides a simple and easy-to-use interface for upper layers, realizes initialization and configurable capabilities, manages internal modules, implements thread scheduling and runtime management.
* Feature Collection Module: Implements the ability to collect local and cloud features (requiring time series aggregation and portrait label extraction) and is responsible for aggregating feature items and providing a unified read interface.
* Performance rating module: implements the logic module for real-time calculation and management of ratings, calls the algorithm engine to obtain raw real-time evaluation scores, and implements state evaluation with multi-level time-domain sensitivity, as well as storage logic and persistent management.
The event-driven service is an important mechanism within the framework: it determines when real-time evaluation scoring is triggered and when features are collected. The timing can be flexibly configured, with common trigger event points built in, and capabilities can be extended for personalized requirements by adding custom events externally (see Figure <ref>).
Events are injected into the framework at key nodes. For example, startup and playback events have globally unique string names and can carry parameters: the startup event can carry the startup duration, and the playback event can carry playback-related information. When integrating with Douyin, some common events are already built in, and the framework also provides interfaces for business partners to add custom events. Event distribution is achieved through registration, and events are dispatched to the main internal modules. The feature collection module and the performance scoring module both listen for the events that trigger their own operation. The event that triggers scoring is configurable, while the events that trigger feature collection are determined by the collection strategy, which external parties do not need to be concerned about. For example, the startup feature is collected at startup, and the CPU and memory features are collected just before scoring.
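A minimal sketch of this event-driven mechanism is shown below (in Python for brevity; the production module is implemented in the client framework). Events are identified by globally unique string names, may carry parameters, and are dispatched to the registered listeners of the feature collection and scoring modules. All class, method, and event names here are illustrative, not the framework's actual API.

from collections import defaultdict
from typing import Callable, Dict, List, Optional

class EventBus:
    """Distributes named events (with optional parameters) to registered listeners."""

    def __init__(self) -> None:
        self._listeners: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def register(self, event_name: str, listener: Callable[[dict], None]) -> None:
        """Modules subscribe to the globally unique event names they care about."""
        self._listeners[event_name].append(listener)

    def post(self, event_name: str, params: Optional[dict] = None) -> None:
        """Dispatch an event, together with its parameters, to all registered listeners."""
        for listener in self._listeners[event_name]:
            listener(params or {})

bus = EventBus()
# The scoring module listens for configurable trigger events; the feature-collection
# module samples e.g. startup, CPU, and memory features when its events fire.
bus.register("app_start", lambda p: print("collect startup features:", p))
bus.register("score_trigger", lambda p: print("collect CPU/memory features, then score"))

bus.post("app_start", {"startup_duration_ms": 850})
bus.post("score_trigger")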
§ ALGORITHM
The algorithm mainly consists of two parts: a dynamic multi-level fuzzy evaluation model of the device performance based on TOPSIS and entropy weight method, and the performance score sequence smoothed within a time window.
§.§ Real-time Performance State Evaluation Based on TOPSIS + Entropy Weight Method
The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was first proposed by C.L. Hwang and K. Yoon in 1981. It ranks a set of objects based on their proximity to the ideal values of a finite number of evaluation criteria, providing an overall assessment of the object under a certain set of criteria. TOPSIS is suitable for multi-objective decision analysis, as it eliminates dimensional influence through normalization and positive transformation of evaluation criteria, making full use of the original data set with large distribution differences and no distribution restrictions. The results accurately reflect the differences between evaluation criteria and the comprehensive impact on the evaluation object.
The algorithm process of TOPSIS is as follows:
* Positive Transformation of original data matrix:
Convert all evaluation indicators (i.e. features) to maximization type indicators, which gradually approach the optimal target value as the indicator value increases; forward normalize the original dataset matrix. The conversion functions have the following forms:
* Converting intermediate indicators to maximal indicators:
x̃_i = 1 - |x_i - x_best|/M, with M = max{|x_i - x_best|}
* Converting a minimal indicator into a maximal indicator: x̃_i = max{x_i} - x_i
* Converting interval indicators (with data {x_i} and best interval [a,b]) to maximal indicators: with M = max{a - min{x_i}, max{x_i} - b},
x̃_i = 1 - (a - x_i)/M, if x_i < a;
x̃_i = 1, if a ≤ x_i ≤ b;
x̃_i = 1 - (x_i - b)/M, if x_i > b.
* Standardization of Positive Matrices:
The dimensions and distributions of each indicator differ, so standardization calculations are performed on the positively transformed data matrix to eliminate the impact of indicator dimensions on evaluation. Assuming there are n target objects to be evaluated, and m evaluation indicators make up the positively transformed matrix:
X = [ x_11 x_12 ... x_1m; x_21 x_22 ... x_2m; ⋮ ; x_n1 x_n2 ... x_nm ]
The matrix obtained by standardizing each of its elements is denoted as Z. The standardization of each element is:
z_ij = x_ij / √(∑_i=1^n x_ij^2)
* Calculate the superiority score.
By calculating the distances between the evaluated object and the best and worst targets, the score of the evaluated object is obtained. The standardized matrix is:
Z = [ z_11 z_12 ... z_1m; z_21 z_22 ... z_2m; ⋮ ; z_n1 z_n2 ... z_nm ].
Record the combination of the optimal values of all indicators as the optimal target Z^+:
Z^+=(Z_1^+,Z_2^+,...,Z_m^+)
Z^+=(max{z_11,z_21,...,z_n1}, max{z_12,z_22,...,z_n2}, ..., max{z_1m,z_2m,...,z_nm})
Record the combination of the worst values of all indicators as the worst target Z^-:
Z^-=(Z_1^-,Z_2^-,...,Z_m^-)
Z^-=(min{z_11,z_21,...,z_n1}, min{z_12,z_22,...,z_n2}, ..., min{z_1m,z_2m,...,z_nm})
.
The distance between the ith evaluation object and the optimal target is:
D_i^+=√(∑_j=1^m (Z_j^+-z_ij)^2)
The distance between the ith evaluation object and the worst target is:
D_i^- = √(∑_j=1^m (Z_j^- - z_ij)^2)
The ith evaluation object's non-normalized score for overall performance is then obtained as
S_i = D_i^- / (D_i^+ + D_i^-)
Since 0 ≤ S_i ≤ 1, a larger value of S_i means that D_i^- is larger relative to D_i^+, i.e. the evaluated object is closer to the optimal target. A compact numerical sketch of this procedure is given below.
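The following routine condenses the steps above. It assumes an already positively transformed matrix X with n evaluated objects as rows and m indicators as columns, and accepts optional indicator weights (equal weights by default); variable and function names are illustrative.

import numpy as np
from typing import Optional

def topsis_scores(X: np.ndarray, weights: Optional[np.ndarray] = None) -> np.ndarray:
    """TOPSIS closeness scores S_i for a positively transformed matrix X
    with n evaluated objects (rows) and m indicators (columns)."""
    n, m = X.shape
    w = np.ones(m) / m if weights is None else np.asarray(weights, dtype=float)
    # Column-wise standardization: z_ij = x_ij / sqrt(sum_i x_ij^2)
    Z = X / np.sqrt((X ** 2).sum(axis=0))
    Zw = Z * w                                            # weighted standardized matrix
    z_best, z_worst = Zw.max(axis=0), Zw.min(axis=0)      # optimal / worst targets
    d_plus = np.sqrt(((Zw - z_best) ** 2).sum(axis=1))    # distance to the optimal target
    d_minus = np.sqrt(((Zw - z_worst) ** 2).sum(axis=1))  # distance to the worst target
    return d_minus / (d_plus + d_minus)                   # S_i in [0, 1]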
The above is the comprehensive evaluation score under the condition that all indicators carry the same weight. In practice, it is often necessary to weight indicators with different degrees of dispersion, and to apply PCA principal component analysis<cit.> to indicators that are difficult to obtain or that suffer from correlation and collinearity problems. In information theory, entropy describes the dispersion of an indicator and measures its uncertainty. Generally speaking, the larger the amount of information an indicator carries, the smaller its information entropy, the greater its dispersion, and the smaller its uncertainty, which can be understood as a greater impact (i.e. weight) of the indicator on the comprehensive evaluation object. Therefore, the entropy weight method can guide the determination of objective indicator weights by calculating the relative degree of variation of each indicator and its overall impact on the evaluation object.
The algorithm process of the entropy weight method is as follows:
* Standardization of Positive Indicators: Each index in the corresponding positive normalization matrix X is standardized using
x_ij' = [ (x_ij - min(x_1j,x_2j,...,x_nj)) / (max(x_1j,x_2j,...,x_nj) - min(x_1j,x_2j,...,x_nj)) ] * 100
* Calculate the proportion of the ith evaluation object under the jth indicator:
p_ij=X_ij/∑_i=1^nX_ij ,(i=1,2,...,n, j=1,2,...,m)
* The entropy value for the jth index is calculated as:
e_j=-k∑_i=1^np_ijln(p_ij)
where k > 0, k = 1/ln(n), and e_j ≥ 0;
* The divergence coefficient of the jth indicator is calculated as:
g_j=1-e_j, (j=1,2,...,m)
* Calculate the weight coefficient of the jth indicator:
w_j=g_j/∑_j=1^mg_j ,(1≤ j≤ m)
* The score for the non-normalized evaluation of the ith evaluation object is calculated as follows:
s_i=∑_j=1^mw_j*p_ij, (i=1,2,...,n)
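A compact sketch of the entropy weighting steps is given below, under the same assumption of a positively transformed matrix X; the small constant eps is added only to avoid division by zero and log(0) and is not part of the method itself.

import numpy as np

def entropy_weights(X: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Objective indicator weights w_j from the entropy weight method,
    for a positively transformed matrix X (n objects x m indicators)."""
    n, _ = X.shape
    # Min-max standardization of each indicator, scaled to [0, 100] as in the text.
    x_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + eps) * 100.0
    p = x_std / (x_std.sum(axis=0) + eps)             # p_ij
    k = 1.0 / np.log(n)
    e = -k * (p * np.log(p + eps)).sum(axis=0)        # entropy e_j of each indicator
    g = 1.0 - e                                       # divergence coefficient g_j
    return g / g.sum()                                # weights w_j

The returned weights can be passed directly to the TOPSIS routine sketched earlier, e.g. scores = topsis_scores(X, entropy_weights(X)).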
By integrating the above algorithm steps, a comprehensive evaluation score of each evaluated object within the overall set can be obtained. TOPSIS combined with the entropy weight method captures the degree of dispersion and the discriminative ability of each evaluation indicator, and from these determines the global influence coefficient (weight) of the indicator on the evaluated object. It is an objective and comprehensive weighting method with high reliability and accuracy. However, this method also has its flaws. Because of its single-pass algorithmic process, it does not consider the correlation or hierarchical relationships between indicators; in practice, guidance from business experience is needed to assist in setting indicator weights, otherwise the weights may be distorted by differences between application scenarios. In addition, both methods operate only on the evaluation indicators and depend strongly on the indicator sample data: if the sample data change dynamically, the indicator weights will fluctuate to a certain extent.
In response to the shortcomings of the algorithm plan mentioned above, we have incorporated an analysis of the correlation between indicator features in our research. Additionally, we have conducted multi-level evaluation calculations<cit.> and principal component analysis on indicators with hierarchical relationships. Furthermore, we have designed device profile features that describe the time sequence to ensure the stability of indicator sample data within a certain time window and to determine the stable and smooth indicator weights under the time dimension.
§.§ HMA Performance State Comprehensive Perception
In the research of comprehensive prediction with dynamic performance awareness, in order to provide diversified sensitivity (real-time, short-term, long-term) perception ability and assist in eliminating abnormal mutations in indicator data and evaluation scores, it is necessary to smooth the performance evaluation results. That is, in addition to real-time perception of device performance status, the average performance status of devices in different time windows should also be comprehensively evaluated.
Due to the fact that the performance indicator data of the equipment does not change smoothly within the specified time window, extreme values, null values, and other situations that require preprocessing exist, and there are also error values that are difficult to monitor and objectively distinguish. In addition, the performance of the equipment also depends on a series of objective factors such as environmental changes, usage patterns, and frequency. Therefore, in the process of performance indicator collection and performance status evaluation score calculation, there is a chance of generating noisy data or weight values and score values with large fluctuations. To compensate for such defects, in the short-term and long-term comprehensive performance status evaluation process, the study has adopted Hull Moving Average to quickly smooth the instant performance score curve within the time window and accurately fit the trend of performance status changes.
The Hull Moving Average (HMA) is a moving average proposed by Alan Hull in 2005. It uses weighted calculations that emphasize recent data changes, providing fast smoothing while largely eliminating the lag of other moving averages, minimizing short-term noise, and improving accuracy. The HMA of window n is built from the linearly weighted moving average (WMA) as HMA_n(x) = WMA_√n(2 WMA_n/2(x) - WMA_n(x)), where the inner window lengths are rounded to integers; a sketch of the calculation is given below.
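As a concrete illustration of the formula above, the following Python sketch implements the linearly weighted moving average and the Hull moving average built from it. This is a minimal reconstruction of the standard HMA recipe, not the framework's internal code; the function names and the example window length are ours.

```python
import numpy as np

def wma(x, window):
    """Linearly weighted moving average: weights 1..window, with the most
    recent point weighted highest. The first window-1 outputs are NaN."""
    x = np.asarray(x, dtype=float)
    w = np.arange(1, window + 1, dtype=float)
    out = np.full(len(x), np.nan)
    for t in range(window - 1, len(x)):
        out[t] = np.dot(x[t - window + 1:t + 1], w) / w.sum()
    return out

def hma(x, window):
    """Hull moving average: HMA_n(x) = WMA_sqrt(n)(2*WMA_(n/2)(x) - WMA_n(x))."""
    half = max(window // 2, 1)
    root = max(int(np.sqrt(window)), 1)
    return wma(2.0 * wma(x, half) - wma(x, window), root)

# e.g. smooth a 150-point real-time score sequence with a window of 16
# smoothed = hma(raw_score, 16)
```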
The moving-average comparison in Figure <ref>, computed on a 150-point sequence of real-time device performance scores (raw_acore), shows the following. The simple moving average (v_avg) remains stable as the window grows or shrinks but has the most severe lag: in the trade-off between smoothness and sensitivity it chooses smoothness, and it cannot capture the trend of the performance scores well. The weighted moving average (v_mean) depends heavily on the subjective choice of weight factors and window length, also lags, and is not very sensitive to changes in the scores. The corrected weighted moving average (v_mean_corr) slightly improves on the lag and sensitivity of the simple moving average and reduces the large initial deviation of the ordinary weighted moving average, but still does not reach the expected sensitivity. The Hull moving average (v_hma) performs best: its most notable feature is a significant reduction in lag, together with improved sensitivity and smoothness.
The HMA thus provides the average evaluation result within a time window, its periodic changes, and the trend of the performance state at a given moment. For prediction, an autoregressive model is chosen, because the performance status at the next moment depends on the evaluation scores at the current moment and at several preceding moments.
§.§ Performance state prediction based on ARIMA time series
The ARIMA (AutoRegressive Integrated Moving Average) model<cit.> predicts and analyzes non-stationary time series, modeling non-seasonal series from historical values and historical forecast errors.
The modeling process of ARIMA can be summarized as the determination of its three hyperparameters p, d, and q. p is the order of the AR (autoregressive) part, which predicts the current value from the previous p historical values; the AR model is defined as X_t = c + ∑_i=1^p φ_i X_t-i + ε_t, i.e., the predicted value is a linear combination of one or more historical values plus a constant term c and a random error ε_t. d is the minimum order of differencing required to make the time series stationary; non-stationary series can be transformed into stationary ones by differencing, but too large a differencing order removes the autocorrelation that the AR part relies on. q is the order of the MA (moving average) part, which models the errors between the previous q forecasts and the observed values; the MA model is defined as X_t = μ + ε_t - θ_1 ε_t-1 - θ_2 ε_t-2 - ... - θ_q ε_t-q, where μ is the mean of the sequence, θ_1, ..., θ_q are parameters, and ε_t, ..., ε_t-q are white noise.
After differencing the score sequence obtained from 50-300 consecutive evaluations of a device's performance, a stationary time series is obtained, which fixes the differencing order d. The AR order p and the MA order q can then be determined and an ARIMA model constructed. Candidate (p, q) choices are compared with information criteria such as AIC, AICc, and BIC, with smaller values indicating a better order selection; a sketch of this order-selection procedure is given below.
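A minimal order-selection sketch in Python using statsmodels is given below; it is our illustration under the assumption of a standard workflow, not the paper's own implementation. The differencing order d is chosen as the smallest order that passes an ADF stationarity test, and (p, q) are then selected by a grid search on the AIC.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller

def fit_best_arima(scores, max_p=3, max_q=3, max_d=2):
    """Pick d via the ADF test, then choose (p, q) by the smallest AIC."""
    y = np.asarray(scores, dtype=float)

    # smallest d for which the differenced series looks stationary (ADF p-value < 0.05)
    d, diffed = 0, y.copy()
    while d < max_d and adfuller(diffed)[1] >= 0.05:
        diffed = np.diff(diffed)
        d += 1

    best = None
    for p in range(max_p + 1):
        for q in range(max_q + 1):
            if p == q == 0:
                continue
            try:
                res = ARIMA(y, order=(p, d, q)).fit()
            except Exception:
                continue
            if best is None or res.aic < best[0]:
                best = (res.aic, (p, d, q), res)
    return best  # (aic, (p, d, q), fitted model results)
```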
§ EXPERIMENTAL ANALYSIS
The experimental validation of the real-time performance perception model covers two aspects: the extraction and construction of the performance characteristic indicators, and the verification of the weighted multi-level evaluation model.
§.§ Data Description
The study uses Android mobile phones as the original dataset, since they cover a rich set of device models whose performance differentiates significantly during operation. Data were collected in four periods (October and December 2023, April and July 2024) to account for seasonal effects, objective environmental factors were controlled, and collection was carried out nationwide. Using the video playback application Douyin as the medium, performance characteristics of more than 3000 phone models were collected without infringing on user privacy.
Performance characteristics are collected by different collectors inside the framework, each responsible for a second-level category of features: startup, CPU, memory, power consumption, and profile collectors. Each collector can be configured to collect multiple primary feature items. During the experiments, effective device performance profile features were fitted through descriptive time-series analysis and principal component analysis, primary feature items were iteratively added or removed, and the final set of collected data items is listed in Table <ref>.
The network quality data and device profile data such as the heating level in the table are obtained by descriptive time-series analysis and aggregated by device ID and device model. The network quality value ranges from 0 to 8, with higher values indicating better network quality; the heating level also ranges from 0 to 8, with higher values indicating a higher real-time temperature of the device as a whole. In addition, principal component analysis was performed on the numerous secondary feature items within each primary category, and an appropriate number of feature items was selected for collection and experimentation. After preprocessing (missing value and outlier handling, feature availability evaluation), a total of 12 million data points were used for experimental validation.
§.§ Feature Construction Validation
Fitting objective and effective performance features, or device-level portrait features, from the collected raw data is an important part of this study, since the hierarchical relationships and correlations between features largely determine the evaluation of the performance status. During principal component analysis, the determination of correlation coefficients, and the construction of time-series features, all feature items are fed into the secondary evaluation model by module and by level. The key operations are the following.
§.§.§ Feature Preprocessing
During feature value preprocessing, the valid range of each feature was determined from its actual meaning, and data outside that range were treated as outliers and discarded. Null and missing values in the device-level time series of each feature were filled by linear interpolation.
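The following pandas sketch shows one way to apply these two preprocessing steps to a single device-level feature series; the function name and the example valid range are placeholders, not values from the study.

```python
import pandas as pd

def preprocess_feature(series, lower, upper):
    """Drop out-of-range values as outliers and linearly interpolate the
    resulting gaps (and any original nulls) along the device-level series."""
    s = pd.to_numeric(series, errors="coerce")        # non-numeric -> NaN
    s = s.where((s >= lower) & (s <= upper))          # out-of-range -> NaN
    return s.interpolate(method="linear", limit_direction="both")

# e.g. clean a CPU-usage series whose valid range is assumed to be [0, 100]
# cleaned = preprocess_feature(raw_cpu_usage, 0, 100)
```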
§.§.§ Portrait Feature Construction
We take the construction of the network quality portrait as an example of the analysis and experimental verification.
From the original dataset of 12 million records, network-related feature items were collected at device granularity and selected according to their correlation coefficients with the overall network quality: the network speed of the device on 4G/5G networks, the network speed on Wi-Fi networks, the first-frame rendering duration of video playback, and the stuttering rate during use (i.e., during video playback). The distributions of the collected feature items are shown in Figure <ref> and Figure <ref>, and long- and short-term time-domain sequences were constructed to observe the trend of the network speed values over time.
The network quality label is defined in Table <ref> based on the changes of these distributions combined with the time series. The logic for fitting the portrait labels is shown in Table <ref> and Table <ref>.
The 12 million network quality records were divided into experimental and test datasets with a 4:1 split over time-domain intervals. After constructing the time series of the experimental dataset and fitting the labels according to the defined rules, the precision, recall, and other metrics of the predicted segmentation on the test dataset were evaluated. The results are shown in Table <ref> and Table <ref>.
By fitting the label in this way and combining it with the correlation coefficients above, the network quality portrait label is obtained and used as one of the features fed into the downstream real-time performance evaluation module. Other descriptive time-series device portraits, such as the battery heating level and the device model score, are obtained in the same way. These portraits describe the performance state of a device objectively and compensate for similar features that are otherwise hard to acquire, require subjective judgment, or need complex processing.
§.§ Real-time Performance Evaluation
The algorithm combining TOPSIS with the entropy weight method described in Part 2 was used to evaluate the performance status of the devices in the original dataset. To distinguish performance differences between models and between devices of the same model, the overall performance scoring range was set to [0,100] in this experiment. Using the model performance ratings provided by Douyin, normalized to (0,12], Figure <ref> shows the distribution density of the model performance scores and Figure <ref> the distribution density of the real-time performance status scores.
The two distributions show that the model-level performance scores and the device-level dynamic performance scores behave similarly, both roughly normal: few models or devices have extremely poor or excellent scores, while most lie in the middle of the performance range. The difference between the model-level rating and the device-level dynamic score can be seen in Figure <ref>, where rectangular bars of the same color represent devices within the same model-score interval and the horizontal axis is the device-level dynamic performance score.
For example, the distribution in Figure <ref> shows that devices whose original model rating lies in the (8,9] interval exhibit significant individual differences in the dynamic performance evaluation, with dynamic scores ranging from a minimum of 23 points to a maximum of 76 points. Normalizing the model-rating distribution of the entire dataset to the [0,100] range, the (8,9] model rating maps to scores between 54.17 and 76.05 points. This indicates that the dynamic evaluation provides a more personalized and refined assessment. The fact that the highest dynamic score is lower than the mapped model rating also confirms that the dynamic performance of mobile devices during use is lower than the static performance implied by the original model configuration.
Validating the usability and effectiveness of the dynamic performance score through practical application is a major challenge of this study. Using the thresholds for low-, medium-, and high-end Android devices defined by the official video playback software, namely low-end (0,7.21], medium-end (7.21,8.65], and high-end (8.65,12.0), the proportions of the three device classes in the raw dataset are 13.45%, 39.66%, and 46.89%, respectively. Mapping these proportions onto the dynamic scores normalized to [0,100] yields a low-end threshold of (0,28.67], a medium-end threshold of (28.67,56.82], and a high-end threshold of (56.82,100).
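One plausible reading of this mapping is that the dynamic-score cut points are the quantiles that preserve the static low/medium/high proportions. The short sketch below illustrates that reading; the proportions are taken from the text, while the function itself is ours.

```python
import numpy as np

def dynamic_thresholds(dynamic_scores, shares=(0.1345, 0.3966, 0.4689)):
    """Cut points on the [0, 100] dynamic scores that keep the same share of
    low-, medium-, and high-end devices as the static segmentation."""
    low, mid, _ = shares
    low_cut, mid_cut = np.quantile(dynamic_scores, [low, low + mid])
    return low_cut, mid_cut   # reported in the text as 28.67 and 56.82
```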
Figure <ref> illustrates how the availability and effectiveness of the dynamic performance segmentation are verified. We designed a fine-tuned playback strategy for devices whose static model-based segmentation differs from their dynamic performance segmentation, to show that device performance degrades to different degrees during video playback. These devices are normalized and mapped onto the score interval shown in Figure <ref>; they correspond to the intersection of the area chart.
We designed an A/B experiment on real-time data from the video playback application Douyin. Control group A applies a reduced power consumption strategy only to low-end devices partitioned by static model type (i.e., devices whose model scores fall in the interval (0,7]), while experimental group B applies the same strategy only to low-end devices partitioned by dynamic performance score (i.e., devices whose dynamic scores fall in the interval (0,28.67]). The number of dynamically assigned devices in the two groups remained the same throughout the experiment. After a one-month verification period, the degree of performance optimization achieved under the same strategy in the two groups verifies the availability and effectiveness of the dynamic performance scores, as shown in Table <ref>.
§ CONCLUSION
In the application research of real-time perception of device performance status, we have deeply explored objective evaluation methods for dynamic performance, aiming to solve the key challenges of real-time evaluation of device performance in complex business scenarios. Through comprehensive analysis of relevant algorithms and technologies, we have realized a framework for objectively evaluating the comprehensive performance characteristics of devices that dynamically change during use, and continuously monitoring their performance status. Experimental results show that our method has achieved significant results in both timeliness and accuracy.
Specifically, this article has achieved the following important results in applied research:
*
By combining descriptive time-series analysis with feature correlation analysis, we fitted performance labels for device profiling. These labels objectively describe how the various performance categories of a device present themselves over a period of time, compensating for features that are otherwise hard to acquire, numerically complex to process, or described only subjectively.
*
A multi-level real-time performance evaluation model based on objective weighting has been implemented. It dynamically collects device performance characteristics in real time and evaluates performance accurately through TOPSIS, combined with entropy weighting and principal-component-based dimensionality reduction.
*
We realized short-term comprehensive evaluation and prediction of device performance. Based on time-series smoothing and the construction of a prediction model, the average performance state within a time window is evaluated and the performance state of the next period is predicted, enabling early perception and warning of performance issues and effectively capturing subtle performance changes to keep devices running stably.
In addition, there are still some limitations in this study. In future work, further iterations and improvements can be made. For example, expanding the scope of performance perception to cover more performance aspects and application scenarios to meet more refined business needs; strengthening the integration with deep learning technology to improve the intelligence level of performance perception and prediction; conducting larger-scale practical application verification to better evaluate the effectiveness of this solution and its adaptability in various business scenarios. In summary, this study provides valuable theoretical and practical references for the field of performance dynamic perception, laying a solid foundation for further refinement of device performance and operational stability experience adjustments.
|
http://arxiv.org/abs/2409.03524v1 | 20240905133411 | Euclid preparation. Simulations and nonlinearities beyond $Λ$CDM. 4. Constraints on $f(R)$ models from the photometric primary probes | [
"Euclid Collaboration",
"K. Koyama",
"S. Pamuk",
"S. Casas",
"B. Bose",
"P. Carrilho",
"I. Sáez-Casares",
"L. Atayde",
"M. Cataneo",
"B. Fiorini",
"C. Giocoli",
"A. M. C. Le Brun",
"F. Pace",
"A. Pourtsidou",
"Y. Rasera",
"Z. Sakr",
"H. -A. Winther",
"E. Altamura",
"J. Adamek",
"M. Baldi",
"M. -A. Breton",
"G. Rácz",
"F. Vernizzi",
"A. Amara",
"S. Andreon",
"N. Auricchio",
"C. Baccigalupi",
"S. Bardelli",
"F. Bernardeau",
"C. Bodendorf",
"D. Bonino",
"E. Branchini",
"M. Brescia",
"J. Brinchmann",
"A. Caillat",
"S. Camera",
"V. Capobianco",
"C. Carbone",
"J. Carretero",
"M. Castellano",
"G. Castignani",
"S. Cavuoti",
"A. Cimatti",
"C. Colodro-Conde",
"G. Congedo",
"C. J. Conselice",
"L. Conversi",
"Y. Copin",
"F. Courbin",
"H. M. Courtois",
"A. Da Silva",
"H. Degaudenzi",
"G. De Lucia",
"M. Douspis",
"F. Dubath",
"C. A. J. Duncan",
"X. Dupac",
"S. Dusini",
"M. Farina",
"S. Farrens",
"S. Ferriol",
"P. Fosalba",
"M. Frailis",
"E. Franceschi",
"S. Galeotta",
"B. Gillis",
"P. Gómez-Alvarez",
"A. Grazian",
"F. Grupp",
"L. Guzzo",
"M. Hailey",
"S. V. H. Haugan",
"W. Holmes",
"F. Hormuth",
"A. Hornstrup",
"P. Hudelot",
"S. Ilić",
"K. Jahnke",
"M. Jhabvala",
"B. Joachimi",
"E. Keihänen",
"S. Kermiche",
"A. Kiessling",
"M. Kilbinger",
"B. Kubik",
"M. Kunz",
"H. Kurki-Suonio",
"P. B. Lilje",
"V. Lindholm",
"I. Lloro",
"G. Mainetti",
"D. Maino",
"E. Maiorano",
"O. Mansutti",
"O. Marggraf",
"K. Markovic",
"M. Martinelli",
"N. Martinet",
"F. Marulli",
"R. Massey",
"E. Medinaceli",
"S. Mei",
"M. Melchior",
"Y. Mellier",
"M. Meneghetti",
"E. Merlin",
"G. Meylan",
"M. Moresco",
"L. Moscardini",
"E. Munari",
"C. Neissner",
"S. -M. Niemi",
"C. Padilla",
"S. Paltani",
"F. Pasian",
"K. Pedersen",
"W. J. Percival",
"V. Pettorino",
"S. Pires",
"G. Polenta",
"M. Poncet",
"L. A. Popa",
"L. Pozzetti",
"F. Raison",
"A. Renzi",
"J. Rhodes",
"G. Riccio",
"E. Romelli",
"M. Roncarelli",
"R. Saglia",
"J. -C. Salvignol",
"A. G. Sánchez",
"D. Sapone",
"B. Sartoris",
"M. Schirmer",
"T. Schrabback",
"A. Secroun",
"G. Seidel",
"S. Serrano",
"C. Sirignano",
"G. Sirri",
"L. Stanco",
"J. Steinwagner",
"P. Tallada-Crespí",
"A. N. Taylor",
"I. Tereno",
"R. Toledo-Moreo",
"F. Torradeflot",
"I. Tutusaus",
"L. Valenziano",
"T. Vassallo",
"G. Verdoes Kleijn",
"A. Veropalumbo",
"Y. Wang",
"J. Weller",
"G. Zamorani",
"E. Zucca",
"A. Biviano",
"E. Bozzo",
"C. Burigana",
"M. Calabrese",
"D. Di Ferdinando",
"J. A. Escartin Vigo",
"G. Fabbian",
"R. Farinelli",
"F. Finelli",
"J. Gracia-Carpio",
"S. Matthew",
"N. Mauri",
"A. Pezzotta",
"M. Pöntinen",
"V. Scottez",
"M. Tenti",
"M. Viel",
"M. Wiesmann",
"Y. Akrami",
"S. Anselmi",
"M. Archidiacono",
"F. Atrio-Barandela",
"M. Ballardini",
"D. Bertacca",
"A. Blanchard",
"L. Blot",
"H. Böhringer",
"S. Bruton",
"R. Cabanac",
"A. Calabro",
"B. Camacho Quevedo",
"G. Cañas-Herrera",
"A. Cappi",
"F. Caro",
"C. S. Carvalho",
"T. Castro",
"K. C. Chambers",
"S. Contarini",
"A. R. Cooray",
"G. Desprez",
"A. Díaz-Sánchez",
"J. J. Diaz",
"S. Di Domizio",
"H. Dole",
"S. Escoffier",
"M. Ezziati",
"A. G. Ferrari",
"P. G. Ferreira",
"I. Ferrero",
"A. Finoguenov",
"A. Fontana",
"F. Fornari",
"L. Gabarra",
"K. Ganga",
"J. García-Bellido",
"T. Gasparetto",
"V. Gautard",
"E. Gaztanaga",
"F. Giacomini",
"F. Gianotti",
"G. Gozaliasl",
"C. M. Gutierrez",
"A. Hall",
"H. Hildebrandt",
"J. Hjorth",
"A. Jimenez Muñoz",
"S. Joudaki",
"J. J. E. Kajava",
"V. Kansal",
"D. Karagiannis",
"C. C. Kirkpatrick",
"J. Le Graet",
"L. Legrand",
"J. Lesgourgues",
"T. I. Liaudat",
"S. J. Liu",
"A. Loureiro",
"G. Maggio",
"M. Magliocchetti",
"F. Mannucci",
"R. Maoli",
"J. Martín-Fleitas",
"C. J. A. P. Martins",
"L. Maurin",
"R. B. Metcalf",
"M. Miluzio",
"P. Monaco",
"A. Montoro",
"A. Mora",
"C. Moretti",
"G. Morgante",
"C. Murray",
"S. Nadathur",
"Nicholas A. Walton",
"L. Pagano",
"L. Patrizii",
"V. Popa",
"D. Potter",
"P. Reimberg",
"I. Risso",
"P. -F. Rocci",
"M. Sahlén",
"E. Sarpa",
"A. Schneider",
"M. Sereno",
"A. Silvestri",
"A. Spurio Mancini",
"J. Stadel",
"K. Tanidis",
"C. Tao",
"N. Tessore",
"G. Testera",
"R. Teyssier",
"S. Toft",
"S. Tosi",
"A. Troja",
"M. Tucci",
"J. Valiviita",
"D. Vergani",
"G. Verza",
"P. Vielzeuf"
] | astro-ph.CO | [
"astro-ph.CO"
] |
Simulations and nonlinearities beyond ΛCDM. 4. Constraints on f(R) models from the photometric primary probes
Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX, UK
Institute for Theoretical Particle Physics and Cosmology (TTK), RWTH Aachen University, 52056 Aachen, Germany
Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK
Laboratoire Univers et Théorie, Observatoire de Paris, Université PSL, Université Paris Cité, CNRS, 92190 Meudon, France
Instituto de Astrofísica e Ciências do Espaço, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, 1749-016 Lisboa, Portugal
Departamento de Física, Faculdade de Ciências, Universidade de Lisboa, Edifício C8, Campo Grande, PT1749-016 Lisboa, Portugal
Ruhr University Bochum, Faculty of Physics and Astronomy, Astronomical Institute (AIRUB), German Centre for Cosmological Lensing (GCCL), 44780 Bochum, Germany
Universität Bonn, Argelander-Institut für Astronomie, Auf dem Hügel 71, 53121 Bonn, Germany
INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, Via Piero Gobetti 93/3, 40129 Bologna, Italy
Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Via Irnerio 46, 40126 Bologna, Italy
Dipartimento di Fisica, Università degli Studi di Torino, Via P. Giuria 1, 10125 Torino, Italy
INFN-Sezione di Torino, Via P. Giuria 1, 10125 Torino, Italy
INAF-Osservatorio Astrofisico di Torino, Via Osservatorio 20, 10025 Pino Torinese (TO), Italy
Higgs Centre for Theoretical Physics, School of Physics and Astronomy, The University of Edinburgh, Edinburgh EH9 3FD, UK
Institut universitaire de France (IUF), 1 rue Descartes, 75231 PARIS CEDEX 05, France
Institut für Theoretische Physik, University of Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany
Institut de Recherche en Astrophysique et Planétologie (IRAP), Université de Toulouse, CNRS, UPS, CNES, 14 Av. Edouard Belin, 31400 Toulouse, France
Université St Joseph; Faculty of Sciences, Beirut, Lebanon
Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, 0315 Oslo, Norway
Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, University of Manchester, Oxford Road, Manchester M13 9PL, UK
Department of Astrophysics, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
Dipartimento di Fisica e Astronomia, Università di Bologna, Via Gobetti 93/2, 40129 Bologna, Italy
INFN-Sezione di Bologna, Viale Berti Pichat 6/2, 40127 Bologna, Italy
Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans, s/n, 08193 Barcelona, Spain
Institut de Ciencies de l'Espai (IEEC-CSIC), Campus UAB, Carrer de Can Magrans, s/n Cerdanyola del Vallés, 08193 Barcelona, Spain
Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA, 91109, USA
Institut de Physique Théorique, CEA, CNRS, Université Paris-Saclay 91191 Gif-sur-Yvette Cedex, France
School of Mathematics and Physics, University of Surrey, Guildford, Surrey, GU2 7XH, UK
INAF-Osservatorio Astronomico di Brera, Via Brera 28, 20122 Milano, Italy
IFPU, Institute for Fundamental Physics of the Universe, via Beirut 2, 34151 Trieste, Italy
INAF-Osservatorio Astronomico di Trieste, Via G. B. Tiepolo 11, 34143 Trieste, Italy
INFN, Sezione di Trieste, Via Valerio 2, 34127 Trieste TS, Italy
SISSA, International School for Advanced Studies, Via Bonomea 265, 34136 Trieste TS, Italy
Institut d'Astrophysique de Paris, UMR 7095, CNRS, and Sorbonne Université, 98 bis boulevard Arago, 75014 Paris, France
Max Planck Institute for Extraterrestrial Physics, Giessenbachstr. 1, 85748 Garching, Germany
Dipartimento di Fisica, Università di Genova, Via Dodecaneso 33, 16146, Genova, Italy
INFN-Sezione di Genova, Via Dodecaneso 33, 16146, Genova, Italy
Department of Physics "E. Pancini", University Federico II, Via Cinthia 6, 80126, Napoli, Italy
INAF-Osservatorio Astronomico di Capodimonte, Via Moiariello 16, 80131 Napoli, Italy
INFN section of Naples, Via Cinthia 6, 80126, Napoli, Italy
Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, PT4150-762 Porto, Portugal
Faculdade de Ciências da Universidade do Porto, Rua do Campo de Alegre, 4150-007 Porto, Portugal
Aix-Marseille Université, CNRS, CNES, LAM, Marseille, France
INAF-IASF Milano, Via Alfonso Corti 12, 20133 Milano, Italy
Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Avenida Complutense 40, 28040 Madrid, Spain
Port d'Informació Científica, Campus UAB, C. Albareda s/n, 08193 Bellaterra (Barcelona), Spain
INAF-Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monteporzio Catone, Italy
Dipartimento di Fisica e Astronomia "Augusto Righi" - Alma Mater Studiorum Università di Bologna, Viale Berti Pichat 6/2, 40127 Bologna, Italy
Instituto de Astrofísica de Canarias, Calle Vía Láctea s/n, 38204, San Cristóbal de La Laguna, Tenerife, Spain
European Space Agency/ESRIN, Largo Galileo Galilei 1, 00044 Frascati, Roma, Italy
ESAC/ESA, Camino Bajo del Castillo, s/n., Urb. Villafranca del Castillo, 28692 Villanueva de la Cañada, Madrid, Spain
Université Claude Bernard Lyon 1, CNRS/IN2P3, IP2I Lyon, UMR 5822, Villeurbanne, F-69100, France
Institute of Physics, Laboratory of Astrophysics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Observatoire de Sauverny, 1290 Versoix, Switzerland
Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona (IEEC-UB), Martí i Franquès 1, 08028 Barcelona, Spain
Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig de Lluís Companys 23, 08010 Barcelona, Spain
UCB Lyon 1, CNRS/IN2P3, IUF, IP2I Lyon, 4 rue Enrico Fermi, 69622 Villeurbanne, France
Department of Astronomy, University of Geneva, ch. d'Ecogia 16, 1290 Versoix, Switzerland
Université Paris-Saclay, CNRS, Institut d'astrophysique spatiale, 91405, Orsay, France
INFN-Padova, Via Marzolo 8, 35131 Padova, Italy
INAF-Istituto di Astrofisica e Planetologia Spaziali, via del Fosso del Cavaliere, 100, 00100 Roma, Italy
Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France
Institut d'Estudis Espacials de Catalunya (IEEC), Edifici RDIT, Campus UPC, 08860 Castelldefels, Barcelona, Spain
FRACTAL S.L.N.E., calle Tulipán 2, Portal 13 1A, 28231, Las Rozas de Madrid, Spain
INAF-Osservatorio Astronomico di Padova, Via dell'Osservatorio 5, 35122 Padova, Italy
Universitäts-Sternwarte München, Fakultät für Physik, Ludwig-Maximilians-Universität München, Scheinerstrasse 1, 81679 München, Germany
Dipartimento di Fisica "Aldo Pontremoli", Università degli Studi di Milano, Via Celoria 16, 20133 Milano, Italy
Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK
Felix Hormuth Engineering, Goethestr. 17, 69181 Leimen, Germany
Technical University of Denmark, Elektrovej 327, 2800 Kgs. Lyngby, Denmark
Cosmic Dawn Center (DAWN), Denmark
Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France
Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
Department of Physics and Helsinki Institute of Physics, Gustaf Hällströmin katu 2, 00014 University of Helsinki, Finland
Aix-Marseille Université, CNRS/IN2P3, CPPM, Marseille, France
Université de Genève, Département de Physique Théorique and Centre for Astroparticle Physics, 24 quai Ernest-Ansermet, CH-1211 Genève 4, Switzerland
Department of Physics, P.O. Box 64, 00014 University of Helsinki, Finland
Helsinki Institute of Physics, Gustaf Hällströmin katu 2, University of Helsinki, Helsinki, Finland
NOVA optical infrared instrumentation group at ASTRON, Oude Hoogeveensedijk 4, 7991PD, Dwingeloo, The Netherlands
Centre de Calcul de l'IN2P3/CNRS, 21 avenue Pierre de Coubertin 69627 Villeurbanne Cedex, France
INFN-Sezione di Milano, Via Celoria 16, 20133 Milano, Italy
INFN-Sezione di Roma, Piazzale Aldo Moro, 2 - c/o Dipartimento di Fisica, Edificio G. Marconi, 00185 Roma, Italy
Dipartimento di Fisica e Astronomia "Augusto Righi" - Alma Mater Studiorum Università di Bologna, via Piero Gobetti 93/2, 40129 Bologna, Italy
Department of Physics, Institute for Computational Cosmology, Durham University, South Road, DH1 3LE, UK
Université Paris Cité, CNRS, Astroparticule et Cosmologie, 75013 Paris, France
University of Applied Sciences and Arts of Northwestern Switzerland, School of Engineering, 5210 Windisch, Switzerland
Institut d'Astrophysique de Paris, 98bis Boulevard Arago, 75014, Paris, France
Institut de Física d'Altes Energies (IFAE), The Barcelona Institute of Science and Technology, Campus UAB, 08193 Bellaterra (Barcelona), Spain
European Space Agency/ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands
DARK, Niels Bohr Institute, University of Copenhagen, Jagtvej 155, 2200 Copenhagen, Denmark
Waterloo Centre for Astrophysics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada
Space Science Data Center, Italian Space Agency, via del Politecnico snc, 00133 Roma, Italy
Centre National d'Etudes Spatiales – Centre spatial de Toulouse, 18 avenue Edouard Belin, 31401 Toulouse Cedex 9, France
Institute of Space Science, Str. Atomistilor, nr. 409 Măgurele, Ilfov, 077125, Romania
Dipartimento di Fisica e Astronomia "G. Galilei", Università di Padova, Via Marzolo 8, 35131 Padova, Italy
Departamento de Física, FCFM, Universidad de Chile, Blanco Encalada 2008, Santiago, Chile
Universität Innsbruck, Institut für Astro- und Teilchenphysik, Technikerstr. 25/8, 6020 Innsbruck, Austria
Satlantis, University Science Park, Sede Bld 48940, Leioa-Bilbao, Spain
Instituto de Astrofísica e Ciências do Espaço, Faculdade de Ciências, Universidade de Lisboa, Tapada da Ajuda, 1349-018 Lisboa, Portugal
Universidad Politécnica de Cartagena, Departamento de Electrónica y Tecnología de Computadoras, Plaza del Hospital 1, 30202 Cartagena, Spain
INFN-Bologna, Via Irnerio 46, 40126 Bologna, Italy
Kapteyn Astronomical Institute, University of Groningen, PO Box 800, 9700 AV Groningen, The Netherlands
Dipartimento di Fisica, Università degli studi di Genova, and INFN-Sezione di Genova, via Dodecaneso 33, 16146, Genova, Italy
Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125, USA
INAF, Istituto di Radioastronomia, Via Piero Gobetti 101, 40129 Bologna, Italy
Astronomical Observatory of the Autonomous Region of the Aosta Valley (OAVdA), Loc. Lignan 39, I-11020, Nus (Aosta Valley), Italy
Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
School of Physics and Astronomy, Cardiff University, The Parade, Cardiff, CF24 3AA, UK
Junia, EPA department, 41 Bd Vauban, 59800 Lille, France
ICSC - Centro Nazionale di Ricerca in High Performance Computing, Big Data e Quantum Computing, Via Magnanelli 2, Bologna, Italy
Instituto de Física Teórica UAM-CSIC, Campus de Cantoblanco, 28049 Madrid, Spain
CERCA/ISO, Department of Physics, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106, USA
Departamento de Física Fundamental. Universidad de Salamanca. Plaza de la Merced s/n. 37008 Salamanca, Spain
Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, Via Giuseppe Saragat 1, 44122 Ferrara, Italy
Istituto Nazionale di Fisica Nucleare, Sezione di Ferrara, Via Giuseppe Saragat 1, 44122 Ferrara, Italy
Center for Data-Driven Discovery, Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan
Ludwig-Maximilians-University, Schellingstrasse 4, 80799 Munich, Germany
Max-Planck-Institut für Physik, Boltzmannstr. 8, 85748 Garching, Germany
Minnesota Institute for Astrophysics, University of Minnesota, 116 Church St SE, Minneapolis, MN 55455, USA
Institute Lorentz, Leiden University, Niels Bohrweg 2, 2333 CA Leiden, The Netherlands
Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Laboratoire Lagrange, Bd de l'Observatoire, CS 34229, 06304 Nice cedex 4, France
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA
Department of Physics & Astronomy, University of California Irvine, Irvine CA 92697, USA
Department of Astronomy & Physics and Institute for Computational Astrophysics, Saint Mary's University, 923 Robie Street, Halifax, Nova Scotia, B3H 3C3, Canada
Departamento Física Aplicada, Universidad Politécnica de Cartagena, Campus Muralla del Mar, 30202 Cartagena, Murcia, Spain
Instituto de Astrofísica de Canarias (IAC); Departamento de Astrofísica, Universidad de La Laguna (ULL), 38200, La Laguna, Tenerife, Spain
Department of Physics, Oxford University, Keble Road, Oxford OX1 3RH, UK
CEA Saclay, DFR/IRFU, Service d'Astrophysique, Bat. 709, 91191 Gif-sur-Yvette, France
Department of Computer Science, Aalto University, PO Box 15400, Espoo, FI-00 076, Finland
Instituto de Astrofísica de Canarias, c/ Via Lactea s/n, La Laguna E-38200, Spain. Departamento de Astrofísica de la Universidad de La Laguna, Avda. Francisco Sanchez, La Laguna, E-38200, Spain
Univ. Grenoble Alpes, CNRS, Grenoble INP, LPSC-IN2P3, 53, Avenue des Martyrs, 38000, Grenoble, France
Department of Physics and Astronomy, Vesilinnantie 5, 20014 University of Turku, Finland
Serco for European Space Agency (ESA), Camino bajo del Castillo, s/n, Urbanizacion Villafranca del Castillo, Villanueva de la Cañada, 28692 Madrid, Spain
ARC Centre of Excellence for Dark Matter Particle Physics, Melbourne, Australia
Centre for Astrophysics & Supercomputing, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia
School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS, UK
Department of Physics and Astronomy, University of the Western Cape, Bellville, Cape Town, 7535, South Africa
ICTP South American Institute for Fundamental Research, Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo, Brazil
IRFU, CEA, Université Paris-Saclay 91191 Gif-sur-Yvette Cedex, France
Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, Stockholm, SE-106 91, Sweden
Astrophysics Group, Blackett Laboratory, Imperial College London, London SW7 2AZ, UK
INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Firenze, Italy
Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro 2, 00185 Roma, Italy
Aurora Technology for European Space Agency (ESA), Camino bajo del Castillo, s/n, Urbanizacion Villafranca del Castillo, Villanueva de la Cañada, 28692 Madrid, Spain
Centro de Astrofísica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto, Portugal
HE Space for European Space Agency (ESA), Camino bajo del Castillo, s/n, Urbanizacion Villafranca del Castillo, Villanueva de la Cañada, 28692 Madrid, Spain
Dipartimento di Fisica - Sezione di Astronomia, Università di Trieste, Via Tiepolo 11, 34131 Trieste, Italy
Theoretical astrophysics, Department of Physics and Astronomy, Uppsala University, Box 515, 751 20 Uppsala, Sweden
Department of Physics, Royal Holloway, University of London, TW20 0EX, UK
Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544, USA
Cosmic Dawn Center (DAWN)
Niels Bohr Institute, University of Copenhagen, Jagtvej 128, 2200 Copenhagen, Denmark
Center for Cosmology and Particle Physics, Department of Physics, New York University, New York, NY 10003, USA
Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, 10010, New York, NY, USA
Euclid Collaboration: K. Koyama et al.
Constraints on f(R) models from the photometric primary probes
We study the constraint on f(R) gravity that can be obtained by photometric primary probes of the mission. Our focus is the dependence of the constraint on the theoretical modelling of the nonlinear matter power spectrum. In the Hu–Sawicki f(R) gravity model, we consider four different predictions for the ratio between the power spectrum in f(R) and that in : a fitting formula, the halo model reaction approach, and two emulators based on dark matter only N-body simulations, and . These predictions are added to the implementation to predict the angular power spectra for weak lensing (WL), photometric galaxy clustering and their cross-correlation. By running Markov Chain Monte Carlo, we compare constraints on parameters and investigate the bias of the recovered f(R) parameter if the data are created by a different model. For the pessimistic setting of WL, one dimensional bias for the f(R) parameter, , is found to be 0.5 σ when is used to create the synthetic data with =-5.301 and fitted by . The impact of baryonic physics on WL is studied by using a baryonification emulator . For the optimistic setting, the f(R) parameter and two main baryon parameters are well constrained despite the degeneracies among these parameters. However, the difference in the nonlinear dark matter prediction can be compensated by the adjustment of baryon parameters, and the one-dimensional marginalised constraint on is biased. This bias can be avoided in the pessimistic setting at the expense of weaker constraints. For the pessimistic setting, using the synthetic data for WL, we obtain the prior-independent upper limit of < -5.6. Finally, we implement a method to include theoretical errors to avoid the bias due to inaccuracies in the nonlinear matter power spectrum prediction.
preparation
Euclid Collaboration: K. [email protected]<ref>
S. Pamuk0009-0004-0852-8624<ref>
S. Casas0000-0002-4751-5138<ref>
B. Bose0000-0003-1965-8614<ref>
P. Carrilho0000-0003-1339-0194<ref>
I. Sáez-Casares0000-0003-0013-5266<ref>
L. Atayde0000-0001-6373-9193<ref>,<ref>
M. Cataneo0000-0002-7992-0656<ref>,<ref>
B. Fiorini0000-0002-0092-4321<ref>
C. Giocoli0000-0002-9590-7961<ref>,<ref>
A. M. C. Le Brun0000-0002-0936-4594<ref>
F. Pace0000-0001-8039-0480<ref>,<ref>,<ref>
A. Pourtsidou0000-0001-9110-5550<ref>,<ref>
Y. Rasera0000-0003-3424-6941<ref>,<ref>
Z. Sakr0000-0002-4823-3757<ref>,<ref>,<ref>
H.-A. Winther0000-0002-6325-2710<ref>
E. Altamura0000-0001-6973-1897<ref>
J. Adamek0000-0002-0723-6740<ref>
M. Baldi0000-0003-4145-1943<ref>,<ref>,<ref>
M.-A. Breton<ref>,<ref>,<ref>
G. Rácz0000-0003-3906-5699<ref>
F. Vernizzi0000-0003-3426-2802<ref>
A. Amara<ref>
S. Andreon0000-0002-2041-8784<ref>
N. Auricchio0000-0003-4444-8651<ref>
C. Baccigalupi0000-0002-8211-1630<ref>,<ref>,<ref>,<ref>
S. Bardelli0000-0002-8900-0298<ref>
F. Bernardeau<ref>,<ref>
C. Bodendorf<ref>
D. Bonino0000-0002-3336-9977<ref>
E. Branchini0000-0002-0808-6908<ref>,<ref>,<ref>
M. Brescia0000-0001-9506-5680<ref>,<ref>,<ref>
J. Brinchmann0000-0003-4359-8797<ref>,<ref>
A. Caillat<ref>
S. Camera0000-0003-3399-3574<ref>,<ref>,<ref>
V. Capobianco0000-0002-3309-7692<ref>
C. Carbone0000-0003-0125-3563<ref>
J. Carretero0000-0002-3130-0204<ref>,<ref>
M. Castellano0000-0001-9875-8263<ref>
G. Castignani0000-0001-6831-0687<ref>
S. Cavuoti0000-0002-3787-4196<ref>,<ref>
A. Cimatti<ref>
C. Colodro-Conde<ref>
G. Congedo0000-0003-2508-0046<ref>
C. J. Conselice0000-0003-1949-7638<ref>
L. Conversi0000-0002-6710-8476<ref>,<ref>
Y. Copin0000-0002-5317-7518<ref>
F. Courbin0000-0003-0758-6510<ref>,<ref>,<ref>
H. M. Courtois0000-0003-0509-1776<ref>
A. Da Silva0000-0002-6385-1609<ref>,<ref>
H. Degaudenzi0000-0002-5887-6799<ref>
G. De Lucia0000-0002-6220-9104<ref>
M. Douspis0000-0003-4203-3954<ref>
F. Dubath0000-0002-6533-2810<ref>
C. A. J. Duncan<ref>
X. Dupac<ref>
S. Dusini0000-0002-1128-0664<ref>
M. Farina0000-0002-3089-7846<ref>
S. Farrens0000-0002-9594-9387<ref>
S. Ferriol<ref>
P. Fosalba0000-0002-1510-5214<ref>,<ref>
M. Frailis0000-0002-7400-2135<ref>
E. Franceschi0000-0002-0585-6591<ref>
S. Galeotta0000-0002-3748-5115<ref>
B. Gillis0000-0002-4478-1270<ref>
P. Gómez-Alvarez0000-0002-8594-5358<ref>,<ref>
A. Grazian0000-0002-5688-0663<ref>
F. Grupp<ref>,<ref>
L. Guzzo0000-0001-8264-5192<ref>,<ref>
M. Hailey<ref>
S. V. H. Haugan0000-0001-9648-7260<ref>
W. Holmes<ref>
F. Hormuth<ref>
A. Hornstrup0000-0002-3363-0936<ref>,<ref>
P. Hudelot<ref>
S. Ilić0000-0003-4285-9086<ref>,<ref>
K. Jahnke0000-0003-3804-2137<ref>
M. Jhabvala<ref>
B. Joachimi0000-0001-7494-1303<ref>
E. Keihänen0000-0003-1804-7715<ref>
S. Kermiche0000-0002-0302-5735<ref>
A. Kiessling0000-0002-2590-1273<ref>
M. Kilbinger0000-0001-9513-7138<ref>
B. Kubik0009-0006-5823-4880<ref>
M. Kunz0000-0002-3052-7394<ref>
H. Kurki-Suonio0000-0002-4618-3063<ref>,<ref>
P. B. Lilje0000-0003-4324-7794<ref>
V. Lindholm0000-0003-2317-5471<ref>,<ref>
I. Lloro<ref>
G. Mainetti0000-0003-2384-2377<ref>
D. Maino<ref>,<ref>,<ref>
E. Maiorano0000-0003-2593-4355<ref>
O. Mansutti0000-0001-5758-4658<ref>
O. Marggraf0000-0001-7242-3852<ref>
K. Markovic0000-0001-6764-073X<ref>
M. Martinelli0000-0002-6943-7732<ref>,<ref>
N. Martinet0000-0003-2786-7790<ref>
F. Marulli0000-0002-8850-0303<ref>,<ref>,<ref>
R. Massey0000-0002-6085-3780<ref>
E. Medinaceli0000-0002-4040-7783<ref>
S. Mei0000-0002-2849-559X<ref>
M. Melchior<ref>
Y. Mellier<ref>,<ref>
M. Meneghetti0000-0003-1225-7084<ref>,<ref>
E. Merlin0000-0001-6870-8900<ref>
G. Meylan<ref>
M. Moresco0000-0002-7616-7136<ref>,<ref>
L. Moscardini0000-0002-3473-6716<ref>,<ref>,<ref>
E. Munari0000-0002-1751-5946<ref>,<ref>
C. Neissner0000-0001-8524-4968<ref>,<ref>
S.-M. Niemi<ref>
C. Padilla0000-0001-7951-0166<ref>
S. Paltani0000-0002-8108-9179<ref>
F. Pasian0000-0002-4869-3227<ref>
K. Pedersen<ref>
W. J. Percival0000-0002-0644-5727<ref>,<ref>,<ref>
V. Pettorino<ref>
S. Pires0000-0002-0249-2104<ref>
G. Polenta0000-0003-4067-9196<ref>
M. Poncet<ref>
L. A. Popa<ref>
L. Pozzetti0000-0001-7085-0412<ref>
F. Raison0000-0002-7819-6918<ref>
A. Renzi0000-0001-9856-1970<ref>,<ref>
J. Rhodes0000-0002-4485-8549<ref>
G. Riccio<ref>
E. Romelli0000-0003-3069-9222<ref>
M. Roncarelli0000-0001-9587-7822<ref>
R. Saglia0000-0003-0378-7032<ref>,<ref>
J.-C. Salvignol<ref>
A. G. Sánchez0000-0003-1198-831X<ref>
D. Sapone0000-0001-7089-4503<ref>
B. Sartoris0000-0003-1337-5269<ref>,<ref>
M. Schirmer0000-0003-2568-9994<ref>
T. Schrabback0000-0002-6987-7834<ref>
A. Secroun0000-0003-0505-3710<ref>
G. Seidel0000-0003-2907-353X<ref>
S. Serrano0000-0002-0211-2861<ref>,<ref>,<ref>
C. Sirignano0000-0002-0995-7146<ref>,<ref>
G. Sirri0000-0003-2626-2853<ref>
L. Stanco0000-0002-9706-5104<ref>
J. Steinwagner0000-0001-7443-1047<ref>
P. Tallada-Crespí0000-0002-1336-8328<ref>,<ref>
A. N. Taylor<ref>
I. Tereno<ref>,<ref>
R. Toledo-Moreo0000-0002-2997-4859<ref>
F. Torradeflot0000-0003-1160-1517<ref>,<ref>
I. Tutusaus0000-0002-3199-0399<ref>
L. Valenziano0000-0002-1170-0104<ref>,<ref>
T. Vassallo0000-0001-6512-6358<ref>,<ref>
G. Verdoes Kleijn0000-0001-5803-2580<ref>
A. Veropalumbo0000-0003-2387-1194<ref>,<ref>,<ref>
Y. Wang0000-0002-4749-2984<ref>
J. Weller0000-0002-8282-2010<ref>,<ref>
G. Zamorani0000-0002-2318-301X<ref>
E. Zucca0000-0002-5845-8132<ref>
A. Biviano0000-0002-0857-0732<ref>,<ref>
E. Bozzo0000-0002-8201-1525<ref>
C. Burigana0000-0002-3005-5796<ref>,<ref>
M. Calabrese0000-0002-2637-2422<ref>,<ref>
D. Di Ferdinando<ref>
J. A. Escartin Vigo<ref>
G. Fabbian0000-0002-3255-4695<ref>,<ref>
R. Farinelli<ref>
F. Finelli0000-0002-6694-3269<ref>,<ref>
J. Gracia-Carpio<ref>
S. Matthew0000-0001-8448-1697<ref>
N. Mauri0000-0001-8196-1548<ref>,<ref>
A. Pezzotta0000-0003-0726-2268<ref>
M. Pöntinen0000-0001-5442-2530<ref>
V. Scottez<ref>,<ref>
M. Tenti0000-0002-4254-5901<ref>
M. Viel0000-0002-2642-5707<ref>,<ref>,<ref>,<ref>,<ref>
M. Wiesmann0009-0000-8199-5860<ref>
Y. Akrami0000-0002-2407-7956<ref>,<ref>
S. Anselmi0000-0002-3579-9583<ref>,<ref>,<ref>
M. Archidiacono0000-0003-4952-9012<ref>,<ref>
F. Atrio-Barandela0000-0002-2130-2513<ref>
M. Ballardini0000-0003-4481-3559<ref>,<ref>,<ref>
D. Bertacca0000-0002-2490-7139<ref>,<ref>,<ref>
A. Blanchard0000-0001-8555-9003<ref>
L. Blot0000-0002-9622-7167<ref>,<ref>
H. Böhringer0000-0001-8241-4204<ref>,<ref>,<ref>
S. Bruton0000-0002-6503-5218<ref>
R. Cabanac0000-0001-6679-2600<ref>
A. Calabro0000-0003-2536-1614<ref>
B. Camacho Quevedo0000-0002-8789-4232<ref>,<ref>
G. Cañas-Herrera0000-0003-2796-2149<ref>,<ref>
A. Cappi<ref>,<ref>
F. Caro<ref>
C. S. Carvalho<ref>
T. Castro0000-0002-6292-3228<ref>,<ref>,<ref>,<ref>
K. C. Chambers0000-0001-6965-7789<ref>
S. Contarini0000-0002-9843-723X<ref>
A. R. Cooray0000-0002-3892-0190<ref>
G. Desprez0000-0001-8325-1742<ref>
A. Díaz-Sánchez0000-0003-0748-4768<ref>
J. J. Diaz<ref>
S. Di Domizio0000-0003-2863-5895<ref>,<ref>
H. Dole0000-0002-9767-3839<ref>
S. Escoffier0000-0002-2847-7498<ref>
M. Ezziati0009-0003-6065-1585<ref>
A. G. Ferrari0009-0005-5266-4110<ref>,<ref>
P. G. Ferreira0000-0002-3021-2851<ref>
I. Ferrero0000-0002-1295-1132<ref>
A. Finoguenov0000-0002-4606-5403<ref>
A. Fontana0000-0003-3820-2823<ref>
F. Fornari0000-0003-2979-6738<ref>
L. Gabarra0000-0002-8486-8856<ref>
K. Ganga0000-0001-8159-8208<ref>
J. García-Bellido0000-0002-9370-8360<ref>
T. Gasparetto0000-0002-7913-4866<ref>
V. Gautard<ref>
E. Gaztanaga0000-0001-9632-0815<ref>,<ref>,<ref>
F. Giacomini0000-0002-3129-2814<ref>
F. Gianotti0000-0003-4666-119X<ref>
G. Gozaliasl0000-0002-0236-919X<ref>,<ref>
C. M. Gutierrez0000-0001-7854-783X<ref>
A. Hall0000-0002-3139-8651<ref>
H. Hildebrandt0000-0002-9814-3338<ref>
J. Hjorth0000-0002-4571-2306<ref>
A. Jimenez Muñoz0009-0004-5252-185X<ref>
S. Joudaki0000-0001-8820-673X<ref>
J. J. E. Kajava0000-0002-3010-8333<ref>,<ref>
V. Kansal0000-0002-4008-6078<ref>,<ref>
D. Karagiannis0000-0002-4927-0816<ref>,<ref>
C. C. Kirkpatrick<ref>
J. Le Graet0000-0001-6523-7971<ref>
L. Legrand0000-0003-0610-5252<ref>
J. Lesgourgues0000-0001-7627-353X<ref>
T. I. Liaudat0000-0002-9104-314X<ref>
S. J. Liu0000-0001-7680-2139<ref>
A. Loureiro0000-0002-4371-0876<ref>,<ref>
G. Maggio0000-0003-4020-4836<ref>
M. Magliocchetti0000-0001-9158-4838<ref>
F. Mannucci0000-0002-4803-2381<ref>
R. Maoli0000-0002-6065-3025<ref>,<ref>
J. Martín-Fleitas0000-0002-8594-569X<ref>
C. J. A. P. Martins0000-0002-4886-9261<ref>,<ref>
L. Maurin0000-0002-8406-0857<ref>
R. B. Metcalf0000-0003-3167-2574<ref>,<ref>
M. Miluzio<ref>,<ref>
P. Monaco0000-0003-2083-7564<ref>,<ref>,<ref>,<ref>
A. Montoro0000-0003-4730-8590<ref>,<ref>
A. Mora0000-0002-1922-8529<ref>
C. Moretti0000-0003-3314-8936<ref>,<ref>,<ref>,<ref>,<ref>
G. Morgante<ref>
C. Murray<ref>
S. Nadathur0000-0001-9070-3102<ref>
Nicholas A. Walton0000-0003-3983-8778<ref>
L. Pagano0000-0003-1820-5998<ref>,<ref>
L. Patrizii<ref>
V. Popa0000-0002-9118-8330<ref>
D. Potter0000-0002-0757-5195<ref>
P. Reimberg0000-0003-3410-0280<ref>
I. Risso0000-0003-2525-7761<ref>
P.-F. Rocci<ref>
M. Sahlén0000-0003-0973-4804<ref>
E. Sarpa0000-0002-1256-655X<ref>,<ref>,<ref>
A. Schneider0000-0001-7055-8104<ref>
M. Sereno0000-0003-0302-0325<ref>,<ref>
A. Silvestri0000-0001-6904-5061<ref>
A. Spurio Mancini0000-0001-5698-0990<ref>,<ref>
J. Stadel0000-0001-7565-8622<ref>
K. Tanidis<ref>
C. Tao0000-0001-7961-8177<ref>
N. Tessore0000-0002-9696-7931<ref>
G. Testera<ref>
R. Teyssier0000-0001-7689-0933<ref>
S. Toft0000-0003-3631-7176<ref>,<ref>
S. Tosi0000-0002-7275-9193<ref>,<ref>
A. Troja0000-0003-0239-4595<ref>,<ref>
M. Tucci<ref>
J. Valiviita0000-0001-6225-3693<ref>,<ref>
D. Vergani0000-0003-0898-2216<ref>
G. Verza0000-0002-1886-8348<ref>,<ref>
P. Vielzeuf0000-0003-2035-9339<ref>
September 9, 2024
§ INTRODUCTION
In 1998, astronomers made the surprising discovery that the expansion of the Universe is accelerating rather than slowing down <cit.>.
This late-time acceleration of the Universe has become the most challenging problem in theoretical physics. The main aim of ongoing and future cosmological surveys is to address the key questions about the origin of the late-time acceleration. Is the acceleration driven by a cosmological constant or by dark energy that evolves with the expansion of the Universe? Alternatively, there could be no dark energy if General Relativity (GR) itself is in error on cosmological scales. There has been significant progress in developing modified theories of gravity, and these have been turned into tests of GR itself via cosmological observations <cit.>. Testing gravity is one of the main objectives of stage-IV dark energy surveys <cit.>.
Of particular importance in these surveys is the mission <cit.>. The satellite undertakes a spectroscopic survey of galaxies and an imaging survey (targeting weak lensing which can also be used to reconstruct galaxy clustering using photometric redshifts). The combination of these two probes is essential for cosmological tests of gravity <cit.>.
Modified gravity models typically include an additional scalar degree of freedom that gives rise to a fifth-force. To satisfy the stringent constraints on deviations from GR in the solar system, many modified gravity models utilise screening mechanisms to hide modifications of gravity on small scales <cit.>. This is accomplished by nonlinearity in the equation that governs the dynamics of the scalar degree of freedom. This significantly complicates the nonlinear modelling of matter clustering in these models as the nonlinear equation for the scalar mode coupled to nonlinear density fields needs to be solved. A systematic comparison of N-body simulations in f(R) gravity and normal-branch Dvali–Gabadadze–Porratti (nDGP) models was done in <cit.>. Since then, new simulations have been developed, including those using approximate methods to accelerate simulations <cit.>. A fitting formula <cit.> and emulators for the nonlinear matter power spectrum have been developed <cit.>. At the same time, a semi-analytic method based on the halo model to predict the nonlinear matter power spectrum for general dark energy and modified gravity models has been developed <cit.>. These nonlinear predictions were used to study their modifications of the Weak Lensing (WL) observables <cit.>, cross-correlation of galaxies and cosmic microwave background <cit.> for example in f(R) gravity.
In Euclid:2023tqw, Fisher Matrix forecasts were performed to predict 's ability to constrain f(R) gravity models. In the Hu–Sawicki f(R) model, it was shown that in the optimistic setting defined in , and for a fiducial value of = 5 × 10^-6, alone will be able to constrain the additional parameter at the 3% level, using spectroscopic galaxy clustering alone; at the 1.4% level, using the combination of photometric probes on their own; and at the 1% level, using the combination of spectroscopic and photometric observations. The forecast for photometric probes used the fitting formula for the nonlinear matter power spectrum obtained in Winther:2019mus. Further Fisher Matrix forecasts have been done for other modified gravity models with linearly scale-independent growth <cit.>.
For real data analysis, it is imperative to check the effect of the accuracy of the theoretical modelling. This article is part of a series that collectively explores simulations and nonlinearities beyond the Λ Cold Dark Matter () model:
* Numerical methods and validation <cit.>
* Results from non-standard simulations <cit.>
* Cosmological constraints on non-standard cosmologies from simulated probes (D'Amico et al. in prep.)
* Constraints on f(R) models from the photometric primary probes (this work)
In , the comparison of N-body simulations performed in Winther:2015wla in the Hu–Sawicki f(R) and nDGP models was extended to add more simulations. The comparison was done for the matter power spectrum and the halo mass function. The measurements of these quantities were done using the dedicated pipeline developed by . In , additional simulations have been analysed in addition to those used in . In this paper, we utilise these developments and compare several predictions for the nonlinear matter power spectrum in f(R) gravity. We will perform Markov Chain Monte Carlo (MCMC) simulations using synthetic data for photometric probes, and assess the impact of using different nonlinear models for the matter power spectrum at the level of parameter constraints. In addition, we will add baryonic effects using a baryonification method, which was not included in the Fisher Matrix forecast. Although this paper focuses on f(R) gravity, for which multiple public codes are available to predict the nonlinear matter power spectrum, the methodology and code developed in this paper are readily applicable to other modified gravity models. The validation of the nonlinear models for spectroscopic probes has been done in Euclid:2023bgs and D'Amico et al. (paper 3) will perform a similar analysis to this paper's for the spectroscopic probes.
This paper is organised as follows. In <ref>, we summarise theoretical predictions for observables based on and Euclid:2023tqw. In <ref>, we introduce the Hu–Sawicki f(R) gravity model and summarise the four nonlinear models for the matter power spectrum. We then discuss the implementation of these models in the code introduced in Euclid:2023pxu. In <ref>, we compare predictions for the nonlinear matter power spectrum with N-body simulations and study their impact on the angular power spectra for photometric probes. Forecasts for errors from the combination of photometric probes considered in will be presented based on the synthetic data created by four different nonlinear models. We also compare the result with the Fisher Matrix forecast. We then study the bias in the recovered f(R) gravity parameter when the synthetic data are created by a different nonlinear model. <Ref> is devoted to the study of baryonic effects using the baryonification method. We show how the bias is affected by baryonic effects and obtain the upper bound on |f_R0| using the synthetic data. In <ref>, we implement theoretical errors to take into account the uncertainties of theoretical predictions. We conclude in <ref>.
§ THEORETICAL PREDICTIONS FOR EUCLID OBSERVABLES
In this section, we discuss how moving away from the standard GR assumption impacts the predictions for the angular power spectra C(ℓ) that will be compared with the photometric survey data. The observables that need to be computed and compared with the data are the angular power spectra for weak lensing (WL), photometric galaxy clustering () and their cross-correlation ().
In , these were calculated using the Limber and flat-sky approximations in a flat Universe, as
C^XY_ij(ℓ) = c ∫_z_ min^z_ max dz W_i^X(z) W_j^Y(z)/[H(z) r^2(z)] P_δδ(k_ℓ,z) ,
where W_i^X(z) is the window function in each tomographic redshift bin i, with X={ L,G} corresponding to WL and , respectively, k_ℓ=(ℓ+1/2)/r(z), r(z) is the comoving distance to redshift z, P_δδ(k_ℓ,z) is the nonlinear power spectrum of matter density fluctuations, δ, at wave number k_ℓ and redshift z, and H(z) is the Hubble function; the integral runs over the redshift range from z_ min=0.001 to z_ max=4.
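For illustration, the following short Python sketch evaluates the Limber integral above for a single multipole. The flat background values, the Gaussian toy window function and the placeholder P_δδ are purely illustrative assumptions, standing in for the tomographic kernels and emulated spectra used in the actual analysis.

import numpy as np
from scipy.integrate import simpson

c = 299792.458           # speed of light [km/s]
H0, Om = 67.0, 0.32      # assumed flat background, close to the fiducial values

def Hubble(z):           # Hubble rate H(z) [km/s/Mpc]
    return H0 * np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)

def comoving_distance(z, nz=512):
    zz = np.linspace(0.0, z, nz)
    return simpson(c / Hubble(zz), x=zz)     # r(z) [Mpc]

def P_dd(k, z):          # placeholder nonlinear spectrum (an emulator output in practice)
    return 2.0e4 * k / (1.0 + (k / 0.02)**2.5) / (1.0 + z)**2

def W_toy(z, zc=0.9, sigma=0.3):             # toy tomographic kernel
    return np.exp(-0.5 * ((z - zc) / sigma)**2)

def C_limber(ell, Wx, Wy, zmin=0.001, zmax=4.0, nz=256):
    z = np.linspace(zmin, zmax, nz)
    r = np.array([comoving_distance(zi) for zi in z])
    k = (ell + 0.5) / r                      # Limber wavenumber k_ell
    integrand = Wx(z) * Wy(z) / (Hubble(z) * r**2) * P_dd(k, z)
    return c * simpson(integrand, x=z)

print(C_limber(100.0, W_toy, W_toy))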
However, when abandoning the assumption of GR, one has to account for changes in the evolution of both the homogeneous background and cosmological perturbations. For WL, it is the Weyl potential ϕ_W that determines the angular power spectrum. The power spectrum of the Weyl potential is related to P_δδ as <cit.>
P_ϕ_W(k,z) = [-3 (H_0/c)^2 (1+z) Σ(k,z)]^2P_δδ(k,z) ,
where Σ(k,z) is a phenomenological parameterisation to account for deviations from the standard lensing prediction. is the density parameter of matter and H_0 is the Hubble parameter – both at the present time. Here we assumed a standard background evolution of the matter component, i.e. ρ_ m(z)=ρ_ m,0(1+z)^3.
We can, therefore, use the recipe of <ref> with H, r and P_δδ provided by a Boltzmann solver, but with the new window functions <cit.>
W_i^ G(k,z) = 1/c b_i(k,z) n_i(z) H(z) ,
W_i^ L(k,z) = 3/2 (H_0/c)^2(1+z) r(z) Σ(k,z)
×∫_z^z_ max dz' n_i(z') [r(z')-r(z)]/r(z') + W^ IA_i(k,z) ,
where n_i(z) is the normalised galaxy number-density distribution in a tomographic redshift bin i such that ∫_z_ min^z_ max n_i(z) dz = 1, and W^ IA_i(k,z) encodes the contribution of intrinsic alignments (IA) to the WL power spectrum. We follow in assuming an effective scale-independent galaxy bias, constant within each redshift bin, and its values b_i are introduced as nuisance parameters in our analysis, with their fiducial values determined by b_i = √(1+z̅_i), where z̅_i is the mean redshift of each redshift bin.
The IA contribution is computed following the nonlinear alignment model with a redshift dependent amplitude ,
in which
W^ IA_i(k,z)=-𝒜_ IA 𝒞_ IA ℱ_ IA(z)/[δ(k,z) / δ(k,0)] n_i(z) H(z)/c ,
where
ℱ_ IA(z)=(1+z)^η_ IA.
The parameters 𝒜_ IA and η_ IA are the nuisance parameters of the model, and 𝒞_ IA is a constant accounting for dimensional units. The galaxy distribution is binned into 10 equipopulated redshift bins with an overall distribution following
n(z)∝(z/z_0)^2 exp[-(z/z_0)^3/2] ,
with z_0=0.9/√(2) and the normalisation set by the requirement that the surface density of galaxies is n̅_ g=30 arcmin^-2 .
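As a concrete illustration, the edges of the 10 equipopulated bins can be obtained from this n(z) with a few lines of Python; the sketch below uses only the quoted z_0 and omits the convolution with the photometric-redshift error model adopted in the reference analysis.

import numpy as np
from scipy.integrate import cumulative_trapezoid

z0 = 0.9 / np.sqrt(2.0)
z = np.linspace(0.001, 4.0, 4000)
nz = (z / z0)**2 * np.exp(-(z / z0)**1.5)        # unnormalised n(z)

cdf = cumulative_trapezoid(nz, z, initial=0.0)
cdf /= cdf[-1]                                    # normalise the cumulative distribution

# Equipopulated bin edges correspond to equal steps in the cumulative distribution.
edges = np.interp(np.linspace(0.0, 1.0, 11), cdf, z)
print(np.round(edges, 3))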
Changes in the theory of gravity impact the IA contribution, introducing a scale dependence through the modified perturbations' growth. This is explicitly taken into account in <ref> through the matter perturbation δ(k,z), while the modifications on the clustering of matter in the case are accounted for in the new P_δδ(k_ℓ,z).
Finally, we present the likelihood that we will use. We follow the approach presented in Euclid:2023pxu. We first construct an (N^G+N^L)× (N^G+N^L) angular power spectrum matrix for each multipole, where the different N correspond to the number of redshift bins for each probe (WL and ):
C_ℓ = [ [ C^LL_ij(ℓ) C^GL_ij(ℓ); C^LG_ij(ℓ) C^GG_ij(ℓ) ]],
where lower-case Latin indexes i,… run over all tomographic bins. Similarly, the noise contributions can also be condensed into one noise matrix N:
N_ℓ = [ [ N^LL_ij(ℓ) N^GL_ij(ℓ); N^LG_ij(ℓ) N^GG_ij(ℓ) ]],
where the noise terms N_ij^AB(ℓ) are given by
N_ij^ LL(ℓ) = δ_ij^ K/n̅_iσ_ϵ^2 ,
N_ij^ GG(ℓ) = δ_ij^ K/n̅_i ,
N_ij^ GL(ℓ) = 0 ,
where σ_ϵ^2=0.3^2 is the variance of observed ellipticities. We can further define Ĉ_ℓ ≡ C_ℓ + N_ℓ. This is the covariance of the spherical multipole moments a_ℓ m.
In the Gaussian approximation, it can be shown that the description in , using the covariance of the angular power spectra, is equivalent to the description found in Euclid:2023pxu, using the covariance of the a_ℓ m. The likelihood then can be expressed in terms of the observed covariance
Ĉ^obs_ij(ℓ) = 1/2 ℓ+1 ∑_m=-ℓ^ℓ(a_ℓ m)_i (a_ℓ m)^*_j ,
and the theoretically predicted one Ĉ^th_ij(ℓ) as
χ^2 = f^sky ∑_ℓ = ℓ_min^ℓ_max (2ℓ+1) [d^mix/d^th + ln(d^th/d^obs) - N] ,
where the determinants d are defined as
d^th(ℓ) = det[ Ĉ^th_ij(ℓ) ] ,
d^obs(ℓ) = det[ Ĉ^obs_ij(ℓ) ] ,
d^mix(ℓ) = ∑_k^N det[ { Ĉ^obs_ij(ℓ) for k = j; Ĉ^th_ij(ℓ) for k ≠ j } ] .
Here N is the number of bins and thus either (N^G+N^L) for multipoles where we include the cross correlation, or the respective N for multipoles where we treat the two probes separately. The additional factor f^ sky comes from an approximation to account for having less available independent ℓ modes due to partial sky coverage.
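The structure of this likelihood is illustrated by the toy Python snippet below, which evaluates the χ^2 contribution of a single multipole using random positive-definite matrices in place of Ĉ^th and Ĉ^obs; the sky fraction is an assumed value for illustration only, and d^mix is built by swapping one column of the theory matrix at a time, as in the definition above.

import numpy as np

rng = np.random.default_rng(0)
N = 4                                     # toy number of tomographic bins

def random_cov(n):
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)        # positive-definite matrix

C_th, C_obs = random_cov(N), random_cov(N)

def d_mix(C_th, C_obs):
    total = 0.0
    for k in range(C_th.shape[0]):
        M = C_th.copy()
        M[:, k] = C_obs[:, k]             # k-th column taken from the observed covariance
        total += np.linalg.det(M)
    return total

def chi2_single_ell(ell, C_th, C_obs, f_sky=0.36):   # assumed sky fraction
    d_th, d_obs = np.linalg.det(C_th), np.linalg.det(C_obs)
    return f_sky * (2 * ell + 1) * (d_mix(C_th, C_obs) / d_th
                                    + np.log(d_th / d_obs) - C_th.shape[0])

print(chi2_single_ell(100, C_th, C_obs))
# Sanity check: the chi^2 vanishes when the theory matches the observation.
print(np.isclose(chi2_single_ell(100, C_obs, C_obs), 0.0))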
In this paper, we set the observed covariance Ĉ^ obs to the theoretically predicted one computed at the fiducial cosmology.
Following and Euclid:2023pxu, we do not include super-sample covariance (SSC) in this work. The SSC impact was shown to be non-negligible and will need to be included in future analyses as shown in Euclid:2023ove.
We consider two different scenarios: an optimistic and a pessimistic case. In the optimistic case, we consider ℓ_ max=5000 for WL, and ℓ_ max=3000 for and . Instead, in the pessimistic scenario, we consider ℓ_ max=1500 for WL, and ℓ_ max=750 for and .
In the smallest photometric redshift bin, the galaxy number density distribution n(z) peaks at around z=0.25. Under the Limber approximation for our fiducial cosmology, the maximum values of k evaluated in the power spectrum for the pessimistic and optimistic scenarios for are k_ max=[0.7, 2.9] h Mpc^-1, respectively, while for WL, the maximum wavenumbers probed are k_ max=[1.4, 4.8] h Mpc^-1 at the peak redshift z=0.25 <cit.>. Here h denotes the dimensionless Hubble parameter, h ≡ H_0 / (100 km s^-1 Mpc^-1). For smaller values of z, the values of k at a given ℓ increase, but the window functions in <ref> suppress the power spectrum, and we set it to zero beyond a fixed k_ max=30 h Mpc^-1.
We list the specific choices of scales and settings used for each observable in <Ref>.
Although the currently foreseen specification of the Euclid Wide Survey differs from
that assumed in , for example in terms of the survey area, we use their results to allow for
comparison with earlier forecasts.
§ IMPLEMENTATION
In this section, we describe the implementation of the likelihood discussed in <ref> in the code developed in Euclid:2023pxu, in the Hu–Sawicki f(R) gravity model.
§.§ Hu–Sawicki f(R) gravity
Modified gravity f(R) models <cit.> are constructed by promoting the Ricci scalar R in the Einstein–Hilbert action to a generic function of R, R→ R + f(R), i.e.
S = c^4/16π G∫ d^4 x √(-g) [R+f(R)] + S_ m[g_μν, Ψ_ m] ,
where g_μν is the metric tensor, g its determinant and S_ m represents the matter sector with its matter fields Ψ_ m. For further discussions we refer to Sotiriou_2010,Clifton_2012 and <cit.>.
How this modification changes gravity is more easily seen by formulating the theory in the so-called Einstein frame, performing a conformal transformation g_μν = g̃_μνA^2(ϕ), where A(ϕ) = √(1 + f_R) = e^ϕ/√(6), to obtain
S = c^4/16π G∫ d^4 x √(-g̃)[R̃ + 1/2(∂ϕ)^2 - V(ϕ)]
+ S_ m[A^2(ϕ) g̃_μν, Ψ_ m] ,
with the potential V = [f_R R - f(R)]/2(1+f_R)^2 and f_R ≡ d f(R)/ d R. This demonstrates that the theory reduces to standard GR together with an extra scalar field that is coupled to the matter sector, giving rise to a fifth-force. This fifth-force has a finite range λ_C and, within this range, it mediates a force whose strength, in the linear regime, corresponds to 1/3 of the usual gravitational force:
F_ fifth = 1/3 G Mm/r^2 (1 + r/λ_C) e^-r/λ_C .
This effectively changes G→ (4/3) G on small scales (r≪λ_C) while keeping the usual G on large scales. Such a large modification would be ruled out by observations if it were not for the fact that the theory possesses a screening mechanism <cit.> which hides the modifications in high-density regions. This screening mechanism effectively suppresses the fifth-force by a factor ∝ f_R / Φ_ N where Φ_ N is the Newtonian gravitational potential.
Not all f(R) models one can write down possess such a screening mechanism as this places some restrictions on their functional form <cit.>. One concrete example of a model that has all the right ingredients is the model proposed by Hu:2007nk which in the large-curvature limit is defined by
f(R) = - 6 H_0^2/c^2 + |f_R0| R̅_0^2/R ,
where
R̅_0 = 3 H_0^2/c^2(1+ 4 /) ,
is the Ricci scalar in the cosmological background and is the density parameter of dark energy at the present time. The first term in <ref> corresponds to a cosmological constant and the only free parameter is |f_R0|. In the limit |f_R0| → 0, we recover GR and the model. This parameter controls the range of the fifth-force and, in the cosmological background, we have λ_C≃ 32√(|f_R0|/10^-4) Mpc at the present time. Solar System constraints require |f_R0| ≲ 10^-6, cosmological constraints currently lie around 10^-6–10^-4 depending on the probe in question <cit.> while astrophysical constraints at the galaxy scale can be as tight as ≲ 10^-8 <cit.>, but see <cit.> for a recent note of caution on such galactic-scale constraints.
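These numbers can be put together in a few lines of Python, combining the background Compton wavelength λ_C ≃ 32 √(|f_R0|/10^-4) Mpc quoted above with the Yukawa factor of the fifth-force expression; the chosen separations and |f_R0| values are illustrative.

import numpy as np

def lambda_C(fR0):                      # present-day background Compton wavelength [Mpc]
    return 32.0 * np.sqrt(fR0 / 1e-4)

r = np.array([1.0, 10.0, 100.0])        # separations [Mpc]
for fR0 in [1e-4, 1e-5, 5e-6, 1e-6]:
    lc = lambda_C(fR0)
    # Yukawa suppression of the (unscreened) 1/3 force enhancement
    supp = (1.0 + r / lc) * np.exp(-r / lc)
    print(f"|f_R0|={fR0:.0e}: lambda_C = {lc:5.1f} Mpc, "
          f"suppression at r = 1, 10, 100 Mpc: {np.round(supp, 3)}")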
The energy density of the scalar field contributes in general to the expansion of the Universe; however, for viable models like the one we consider here, this contribution is tiny (of the order |f_R0| Ω_ DE, 0), apart from the constant part of the potential, which is indistinguishable from a cosmological constant. The background evolution of such models is therefore very close to . Since f(R) models have a conformal coupling, light deflection is only weakly affected, as follows:
Σ(z)=1/[1+f_R(z)] .
Since the maximum value of |f_R(z)| is given by |f_R0|, for the values of |f_R0| we consider in this paper, we can ignore this effect. Thus gravitational lensing is also not modified in the sense that the lensing potential is sourced by matter in the same way as in GR (though the underlying density field will of course be different). The main cosmological signatures of the model therefore come from having a fifth-force, acting only on small scales r ≲λ_C, in the process of structure formation. The main effect of the screening mechanism is that the prediction for the amount of clustering will generally be much smaller than what naive linear perturbation theory predicts.
§.§ Nonlinear modelling
We implement Ξ(k,z) defined as
Ξ(k,z) ≡ P_f(R)(k,z)/P_(k,z) ,
to obtain the nonlinear f(R) matter power spectrum. For the power spectrum P_(k,z), we use the `Takahashi' prescription <cit.> following Euclid:2023tqw.
It includes the minimum mass for massive neutrinos in P_(k,z), but ignores the effect of massive neutrinos on Ξ(k,z). This approximation was shown to be well justified for the minimum mass of neutrinos in <cit.> using data from <cit.>. We describe below four models for Ξ(k,z) used in this paper. We note that we can use any power spectrum prediction in our approach such as <cit.> and <cit.>.
The exercise we perform in this paper is a comparison of the different prescriptions for Ξ(k, z). Given this, the choice of the baseline nonlinear spectrum prescription does not matter, as it is common to all nonlinear models.
§.§.§ Fitting formula
A fitting formula for Ξ(k,z) was developed in <cit.> and describes the enhancement in the power spectrum compared to a nonlinear power spectrum as a function of the parameter |f_R0|. This fitting function has been calibrated using the <cit.> N-body simulations run by <cit.> and the N-body simulations <cit.> run by <cit.>. The main approximation is that the cosmological parameter dependence of Ξ(k,z) is ignored. In <cit.>, this assumption was checked using simulations with different σ_8, , as well as the mass of massive neutrinos. <cit.> also corrected the fitting formula to account for additional dependence on these parameters. In this paper, we do not include these corrections as the previous forecast paper <cit.> used the fitting formula without these corrections.
The fitting function has in total 54 parameters for the full scale, redshift and |f_R0| dependence.
This fitting formula is not defined outside the range 10^-7 < < 10^-4. The code is publicly available (https://github.com/HAWinther/FofrFittingFunction).
§.§.§ Halo model reaction
We further consider the halo model reaction approach of Cataneo:2018cic which combines the halo model and perturbation theory frameworks to model corrections coming from non-standard physics. The nonlinear power spectrum is given by
P_ NL(k,z) = ℛ(k,z) P^ pseudo_ NL(k,z) ,
where P^ pseudo_ NL(k,z) is called the pseudo-power spectrum and is defined as a nonlinear spectrum with initial conditions tuned such that the linear clustering at the target redshift z matches the beyond- case. This choice ensures the halo mass functions in the beyond- and pseudo-universes are similar, giving a smoother transition of the power spectrum over inter- and intra-halo scales. We model the pseudo-cosmology nonlinear power spectrum using <cit.> by supplying the code with the linear f(R) power spectrum.
The halo model reaction, ℛ(k,z), models all the corrections to the pseudo spectrum coming from nonlinear beyond- physics. We refer the reader to Cataneo:2018cic,Bose:2021mkz and <cit.> for the exact expressions for this term. The halo model reaction can be computed efficiently using the publicly available <cit.> code.
Despite being highly efficient, and having been used in previous real cosmic shear analyses <cit.>, it is still too computationally expensive for the number of tests we wish to perform. To accelerate our inference pipeline, we create a neural network emulator, using the package <cit.>, for the halo model reaction-based boost
Ξ(k,z) = ℛ(k,z) P_ HMCode2020^ pseudo(k,z)/P_(k,z) ,
where P_(k,z) and P_ HMCode2020^ pseudo(k,z) are calculated using <cit.> while ℛ(k,z) is calculated using .
We chose the to model the pseudo-power spectrum as it has been shown to have improved accuracy and does not show suppression of power with respect to (Ξ <1) at high redshifts, which is not expected.
To train the emulator, we follow the procedure of SpurioMancini:2023mpt but widen the parameter priors. We produce ∼10^5 boost predictions in the range k ∈ [0.01, 3] h Mpc^-1 and z ∈ [0,2.5]. We take cosmologies from the Latin hypercube given by the ranges in <ref> and <ref>, with the range of the massive neutrino density parameter today being ∈ [0.0, 0.00317]. Emulation of the boost speeds up the computation by 4 orders of magnitude, and we find an accuracy of our emulator similar to that found in SpurioMancini:2023mpt. The emulator is publicly available (https://github.com/nebblu/ReACT-emus). Finally, we note another small difference between our emulator and that of SpurioMancini:2023mpt. In this work, we assume P_ in <ref> with equal to the total of the true cosmology, whereas in SpurioMancini:2023mpt they assume = +, with and being the baryon and cold dark matter density parameters today, respectively. The emulators give the same output for = 0, which is what we assume in this work for Ξ.
§.§.§ FORGE
The emulator was introduced in Arnold:2021xtm. It is based on simulations for 50 combinations of |f_R0|, , σ_8^ and h with all other parameters fixed. We note that σ_8^ is the σ_8 we would obtain in a model with the same cosmological parameters and initial amplitude A_ s and not σ_8 in an f(R) gravity model. The emulator accuracy is better than 2.5 % around the centre of the explored parameter space, up to scales of k = 10 h Mpc^-1. f(R) simulations are performed by a modified version of the code, <cit.> that solves the nonlinear scalar field equation using a relaxation method. See for more details.
The emulation was made for the ratio between the power spectrum in f(R) and the prediction in . We note that this is different from Ξ(k,z) as the power spectrum in models in these simulations can have slight deviations from the prediction. To account for this effect, the authors provided the ratio of the power spectrum in a reference model to the prediction. This simulation uses =0.31315, h=0.6737, σ_8^=0.82172.
This can be used to obtain Ξ(k,z). However, this correction is provided only in this model. Thus, the assumption here is that this correction is independent of cosmological parameters. The latest version of the simulations has counterparts to the f(R) simulations using the same seed. These simulations were analysed in . Hence, it is in principle possible to directly emulate Ξ(k,z) from these simulations. However, as this is not publicly available, we opted for using the original emulator (https://bitbucket.org/arnoldcn/forge_emulator/src/master/) for this paper. We will use one of these simulations for validation in <ref>.
§.§.§ e-Mantis
We also consider the predictions given by the emulator presented in <cit.>, which can predict the ratio Ξ(k,z) between the nonlinear matter power spectrum in f(R) gravity and in .
The predictions are calibrated from a large suite of N-body simulations run with the code <cit.>, a modified gravity version of the Adaptive Mesh Refinement (AMR) N-body code <cit.>.
The simulation suite covers the {|f_R_0|, , σ_8^} parameter space with 110 cosmological models sampled from a Latin hypercube <cit.>.
The remaining parameters, h, n_ s and remain fixed, which means that their impact on Ξ(k,z) is ignored.
In saez-casares_2023, it was shown that the error made by this approximation is at the sub-percent level.
The quantity Ξ(k,z) is measured from pairs of f(R) and simulations, both run with the same initial conditions, which leads to a large cancellation of cosmic variance and numerical resolution errors.
The emulation is done through a Gaussian Process regression <cit.>.
The emulator can give predictions for redshifts z ∈[0, 2], wavenumbers k ∈[0.03, 10] h Mpc^-1 and cosmological parameters in the following range: |f_R_0| ∈[10^-7, 10^-4], ∈[0.2365, 0.3941] and σ_8^∈[0.6083, 1.0140].
The estimated accuracy of , including emulation errors and systematic errors in the training data, is better than 3 % for scales k≲ 7 h Mpc^-1 and across the whole parameter space, although in most cases the accuracy is of order 1 %. The code is publicly available (https://gitlab.obspm.fr/e-mantis/e-mantis).
§.§ implementation
Our implementation of the likelihood is based on the implementation described in Euclid:2023pxu. For a more detailed explanation of the numerical implementation, we refer to that work. For our purposes, we have modified this code to include our prediction for Ξ(k,z) to obtain the nonlinear power spectrum in the f(R) gravity model. As the different implementations have different ranges in cosmological parameters, wavenumbers, and redshifts, we have chosen an extrapolation scheme to unify the ranges. We note that the exact implementation does not affect the final results strongly. We checked that the contributions from high redshifts and high wavenumbers which used extrapolations were sub-dominant.
For the modified gravity parameter |f_R0|, we use flat priors in terms of to stay within the validity range of each model. For the largest scales beyond the range of the emulators, we set Ξ = 1. This is because the effect of the fifth-force has to vanish on these scales. For small scales, we do a power-law extrapolation. To obtain the spectral index for the extrapolation, we proceed as follows: we calculate Ξ on a grid close to the edges of the emulators; we then fit, for fixed z, a linear function in the log-log space onto this grid. This is done to average out any numerical noise at the edge. We do the same for high redshifts by fitting the power law for fixed wavenumbers. For regions of both high k and z, we do a constant extrapolation of the spectral index. To keep the extrapolated function Ξ(k, z) from going to non-physical values, we set a hard lower bound of Ξ = 1 and an upper bound of Ξ = 2. The ranges of the different emulators are found in <Ref>. We do a constant extrapolation in the parameters. This can be done because during a typical MCMC the majority of the suggested points are within the ranges of our emulators.
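A minimal sketch of the small-scale part of this extrapolation scheme is given below; the toy emulated boost, the number of edge points and the grid choices are illustrative assumptions, and the redshift direction and the constant extrapolation in the parameters are handled analogously in the full implementation.

import numpy as np

def extrapolate_boost(k_emul, xi_emul, k_out, n_edge=10):
    """Power-law extension of a boost Xi(k) beyond the last emulated wavenumber."""
    logk, logxi = np.log(k_emul[-n_edge:]), np.log(xi_emul[-n_edge:])
    slope, intercept = np.polyfit(logk, logxi, 1)    # linear fit in log-log space
    xi_out = np.where(
        k_out <= k_emul[-1],
        np.interp(k_out, k_emul, xi_emul),           # inside the emulator range
        np.exp(intercept + slope * np.log(k_out)),   # power law beyond the edge
    )
    return np.clip(xi_out, 1.0, 2.0)                 # hard bounds 1 <= Xi <= 2

# toy emulated boost defined on k in [0.01, 3] h/Mpc, extended out to 30 h/Mpc
k_emul = np.logspace(-2, np.log10(3.0), 200)
xi_emul = 1.0 + 0.2 * k_emul / (1.0 + k_emul)
k_out = np.logspace(-2, np.log10(30.0), 50)
print(extrapolate_boost(k_emul, xi_emul, k_out)[-5:])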
The validity range of cosmological parameters is summarised in <ref>.
We also add the effect of baryonic feedback in the form of an additional correction Ξ^BFM to our power spectrum prediction. The physics of the baryonic feedback effects is discussed in <ref>. We refer the reader to Schneider:2019xpf and Mead:2020vgs for further details. We obtain the correction from the emulator presented by Giri_2021. This emulator is only trained for redshifts below z=2 and wavenumbers k<12.5 h Mpc^-1. In this case, we use constant extrapolation for both k and z. We checked that this had little effect on our results. This is because the main probes are mostly sensitive to redshifts around z∼ 1. For these redshifts, the extrapolation only affects very high multipoles ℓ≳ 4500. Thus, the extrapolation has negligible effect on the angular power spectrum. We estimate the overall effect this has on our result to be at most at the percent level. In the absence of modified gravity, to obtain the nonlinear baryonic feedback power spectrum, we multiply the nonlinear power spectrum by Ξ^BFM. When adding the effect of modified gravity, we combine both boosts as
P_f(R)^BFM(k,z) = Ξ^BFM(k, z) Ξ(k,z) P_(k,z) .
We can make this approximation if both effects are independent. This was shown to be the case for small deviations from in f(R) gravity models considered in this paper using hydrodynamical simulations by Arnold:2019vpg and <cit.>.
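How the two multiplicative corrections are combined under this independence assumption can be sketched as follows; the three functions below are crude placeholders (not the actual baryonification and f(R) emulators) and only illustrate the order of operations.

import numpy as np

def P_lcdm(k, z):
    # placeholder nonlinear spectrum (supplied by a Boltzmann code plus halofit in practice)
    return 2.0e4 * k / (1.0 + (k / 0.02)**2.5) / (1.0 + z)**2

def boost_fR(k, z, fR0=5e-6):
    # placeholder f(R) enhancement: grows towards small scales, clipped to [1, 2]
    x = np.log10(np.clip(k, 1e-4, None)) + 2.0
    return np.clip(1.0 + 0.25 * np.sqrt(fR0 / 1e-5) / (1.0 + np.exp(-2.0 * x)), 1.0, 2.0)

def boost_baryon(k, z):
    # placeholder baryonic suppression at intermediate and small scales
    return 1.0 - 0.2 / (1.0 + (0.5 / np.clip(k, 1e-4, None))**2)

k, z = np.logspace(-3, 1, 5), 0.5
P_fR_bfm = boost_baryon(k, z) * boost_fR(k, z) * P_lcdm(k, z)   # Xi_BFM * Xi * P
print(np.round(P_fR_bfm, 2))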
Our final addition to the code is the inclusion of theoretical errors. For this, we have adjusted the prescription of Audren_2013. The theory and numerical implementation is further discussed in <ref>.
§ COMPARISON OF DIFFERENT NONLINEAR MODELS
In this section, we compare the four different predictions for the nonlinear dark matter power spectrum, in terms of the Ξ(k, z) defined in <ref>, that were introduced in <ref>. We start from a comparison of the matter power spectrum with N-body simulations. We then compare the angular power spectra in the reference cosmology. Finally, we perform MCMC simulations to compare errors and to investigate the bias due to differences in the prediction of the nonlinear matter power spectrum, using the settings defined in .
§.§ Comparison of predictions
§.§.§ Comparison with N-body simulations
<Ref> shows a comparison of the ratio of the power spectra between f(R) gravity and measured from N-body simulations with the theoretical predictions for Ξ.
These simulations use the same initial conditions and the ratio removes the cosmic variance and the effect of mass resolution. The left-hand side plot shows a comparison using the measurements from . This is based on the comparison project in Winther:2015wla. These simulations were run in a cosmology with = 0.269, h = 0.704, n_s = 0.966 and σ_8 = 0.801. The simulations have N_p = 512^3 particles of mass M_ p≃ 8.756 × 10^9 h^-1 M_⊙ in a box of size B = 250 h^-1 Mpc and start at redshift z = 49. We picked a model called F5 with |f_R0| = 10^-5 and showed the result at z=0.667 that is presented in . In the comparison, we included the measurements from and , as is based on , while is based on . The prediction of agrees with very well. On the other hand, the prediction deviates from as well as . We note that the prediction is corrected using the simulation (Node 0) in the simulation suite with =0.31315, h=0.6737, σ_8^=0.82172 to obtain Ξ. We find that if we use =0.31315 and h=0.6737 in the prediction, the agreement with is much better.
To further investigate this issue with , we use the measurement of the power spectrum in one of the nodes in the simulation suite (Node 13) run by that is closest to the reference cosmology that we will use in this paper with non-zero |f_R0|. This simulation has the following cosmological parameters: = 0.34671, h = 0.70056 and σ_8 = 0.78191, and we show a comparison at z=0.652. Both and f(R) simulations with |f_R0| = 10^-4.90056 are available so that we can measure Ξ directly. We note that the pipeline developed in measures the power spectrum only up to k =3 h Mpc^-1. In this case, both and agree with within 1 % up to k =3 h Mpc^-1. The fitting formula and agree with within 1 % up to k =1 h Mpc^-1. Given this result, the large discrepancy between and is likely to be caused by emulation errors as well as by the calibration using the power spectrum predictions.
§.§.§ Comparison in the reference cosmology
In this paper, we consider the model called HS6 in Euclid:2023tqw, which has the following parameters
Θ_ fid ={, , h, n_ s, σ_8, } ,
HS6:Θ_ fid ={ 0.32, 0.05, 0.67, 0.96, 0.853, -5.301} .
The cosmological parameters are the same ones adopted in .
Euclid:2023tqw showed that this value of |f_R0| = 5 × 10^-6 can be well constrained by the photometric probes. Also the range of |f_R0| covered by the four models is wider than the errors predicted in Euclid:2023tqw. Our fiducial cosmology includes massive neutrinos with a total mass of ∑ m_ν=0.06 eV, but we keep ∑ m_ν fixed in the following analysis.
<Ref> shows a comparison of the power spectrum and the angular power spectrum for WL, and their cross-correlation . In these plots, we show the ratio to the prediction and error bars from the diagonal part of the covariance matrix. For the power spectrum comparison at z=0.5, we see that and agree best. This is not surprising as these two predictions are based on N-body simulations run by the same code . On the other hand, overestimates Ξ at k= 0.1 h Mpc^-1. This is similar to the deviation we find in the comparison with the N-body simulation from , although the deviation is smaller, at the 2 % level.
This is likely due to the fact that in HS6 is closer to in the fiducial simulation (=0.31315). On the other hand, underestimates Ξ at k < 0.1 h Mpc^-1 and even predicts Ξ <1. As we discussed in the previous section, we enforce Ξ≥ 1.
§.§ Forecasting errors and biases
§.§.§ Forecasting errors for WL
We first compare errors obtained by running MCMC using the synthetic data created by one of the four nonlinear models and fitting it by the same model. In this case, by definition, we recover the input parameters that were used to create the synthetic data. Our interests are constraints on the |f_R0| parameter and cosmological parameters. We first consider the WL-only case. In this case, we impose a tight Gaussian prior on the spectral index, n_ s, taken from the Planck results <cit.> and on the baryon density parameter, using Big Bang nucleosynthesis constraints <cit.>:
n_ s = 0.96 ± 0.004, h^2 = 0.022445 ± 0.00036 ,
as we do not expect to obtain strong constraints on these parameters from WL alone. The parameters that are used in the MCMC runs are summarised in <Ref>, including their fiducial values and prior ranges. As a convergence criterion, we use a Gelman–Rubin <cit.> value of R-1 < 0.01 for each individual sampling parameter using . For post-processing chains, we use
<cit.>.
<Ref> shows the 2D contours of the constraints on parameters,
and <Ref>
and <Ref>
summarise constraints on for the optimistic and pessimistic settings. The constraints on cosmological parameters are consistent among the four different nonlinear models, while we see some notable differences in the constraints on |f_R0|. In the case of the optimistic setting, gives the tightest constraints, which is also closer to Gaussian. The fitting formula agrees with for small but has a weaker constraint for larger . Constraints from agree with the fitting formula for large , but give a weaker constraint for small . The degeneracy between , h^2 and ln(10^10 A_ s) also presents some notable differences. The degeneracy for larger is different for the fitting formula when compared with and . This could be attributed to the fact that the fitting formula does not include any cosmological parameter dependence in the prediction for Ξ(k,z). We observe similar agreements and disagreements for the pessimistic setting, but the agreement among , and the fitting formula is better, particularly for small . gives a weaker constraint on , but the constraints on cosmological parameters are consistent among the four different models. This is due to the weak cosmology dependence of Ξ(k,z), as discussed in <cit.>, and the fact that the constraint on cosmological parameters is coming from the power spectrum, which is common in all these four models.
We compare the constraints from the MCMC analysis with the Fisher Matrix forecast and show this in <ref>. The pipeline can be used as a Fisher Matrix forecast tool <cit.>, which was shown to agree very well with previous Fisher Matrix forecasts given by . We note that in Euclid:2023tqw, no prior was imposed on n_ s and h^2, thus no direct comparison is possible. Instead, we use as a Fisher Matrix forecast tool and compare the result with the MCMC analysis to validate the Fisher Matrix forecast. To be consistent with Euclid:2023tqw, we use the fitting formula as the nonlinear model. The 1 σ error is very consistent: the Fisher Matrix forecast gives 0.111 while the MCMC analysis gives 0.116. The constraint from MCMC is non-Gaussian and the 1D posterior has a slightly longer tail for large . The constraints on cosmological parameters agree very well between the Fisher Matrix forecast and the MCMC result.
§.§.§ Assessing biases for WL
Next, we create the synthetic data using and we fit it by different nonlinear models to assess the biases in the recovered parameters due to the difference in the nonlinear modelling. We selected to create the data because it has a relatively narrow prior range for |f_R0| and we encountered a problem with that range when using as the model to fit when including baryonic effects. We should note that
has a larger discrepancy from other nonlinear models. As we discussed before, this could be attributed to emulation errors and calibration with . In particular, this choice is disadvantageous to as the difference of the prediction for Ξ from is the largest.
Thus, the estimation of bias in recovered parameters presented in this section is conservative, particularly for
<Ref> shows the 2D contours of the constraints on parameters,
and <Ref>
and <Ref>
summarise constraints on for the optimistic and pessimistic settings.
We first start from the optimistic setting. Since is valid only up to
k = 3 h Mpc^-1, we will not include in this case.
The fitting formula and recover the input parameters within 1σ. In the case of , cosmological parameters are well recovered, but there is a slight bias in the recovered . On the other hand, a slight bias appears in h in the case of the fitting formula. This may be attributed to the fact that the fitting formula does not include cosmological parameter dependence in the prediction of Ξ. To quantify the bias, we define a 1D bias as
B_ 1D= (μ - μ_)/σ_ ,
where μ and σ are the mean and 1 σ error computed from the 1D marginalised posteriors. If μ > μ_ (μ < μ_), we use the upper (lower) 68.3% confidence interval to obtain σ_.
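In practice, B_ 1D can be evaluated directly from the chains; the short sketch below uses a toy Gaussian chain in place of the real posterior samples, together with the asymmetric 68.3% interval described above.

import numpy as np

def bias_1d(samples, fiducial):
    mu = np.mean(samples)
    lo, hi = np.percentile(samples, [15.85, 84.15])    # central 68.3% interval
    sigma = (hi - mu) if mu > fiducial else (mu - lo)  # upper or lower error
    return (mu - fiducial) / sigma

rng = np.random.default_rng(1)
chain = rng.normal(-5.25, 0.12, size=100_000)          # toy posterior samples
print(round(bias_1d(chain, fiducial=-5.301), 3))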
The 1D bias for is B_ 1D=0.273 and 0.602 for the fitting formula and , respectively.
We see a similar result in the pessimistic setting for and the fitting formula. In this case, the input is well within 1σ although h is again slightly biased. The 1D bias for is B_ 1D=0.441 and 0.518 for the fitting formula and , respectively while
the 1D bias for h is 0.988 and 0.812 for the fitting formula and , respectively.
On the other hand, we find a bias in the recovered when is used. As mentioned above, we should bear in mind that the choice of as a fiducial is disadvantageous for . In the case of , the difference from leads to biases also in the cosmological parameters leading to larger h^2 and smaller
ln(10^10 A_ s) and h. This is due to the different k dependence of Ξ between and and this is compensated by adjusting cosmological parameters as well as leading to a stronger bias. The 1D bias for reaches B_ 1D=3.11. We will discuss how to mitigate this bias by correcting the prediction for the fiducial cosmology and including theoretical errors in <ref>.
Since the prediction deviates from the other three models, we also performed the same analysis using as fiducial data for the pessimistic setting. We show the results in Appendix A. Qualitatively, we obtain consistent results. As we can see from <ref>, the 2D contours overlap in the same way as in the case where FORGE is used as the fiducial, while the mean of obtained by is closer to the input value. However, we still find that the 1D bias is at the 3 σ level due to the smaller errors of compared with .
§.§.§ 3x2pt analysis
We consider the constraints from statistics by adding and its cross-correlation to WL. In this paper, we use a simple model of the scale-independent linear bias as in . This assumption needs to be reexamined in f(R) gravity as the scale-dependent growth will lead to a scale-dependent bias. However, the linear bias assumption also needs to be relaxed even in and it is beyond the scope of this paper to implement a more complete bias description. For this reason, we consider the pessimistic case only and focus our attention on a comparison with the Fisher Matrix forecast and check if the bias when is used to create the data becomes worse with the increasing statistical power. We also do not include as we already observe a significant bias with WL only.
We have 10 scale-invariant bias parameters, one for each redshift bin. For the analysis, we vary n_ s, but we still impose a tight Gaussian prior on h^2
as it is unlikely to get a tighter constraint than the Big Bang nucleosynthesis constraints.
The parameters used in addition to the WL analysis are listed in <Ref>.
We did not change the fiducial bias parameters from as there is no prediction of the linear bias in f(R) gravity. The fiducial bias adopted in will need to be improved even in . Our prime focus is the study of the effect of nonlinear models and we vary the linear bias in the MCMC analysis resulting in different constraints depending on the nonlinear model used. The observed covariance is built from synthetic data. Thus the effect of f(R) gravity is included in the covariance. We independently checked that the constraints on parameters do not change by using the covariance. Thus we expect that the effect of f(R) on galaxy bias has also negligible effects on the covariance.
The left-hand side of <ref> shows the 2D contours of the constraints on parameters, and <Ref>
summarises constraints on where the same nonlinear model is used for the data and the model.
The right-hand side of <ref> shows the 2D contours of the constraints on parameters, and <Ref>
summarises constraints on as well as 1D bias for when is used to create the data.
In the case where the same nonlinear model is used for the data and the model, errors are consistent between the fitting formula and , although we still observe the same longer tail for larger for the fitting formula as we observe for WL. gives slightly tighter constraints on . Constraints on cosmological parameters are very consistent among the three different nonlinear models. The Fisher Matrix forecast
using the fitting formula is also very consistent with the MCMC results. We note that errors from analysis in the pessimistic setting are comparable to or better than WL alone in the optimistic setting. This demonstrates the strength of the analysis although we should bear in mind the limitation of the bias model used in this analysis.
We find that the increased statistical power does not degrade the agreement between and .
When is used to create the data, the input parameters are well recovered by , although h is slightly biased as in the WL-only case. The 1D bias for is given by 0.150. In the case of the fitting formula, the bias in cosmological parameters becomes worse compared with WL. This could be attributed to the fact that the fitting formula does not take into account the cosmological parameter dependence of Ξ as mentioned before. The 1D bias for also becomes slightly larger, with a value of B_ 1D =0.667.
§ BARYONIC EFFECTS
§.§ Adding baryonic effects using
In this section, we study the impact of baryonic effects on the constraints on the f(R) parameter. We use the seven-parameter emulator of baryonic effects called <cit.> and we assume that the baryonic effects and the modified gravity effects can be treated independently as discussed in <ref> <cit.>. The parameters govern the gas profiles and stellar abundances in haloes. It is not our intention to study in detail the baryonic effects. We use the default values for the baryonic parameters provided with and use the full prior range. We choose not to include any redshift dependence. This is because we need to impose a tight prior range for these parameters at z=0 to keep them within the prior range and we may miss important degeneracies between baryonic parameters and .
We found that two baryonic parameters, M_ c^ Giri et al. and θ_ ej, have the strongest effects on the matter power spectrum and they are well constrained in the presence of the f(R) parameter. Thus, we only vary these two parameters.
In , the gas profile is modelled as a cored double power law. M_ c^ Giri et al. controls the dark matter halo mass dependence of the logarithmic slope of the first cored power law. It allows the profile to become less steep than the Navarro_96 one for M<M_ c^ Giri et al.. The parameter θ_ ej determines the scale radius (with respect to the virial radius) of the second cored power law. <Ref> summarises the fiducial values and priors for the baryon parameters. We will use the dimensionless parameter M_ c that is related to the one defined in <cit.> via M_ c ≡ M_ c^ Giri et al. / M_⊙, where M_⊙ is the solar mass.
§.§ Effects of adding baryons
<Ref> shows the effect of changing , h and two baryon parameters on the ratio of the WL angular power spectrum to the one. Baryonic effects introduce scale-dependent modifications to the angular power spectrum that are similar to the effect of f(R) at ℓ > 100. On the other hand, h changes the overall amplitude of the angular power spectrum as it changes . We will see that the interplay between these parameters leads to interesting degeneracies.
<Ref> shows the 2D contours of the WL constraints on parameters for the optimistic settings with baryons and <Ref> summarises constraints on .
In the case of the optimistic setting, when is used as the data and the model, the two baryonic parameters degrade the constraints on : the 1σ error becomes 0.281 while it was 0.088 without baryons. However, it is important to stress that and the two baryonic parameters are still constrained well within the prior range of these parameters. It means that the impact of modified gravity on the high-k tail of the matter power spectrum is not washed out by the baryonic feedback, and we can still distinguish between the effect of f(R) and baryons.
Nevertheless, if is used as the data, the difference between and in the matter power spectrum can be absorbed by the shift in M_ c and h, and the constraint on is shifted along the degeneracy direction between and M_ c as well as h. This can be understood from <ref>. The difference of the scale dependence between and at ℓ >100 can be adjusted by decreasing and decreasing M_ c. This leads to a lower amplitude that can be adjusted by decreasing h. Due to the combination of these effects, the best-fit becomes smaller. We note that
does not include the h dependence in the prediction for Ξ(k ,z) explicitly, but it depends on h implicitly through σ_8^. However, this dependence of h on Ξ(k,z) is very weak. To compute the power spectrum in f(R) we use
P_f(R)(k,z) = Ξ(k,z) P_(k,z).
The emulator provides us with the first factor, Ξ(k,z), and a emulator provides the last factor. The h dependence comes from the power spectrum not from Ξ(k,z).
As a result, we obtain a 95.5% upper limit of as < -5.477, which is incompatible with the input value of = -5.301. This is partly due to prior volume effects shifting the contour to lower values of , caused by the strong degeneracy between and M_ c as well as h.
To confirm this, we obtain a profile likelihood for by fixing and finding a minimum χ^2 by varying other parameters. The profile likelihood was obtained by
<cit.>. This is shown in <ref>.
The global best-fit value for is obtained as =-5.765, which is larger than the mean of the 1D marginalised posterior =-6.087. <Ref> also shows the prediction of with the best-fit values, which agrees well with . The Δχ^2 for the input value of = -5.301 is found as Δχ^2 = 2.657, thus it is still within 2 σ. We also observe that the Δχ^2 curve becomes flat for smaller values of .
We also tested the fitting formula as a model to fit to the data generated by . In this case, the posterior is highly non-Gaussian and the chains did not converge well.
This implies that we can break the degeneracy between f(R) gravity and baryonic effects, but we need to have an accurate nonlinear model for the f(R) gravity model. Otherwise, we could obtain a significantly biased result in terms of the 1D marginalised constraint. We note that if we have a physical prior on M_ c from baryonic physics, we could break the degeneracy between and M_ c. For example, in the case where was used as the data and they were modelled by , the mean of M_ c is significantly shifted to a smaller value as we can see in <ref>. If we had a prior on M_ c to prevent this, this bias could be avoided.
To test this idea, we also ran an analysis imposing a prior on M_ c and show the result in <ref>. We used a Gaussian prior with a width of 0.2, which was estimated from the weak-lensing-informed gas and stellar mass fraction measurements of massive haloes by <cit.>. This prior is consistent with the error on M_ c that we obtained by using e-Mantis both for the data and the model. We observe that the bias is relaxed slightly; however, the degeneracy between M_ c and is quite strong and this prior is not enough to alleviate the bias in the marginalised constraints. An improved prior from external data is needed to fully break this degeneracy.
We assumed that the effect of f(R) gravity and baryonic effect can be treated independently.
Recently, the coupling between baryonic feedback and cosmology has been studied showing that the combined effect of baryonic and non-baryonic suppression mechanisms is greater than the sum of its parts for decaying dark matter <cit.>. The effect of this coupling on the degeneracy needs to be studied in the future.
Next, we consider the pessimistic setup. <Ref> shows the 2D contours of the WL constraints on parameters for the pessimistic settings with baryons and <Ref> summarises constraints on . In this case, due to the weaker constraining power, the 99.7% confidence level upper bound on reaches the prior boundary of =-7. Also, the 95.5% confidence level upper bound of θ_ ej is bounded by the prior. If the data are generated by , strong degeneracies appear among and the two baryonic parameters. The input is consistent within 1σ for the fitting formula and , although the prior bound =-7 is reached at the 95.5% confidence level lower bound. On the other hand, for , the input value is outside the 95.5% confidence level upper bound. As we already observed without baryons, h is biased to a lower value.
We also checked the case where the fiducial data are generated by . As we can see in <ref>, unlike the case without baryons, the agreements between , and are improved significantly compared with the case where is used for the fiducial data. In particular, for , the mean of is now consistent with the input value within 1σ.
§.§ Constraints on |f_R0| with the data
Based on these results, we consider the pessimistic setting and use to obtain an upper limit of in the presence of baryonic effects to be conservative.
<Ref> shows the 2D contours of the WL constraints on parameters for the pessimistic settings where is used to create the data.
We find that the recovered cosmological parameters are consistent with the input values. The means of the 1D marginalised constraints on the baryonic parameters are slightly biased due to the strong degeneracy between them, but they are still consistent within 1 σ. We obtain the 95.5% confidence level upper bound on as
< - 5.21 .
We note that this bound depends on the prior =-7. To obtain the prior-independent bound, we follow the approach presented in Gordon:2007xm, Piga:2022mge and <cit.>.
The ratio of the marginalised posterior and prior is given by
b(x | d, p) = 𝒫(x | d, p)/p(x) ,
where x is the parameter we are interested in (i.e. ), d is the data, p is the prior, and 𝒫 is the posterior. The Bayes factor B(x_1, x_2), which quantifies the support for the models with x=x_1 over the models with x=x_2, is given by
B(x_1, x_2) = b(x_1 | d, p) /b(x_2 | d, p)
= ℒ(d | x_1)/ℒ(d| x_2),
where ℒ(d| x) is the marginalised likelihood of the data for parameter x.
For our purpose, we choose x = and fix x_1 to be the lower bound of the prior, x_1 = -7.
Following Gordon:2007xm, we then use B(x_1, x_2) =2.5 to find x_2 so that the model with x=x_1 is favoured compared to the model with x=x_2 at 95.5% confidence level. This gives the 95.5% confidence interval of that does not depend on the prior. We note that this method applies only to a ℒ(d| x) that is a monotonic function of x. Employing this technique, we obtain
< - 5.58 .
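The prior-independent procedure described above can be sketched as follows for chain samples of a single parameter; the kernel-density estimate, the assumed prior range and the toy chain are illustrative choices rather than the actual analysis setup.

import numpy as np
from scipy.stats import gaussian_kde

def prior_independent_bound(samples, x1=-7.0, prior_range=(-7.0, -4.0), B_target=2.5):
    # With a flat prior, b(x|d,p) = posterior/prior is proportional to the
    # marginalised likelihood, so Bayes factors reduce to posterior-density ratios.
    kde = gaussian_kde(samples)
    grid = np.linspace(prior_range[0], prior_range[1], 2000)
    dens = np.maximum(kde(grid), 1e-300)
    L1 = kde(np.atleast_1d(x1))[0]
    bayes_factor = L1 / dens                  # B(x1, x) as a function of x
    allowed = grid[bayes_factor < B_target]   # region not disfavoured at the chosen level
    return allowed.max() if allowed.size else prior_range[0]

rng = np.random.default_rng(2)
# toy one-sided chain piling up towards the prior edge at -7 (illustrative only)
chain = np.clip(-7.0 + np.abs(rng.normal(0.0, 0.7, size=200_000)), -7.0, -4.0)
print(round(prior_independent_bound(chain), 2))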
Finally, in <ref>, we present the constraints on and so that we can compare these with those in the literature <cit.>. It is not possible to make a direct comparison due to various differences in the settings. Nonetheless, our constraint is comparable to the one presented in Schneider:2019xpf [< -5.3], where a similar analysis was done by combining and the fitting formula. We also cannot make a direct comparison with Harnois-Deraps:2022bie, as baryonic effects are not included in their analysis, but again the constraint is comparable [< -5.24].
On the other hand, our constraint is much weaker than the one found in SpurioMancini:2023mpt that used with . We refer the readers to SpurioMancini:2023mpt for possible explanations. In our analysis, tended to prefer a lower value of when is used as data and this might be one of the reasons for this difference. Finally, a recent study by Tsedrik:2024cdi used similar settings to forecast cosmic shear constraints on model-independent parametrisations of modified gravity theories with scale-independent linear growth. They found that a much better understanding of baryonic feedback is needed in order to detect a screening transition.
§ THEORETICAL ERROR
Any theoretical prediction will in general come with an associated error and should be included in the likelihood if it is not subdominant to the statistical error. In this section, we discuss the implementation of theoretical errors in our pipeline. To be conservative, we will estimate the theoretical errors using and as these two give the most discrepant results. We will apply this method to
for the pessimistic case of WL to check if we can remove the parameter biases that we observe for the data created by at the expense of enlarging error bars.
§.§ Adding uncorrelated theoretical errors to the likelihood
The implementation presented in this work is based on the work of Audren_2013. We have adjusted the recipe to the full likelihood, although we only show the result for the WL analysis in this paper. Here we will discuss the idea behind the formulation. We calculate the angular power spectrum from the power spectrum using <ref>. If there is some uncertainty in the modelling of the nonlinear power spectrum, this propagates to the angular power spectrum. As <ref> is a linear functional of the nonlinear matter power spectrum, ℱ[P_δδ], we can propagate the error on the power spectrum, Δ P_δδ, and find
ℱ[P_δδ+Δ P_δδ] = ℱ[P_δδ] + ℱ[Δ P_δδ] ≡ C_ij^XY(ℓ) + E_ij^XY(ℓ) ,
where we have defined the angular power spectrum error E_ij^XY(ℓ). Following Audren_2013, we define the relative error on the power spectrum
Δ P_δδ(k,z) ≡ α(k,z) P_δδ(k,z) .
We take the most conservative approach and assume that the error is uncorrelated between different values of ℓ.
To account for this uncorrelated theoretical error, we add a new nuisance parameter ε_ℓ for each multipole and define the shifted covariance as
C^XY_ij(ℓ) ≡ Ĉ^XY_ij(ℓ) + ε_ℓ L^1/2 E^XY_ij(ℓ) .
The free parameters ε_ℓ have the following meaning: they quantify how many standard deviations the shifted covariance matrix is from the theoretically predicted one. We treat them as random Gaussian variables with zero mean and a standard deviation of one. The role of the normalisation factor L^1/2, where L=ℓ_max-ℓ_min+1, will become clear soon. The log-likelihood function becomes
χ̃^2 (ε_ℓ) = ∑_ℓ =ℓ_min^ℓ_max[ (2ℓ+1) f_ sky( d̃^ mix_ℓ(ε_ℓ)/d̃^ th_ℓ(ε_ℓ) + lnd̃^ th_ℓ(ε_ℓ)/d^ obs_ℓ-N) + ε_ℓ^2 ].
The quantities d̃^ th and d̃^ mix are constructed in the same way as d^ th and d^ mix
[see eq:ds1–eq:ds3] using the shifted covariance matrix C^XY_ij(ℓ). To include the theoretical error, we vary ε_ℓ and marginalise over them. To a very good approximation, this is equivalent to minimising the χ̃^2 with respect to ε_ℓ at the level of the likelihood. Thus, we define our new χ^2 as the minimum of the χ̃^2 with respect to ε_ℓ:
χ^2 ≡ min_ε_ℓ∈ℝ^L χ̃^2.
The normalisation factor L^1/2 can be explained as follows. If we were to measure Ĉ^obs(ℓ) = Ĉ^th(ℓ) + E(ℓ), the minimisation would find ε_ℓ = L^-1/2. The resulting χ^2 = ∑_ℓε_ℓ^2 = ∑_ℓ L^-1 =1 would match our expectation that a one-sigma theoretical error for each ℓ results in an increase of the χ^2 by one.
The main ingredient of this formulation is the relative error function α(k,z) defined in <ref>.
The construction of α(k,z) using and will be discussed in <Ref>.
§.§ Numerical implementation
The numerical computation of the theory error covariance E^XY_ij(ℓ) follows the prescription presented by Euclid:2023pxu. For the minimisation, we can use the fact that all the free ε_ℓ are independent of each other, and we can do the minimisation for each multipole separately. We use Newton's method to find the minimum. For this, we have to compute the first and second derivatives of the likelihood with respect to ε_ℓ. For any single multipole, we find
dχ̃^2/dε_ℓ = (2 ℓ+1) f_ sky [ ( (d̃^mix_ℓ)^' + (d̃^th_ℓ)^' )/d̃^th_ℓ - d̃^mix_ℓ (d̃^th_ℓ)^'/(d̃^th_ℓ)^2 ] + 2 ε_ℓ .
The derivatives of the determinants are computed using Jacobi's formula. This gives, for example
(d̃^th_ℓ)^' = det( Ĉ_ℓ^th) L^1/2 tr[ (Ĉ_ℓ^th)^-1 E_ℓ] ,
and a similar expression for (d̃^mix_ℓ)^'. We compute the inverse of the covariance numerically and obtain the second derivatives numerically from the first derivative by doing a double-sided three-point stencil. The minimisation would then need to be done for each multipole. For the pessimistic settings, this would correspond to a minimisation in a 1500-dimensional parameter space. To save time, we only do the minimisation on a logarithmically-spaced grid with 100 discrete values. The other values are obtained from an interpolating function. We checked that the effect of the interpolation does not change the results by more than 1%. The obtained ε_ℓ tend to vary continuously with ℓ, as they try to mimic the effects of changing other theory parameters to the observed power spectrum.
§.§ Application to
We first apply this method to the case where the data are created by and the parameter fitting is done by . Due to the significant bias in the recovered parameters for both |f_R0| and cosmological parameters, we find that the inclusion of theoretical errors is not enough to mitigate the bias. Therefore, we additionally correct the prediction of by for the fiducial cosmology as
Ξ_ = Ξ_( Ξ_/Ξ_)_ fiducial.
The constraints on cosmological parameters are shown in the left panel of <ref>.
In this case, as expected, we recover the input parameters in an unbiased way when the data are created by . We see that theoretical errors affect mainly and h. In order to check a non-trivial case, next we create the data with . The result is shown in the right panel of <ref>.
Without theoretical errors, the means of 1D marginalised constraints for , h^2 and ln(10^10 A_ s) are slightly biased compared with the input values. The inclusion of the theoretical errors largely resolves these biases not only by enlarging the error bars but also by making the means closer to the input values. We note that the inclusion of theoretical errors in this case is important to justify the rescaling of Ξ in <ref>. We also note that the theoretical error included here overestimates the errors significantly now that we corrected the prediction of by for the fiducial cosmology.
Finally, we study the impact of adding baryonic effects in the presence of theoretical errors. We again use as the data. The result is shown in <ref>. Due to the enlarged errors, constraints on parameters are not affected significantly by baryonic effects. Also, the means of the 1D marginalised constraint remain consistent with the input parameters. However, the inclusion of the theoretical errors changes the degeneracies between cosmological parameters, baryon parameters and . This leads to a tighter lower bound on compared with the case where the same data are fitted by itself. This reinforces our conclusion that the 1D marginalised constraint on is sensitive to degeneracies among parameters, and the difference in the theoretical predictions strongly affects the constraint.
The result shown here is just an illustration of the inclusion of theoretical errors and their effects on the parameter constraints. Our implementation is flexible and it can be applied to any theoretical error described by the relative error function α(k,z) defined in <ref>.
§ CONCLUSION
In this paper, we studied the effect of using different nonlinear predictions for the dark matter power spectrum on the parameter constraints in the Hu–Sawicki f(R) gravity model obtained from primary photometric probes. We implemented four different models in the pipeline to predict angular power spectra for weak lensing (WL), photometric galaxy clustering () and their cross-correlation (). Comparing with the N-body simulation data obtained in , we found that agreed very well with that was used to run simulations to construct the emulator, while had larger errors compared with the simulation. The agreement is better for one of the N-body simulations used to construct obtained in run by . This indicates that the difference between and is larger than the one in the baseline N-body simulations (i.e. and ) mainly due to the way was constructed. In the reference cosmology, gives a larger Ξ, the ratio between the power spectrum in f(R) and in , on all scales compared with and the fitting formula at the 2 % level. underestimates Ξ compared with more than and the fitting formula at intermediate k.
We used the fiducial value of |f_R0| = 5 × 10^-6 (= - 5.301) and ran MCMC in the fiducial cosmology defined in . For the fitting formula, the Fisher Matrix forecast and MCMC results generally agree well although the MCMC result is non-Gaussian. This is partly caused by a lack of cosmological parameter dependence of Ξ, which affects the degeneracy between and cosmological parameters for large . gives more Gaussian constraints with smaller errors. When is used to create the data, the 1D mean of is not strongly biased in the case of and the fitting formula and the 1D bias is at most 0.6 σ. Even for the analysis including all the probes and their cross-correlations, the 1D bias is 0.15 σ for in the case of .
The impact of baryonic physics on WL was studied by using a baryonification emulator . For the optimistic setting, the f(R) parameter and two main baryon parameters are well constrained despite the degeneracies among these parameters. However, the difference in the nonlinear dark matter prediction can be compensated by the adjustment of baryon parameters as well as cosmological parameters, and the 1D marginalised constraint on is biased. This bias can be avoided in the pessimistic setting at the expense of weaker constraints. For the pessimistic setting, using the synthetic data for WL, we obtained the prior-independent bound of < -5.6 using .
shows a large bias in as well as cosmological parameters when was used to create the data.
This is because the prediction of is furthest away from . We implemented a method to include uncorrelated theoretical errors proposed in Audren_2013 to address this issue. The method is based on the relative error function for the nonlinear dark matter power spectrum. We estimated this using the difference between and . We found that the inclusion of theoretical errors alone was not enough to mitigate the bias. We then corrected the prediction of with for the fiducial model. We applied this model to the data created by . We found that theoretical errors, in this case, helped reduce the bias not only by enlarging errors but also by making the means of 1D marginalised constraint closer to the input values. When we added baryonic effects with , errors were not significantly affected and the input values were still recovered. However, the lower bound on is tighter than the one obtained by applying itself. This reinforces our conclusion that the 1D marginalised constraint on is sensitive to the degeneracies among parameters.
Based on the result of this paper, we draw the following conclusions:
* It is important to check the agreement of different N-body codes that are used to create theoretical predictions. This is not only the code itself, but also accuracy settings. In , we found that the accuracy setting such as the refinement criteria in the adaptive mesh refinement method has a large effect on the power spectrum. With the controlled accuracy setting, it is possible to realise 1 % agreements between different N-body codes in terms of Ξ(k, z) in the Hu–Sawicki f(R) gravity model.
* We then need to check the accuracy of emulators. We found that suffers from larger emulation errors and this leads to a larger difference between and compared with the difference in their baseline N-body code and in some cosmologies. Improving the emulation technique will be able to make the agreement better.
* Including baryonic effects can worsen the bias in the 1D marginalised constraint on . This is because the difference in the nonlinear dark matter power spectrum prediction can be compensated by the adjustment of baryon parameters, and the best-fit values are biased. In addition, the degeneracy between baryon parameters, cosmological parameters and leads to a stronger volume effect. This bias can be avoided if there is a prior on baryon parameters from external data sets for example.
* To account for the uncertainty in the theoretical prediction for the nonlinear power spectrum, it is safer to include theoretical errors. In this paper, we used a conservative error estimation using and . An improvement in emulators will make the theoretical error smaller leading to better constraints. However, we still need to check whether or not the inclusion of baryons will change this conclusion.
The pipeline developed in this paper can be used to test the readiness of the nonlinear power spectrum prediction for the application to real data from in other extended cosmologies. For the Data Release 1 of and future data releases, we plan to improve the
emulator and make the pipeline ready for the data analysis. We also plan to extend the analysis to the models studied in Euclid:2023rjj.
Finally, we note that our forecasts have ignored observational systematics such as shear and redshift measurement biases. For Stage-IV surveys like , it is known that these need to be very well characterised in order to get accurate cosmological parameter estimates. This is a challenge that needs to be addressed for both ΛCDM and exotic cosmologies. A detailed study of this issue is beyond the scope of our paper, but we refer to EuclidSkyOverview for details. In summary, the mean galaxy redshifts within the bins need to be known with an accuracy better than ∼ 0.002(1+z) if errors in cosmological parameters are not to be degraded. On the other hand, <cit.> included a parameter that shifts the mean of the redshift distribution in each redshift bin and found that these parameters are not strongly degenerate with the f(R) parameter.
K. K. is supported by STFC grant ST/W001225/1.
B. B. is supported by a UK Research and Innovation Stephen Hawking Fellowship (EP/W005654/2).
A. P. is a UKRI Future Leaders Fellow [grant MR/X005399/1]. P. C. is supported by grant RF/ERE/221061.
M. C. acknowledges the financial support provided by the Alexander von Humboldt Foundation through the Humboldt Research Fellowship program, as well as support from the Max Planck Society and the Alexander von Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award endowed by the Federal Ministry of Education and Research.
F. P. acknowledges partial support from the INFN grant InDark and the Departments of Excellence grant L.232/2016 of the Italian Ministry of University and Research (MUR). FP also acknowledges the FCT project with ref. number PTDC/FIS-AST/0054/2021.
GR’s research was supported by an appointment to the NASA Postdoctoral Program administered by Oak Ridge Associated Universities under contract with NASA. GR was supported by JPL, which is run under contract by the California Institute of Technology for NASA (80NM0018D0004).
Numerical computations were done on the Sciama High Performance Compute (HPC) cluster which is supported by the ICG, SEPNet and the University of Portsmouth.
This work has made use of the Infinity Cluster hosted by Institut d'Astrophysique de Paris.
We acknowledge open libraries support <cit.>, <cit.>, <cit.>, and <cit.>.
For the purpose of open access, we have applied a Creative Commons Attribution (CC BY) licence to any
Author Accepted Manuscript version arising.
Supporting research data are available on reasonable request from the corresponding author.
§ FIDUCIAL DATA
In this Appendix, we show the result of the cases where we use as fiducial data for the pessimistic setting of WL with and without baryons. <Ref> and <ref> show the results without baryons while the <Ref> and <ref> show the results with baryons. We observe that the agreements of the three models are generally better as expected although without baryons, 1D bias is still at the 3σ level for due to smaller errors of . On the other hand, in the case with baryons, 1D bias is reduced to less than 1σ.
§ CONSTRUCTION OF THE RELATIVE ERROR
To construct the relative error function, α(k,z), we can use the fact that Ξ(k, z) is constructed as the ratio to the same nonlinear power spectrum calculated with the Halofit `Takahashi' prescription in all models. This means that any deviations of the power spectrum from N-body simulations is also absorbed in Ξ. We can thus write the relative error as
α(k,z) = Δ P_δδ(k,z)/ P_δδ(k,z) = ΔΞ(k,z)/Ξ(k,z) .
To construct Δ P_δδ we follow the basic idea that all predictions for Ξ are equally accurate in calculating the modified-gravity power spectrum. The true power spectrum is thus only known up to the spread of the predictions. To be conservative with our forecast, we choose to calculate the error from the difference between the predictions and the predictions, which is the largest among these predictions. This is shown in <ref>. The prescription underpredicts the power spectrum at intermediate scales and overpredicts slightly at smaller scales. This is most likely due to handling the one- and two-halo power spectra separately. predictions for the pseudo-cosmology non-linear power spectrum also contribute to these inaccuracies as we can see by comparing the left and right panels of Figure 4 in Cataneo:2018cic.
To simplify the exact difference between these two codes, we argue that the theoretical error should plateau at the maximum deviation. This is because at nonlinear scales, the modes of different wave numbers are no longer independent of each other. By plateauing, we essentially say that we can no longer be more precise in the computation of the power spectrum on smaller scales.
Finally, we construct α using only the difference of the predictions at a fiducial of |f_R0| = 5 × 10^-6 in order not to bias our results. Our final fit has the form
α(k,z) = A(z) (x^2+x)/(x^2+x+1) ,
A(z) = A_1/[exp((z-A_2)/A_3)+1]+A_4 ,
k_ p(z) = B_1 exp[tanh((z-B_2)/B_3)] ,
where we use x ≡ k / k_ p. We separately fit the amplitude function A(z) to the maximum deviation and the plateau wavenumber k_ p(z) to the wavenumber of maximum deviation. The best-fit values can be found in <Ref> and are shown as the shaded region in <ref>. We see that the zero line is within the 68.3% confidence bounds most of the time. This is in accordance with our construction that the two emulators should differ by the theoretical error.
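As an illustration, the fitted form of the relative error can be evaluated as in the short sketch below; the grouping of the fitting formulae follows the reconstruction above, and the parameter values to be passed in are placeholders rather than the best-fit values of the table.

import numpy as np

def A_of_z(z, A1, A2, A3, A4):
    # Amplitude of the relative error: a logistic step in redshift plus an offset.
    return A1 / (np.exp((z - A2) / A3) + 1.0) + A4

def kp_of_z(z, B1, B2, B3):
    # Plateau wavenumber, i.e. the scale at which alpha(k, z) saturates.
    return B1 * np.exp(np.tanh((z - B2) / B3))

def alpha(k, z, A_params, B_params):
    x = k / kp_of_z(z, *B_params)
    return A_of_z(z, *A_params) * (x**2 + x) / (x**2 + x + 1.0)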
|
http://arxiv.org/abs/2409.02437v1 | 20240904042714 | Fuzzy Logic Control for Indoor Navigation of Mobile Robots | [
"Akshay Kumar",
"Ashwin Sahasrabudhe",
"Sanjuksha Nirgude"
] | cs.RO | [
"cs.RO"
] |
USV-AUV Collaboration Framework for Underwater Tasks under Extreme Sea Conditions
Jingzehua Xu1^,+,
Guanwen Xie1^,+,
Xinqi Wang2,
Yiyuan Yang3,
Shuai Zhang4
1Tsinghua Shenzhen International Graduate School, Tsinghua University, China
2College of Information Science and Electronic Engineering, Zhejiang University, China
3Department of Computer Science, University of Oxford, United Kingdom
4Department of Data Science, New Jersey Institute of Technology, USA
Email: {xjzh23, xgw24}@mails.tsinghua.edu.cn, [email protected], [email protected], [email protected]
^+ These authors contributed equally to this work.
§ ABSTRACT
Autonomous mobile robots have many applications in indoor unstructured environment, wherein optimal movement of the robot is needed. The robot therefore needs to navigate in unknown and dynamic environments. This paper presents an implementation of fuzzy logic controller for navigation of mobile robot in an unknown dynamically cluttered environment. Fuzzy logic controller is used here as it is capable of making inferences even under uncertainties. It helps in rule generation and decision making process in order to reach the goal position under various situations. Sensor readings from the robot and the desired direction of motion are inputs to the fuzzy logic controllers and the acceleration of the respective wheels are the output of the controller. Hence, the mobile robot avoids obstacles and reaches the goal position.
Keywords: Fuzzy Logic Controller, Membership Functions, Takagi-Sugeno-Kang FIS, Centroid Defuzzification
§ INTRODUCTION
Autonomous navigation systems have distinct approaches to trajectory generation, path planning, control and required computation to execute the tasks for self-driving vehicles and mobile robotic platforms. Unlike self-driving vehicles, autonomous mobile robots for indoor as well as outdoor applications do not have a specific road-like path to maintain while moving ahead, thereby having the independence to plan and track any feasible and easy path to the target location while avoiding obstacles and satisfying other dynamic constraints.
Over the years, several control techniques have been deployed for efficient performance of these mobile robotic platforms. These techniques range from classical methods like PID control, trajectory control and position control to sophisticated methods like Model Predictive Control and Fuzzy Logic Controller <cit.>, <cit.>. PID Control technique is the easiest of all, but suffers from issues of tuning and robustness; the major deterrent to its use in any real-time high fidelity demanding problem, like mobile robot platforms. Other techniques like trajectory and position control work in environments without disturbances and/or unprecedented possibilities. However, Receding Horizon-Model Predictive Control shows promising results but it is mathematically quite expensive, making the implementation tough.
To accommodate such limitations, we used the Fuzzy Logic Controller <cit.>, <cit.> for navigation of a mobile robot platform of TurtleBot2 in Gazebo simulation environment. A fuzzy control system runs on fuzzy logic (no hard decisions) by considering analog inputs as continuous logical variables ranging between 0 and 1 instead of strictly 0 or strictly 1. It essentially means asserting conditions to be "partially true/false" instead of "true/false".
§.§ Literature Survey
Fuzzy logic controllers have been used many times for the control of mobile robots. In <cit.>, both navigation and obstacle avoidance approaches are used, and the method is applied to a non-holonomic mobile robot. In <cit.>, the authors use fuzzy logic controllers with various types of inputs such as sonar, camera and a stored map. An application of a fuzzy logic controller for indoor navigation is proposed in <cit.>, where wheeled mobile robots (WMRs) are used. Another application of the fuzzy logic controller for indoor navigation is presented in <cit.>; in that work, visual sensors are used to guide the robot to the target, but the FLC is not used for obstacle avoidance.
§ FUZZY LOGIC
The section is divided into two subsections - subsection A discusses the general approach to fuzzy control while subsection B explains the exact techniques for inference and processing used to implement the proposed controller.
§.§ Fuzzy Control Approach
Fuzzy logic theory is a suitable approach for controlling mobile robots. The basic structure of a fuzzy logic controller is composed of three steps. The first step is fuzzification, which transforms real-valued inputs and outputs into membership grades of fuzzy control terms. An example membership function setup is shown in Figure <ref>. The second step is inference, which combines the facts acquired from the fuzzification step and conducts a reasoning process; the fuzzy rules depend on the acquired information and are evaluated using the 'If-antecedents-then-conclusion' form. The last step is defuzzification, which transforms the fuzzy output subsets calculated by the inference step back into crisp values.
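As a purely illustrative sketch of the fuzzification step, the snippet below computes triangular membership grades for an obstacle distance; the breakpoints and term names are assumptions for illustration, not the membership functions used in this work.

def tri(x, a, b, c):
    # Triangular membership function: zero outside [a, c], peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_distance(d):
    # Map a crisp distance (metres) to membership grades of linguistic terms.
    return {
        "very_near": tri(d, 0.0, 0.4, 1.0),
        "near":      tri(d, 0.4, 1.2, 2.0),
        "far":       tri(d, 1.2, 3.0, 4.0),
    }

print(fuzzify_distance(0.9))  # 0.9 m is partly 'very_near' and partly 'near'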
We use a combination of two fuzzy logic controllers to complete our task. For navigation a Tracking Fuzzy Logic Controller (TFLC) would be used and an Obstacle Avoiding Fuzzy Logic Controller(OAFLC) would be used for avoiding unknown obstacles in the cluttered environment. The lack of information of the environment makes it a challenging problem to navigate. The TFLC and OAFLC are combined to navigate the robot to the target along a collision free path. The algorithm starts with TFLC and whenever there is an obstacle in the path, it switches to OAFLC. The output of this algorithm are velocities of left and right wheels.
TFLC helps to move the robot to the target smoothly by taking the distance and the angle between the robot and the target as its inputs. OAFLC is used to generate a control signal in order to avoid obstacles. The inputs to the OAFLC are the distances from the obstacles at certain angles from the robot. These distances are acquired from the depth sensor of Kinect Sensor on TurtleBot. The velocities of the left and right wheels are calculated using the defuzzification step.
§.§ Fuzzy Techniques
Here, we discuss the techniques used for the two important implementations of the Fuzzy Logic Controller - the Fuzzy Inference system and the defuzzification technique. The Takagi-Sugeno-Kang fuzzy inference technique and the Centroid defuzzification methods are used to implement our proposed controller. The TSK approach computes the output of the If-Else rules as a linear expression made up of weighted conditional components. Elaborately, the FIS setup processes all If-Else conditional statements with the weights generated on the basis of the membership functions and computes a new weight for execution of the condition. Further, the Centroid defuzzification process computes a normalized weight distribution for conditions and thereafter their weighted sum to generate final numerical output values. These techniques have similar implementation for the Tracking FLC as well as the Obstacle Avoidance FLC.
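The following minimal sketch illustrates a zero-order TSK-style rule evaluation consistent with the description above: each rule's firing strength (the AND of its antecedent grades) weights a crisp consequent, and the crisp output is the normalised weighted sum. The rule format and the use of min for AND are assumptions made for illustration.

def tsk_output(rules, grades):
    # rules: list of (antecedent_terms, consequent_value) pairs.
    # grades: dict mapping a linguistic term to its membership grade.
    num, den = 0.0, 0.0
    for terms, consequent in rules:
        w = min(grades[t] for t in terms)  # firing strength (AND via min)
        num += w * consequent
        den += w
    return num / den if den > 0 else 0.0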
§ METHODOLOGY
In order to implement Fuzzy Logic Controller on a mobile robot platform, the TurtleBot2 robot platform is being used. Gazebo simulator with ROS support is being used for simulation, testing and environment creation platform.
This methodology section has been further divided into subsections that explain the hardware setup, the software design, environment setup, fuzzification of sensor data, controller implementation methodology and final implementation nuances of the proposed system.
§.§ TurtleBot2 Hardware
Figure <ref> shows the CAD specifications of the TurtleBot2 mechanical model and design.
The TurtleBot2 is an extended work placed atop a standard differential drive mobile base from Kobuki. It has several sensors like the bump sensor and cliff sensors on the base. The IMU sensor on the base observes the angular heading and senses the variations over motion. Table 1 mentions the several hardware specifications of the Kobuki base being used.
The TurtleBot2 version used here has an ASUS Xtion Pro Live mounted for perception. We use the depth sensing and consequent conversion of the same into a 2D laser scan to learn about the presence of obstacles for navigation. It has 58.5^o and 48.0^o horizontal and vertical angular ranges of view respectively. Its linear range of view is 80cm to 4m in far mode and 40cm to 3m in near mode. Given the large area of observation, we are able to create several levels of fuzzy logic for control.
§.§ TurtleBot2 Software
Since the robot supports ROS (Robot Operating System) to communicate and execute instructions, the primary mode of information exchange is the Publisher/Subscriber mechanism, where the controller node subscribes to topics that carry information about the surroundings and publishes data for other nodes as required. Twist messages from the 'geometry_msgs' package are published to the '/cmd_vel_mux/input/navi' topic to control the movement of the robot's Kobuki base.
Table 2 shows the messages used to maneuver the robot around.
The controller node subscribes to several topics, namely, 'camera/depth/image_raw', '/scan' and
'/camera/depth/points' providing the continuous depth cloud data points, horizontal laser scan data(array with distances of obstacle in the range of view) and complete point cloud visualization information respectively. It also fetches knowledge about the robot's current position in the world and its previous motion from related topics like 'joint_states' and 'gazebo/link_states'.
Figure <ref> shows the information obtained from Depth Cloud and Laser Scan data obtained from the sensor.
The '/joint_states' topic provides total distance traveled by each of the wheels(based on the revolutions) and the velocities of each individual wheel. The fuzzy logic controller is essentially supposed to determine the Cartesian velocities for the robot and feed the corresponding angular velocities to the wheels. However, since the robot already supports taking commands in Cartesian coordinates, the controller does not need to make those conversions. Finally, the position and orientation is published to the '/gazebo/link_states' topic which has messages like Twist, Pose and reference frame information.
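A minimal rospy sketch of this interface is given below; it subscribes to the '/scan' laser data and publishes Twist commands on '/cmd_vel_mux/input/navi'. The velocity values are placeholders standing in for the output of the fuzzy logic controller.

import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

def scan_callback(msg):
    cmd = Twist()
    cmd.linear.x = 0.2    # placeholder: FLC-computed linear velocity
    cmd.angular.z = 0.0   # placeholder: FLC-computed angular velocity
    pub.publish(cmd)

rospy.init_node('flc_controller')
pub = rospy.Publisher('/cmd_vel_mux/input/navi', Twist, queue_size=10)
rospy.Subscriber('/scan', LaserScan, scan_callback)
rospy.spin()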
§.§ Environmental Setup
In order to test our controller performance we defined a customized environment in the Gazebo simulator as shown in Figure <ref>. The environment consists of objects of different shapes and sizes in order to increase the complexity of the data input from the Xion sensor. The objects are placed randomly. We can spawn our robot at any point in the environment and provide it with different goal positions to check the robustness of our controller.
§.§ Fuzzification of Kinect/Xion data
Figure <ref> shows the fuzzification process for data coming from the Kinect/Xtion sensor available on the TurtleBot2. The data are discretized based on angular subsections of the depth image scan taken every 3^∘; thus, the total depth image is discretized into 20 subsections. Further, the depth values are discretized into subsections of around 0.5m, so the depth data are divided into 5 sections ranging from 0.4m to 3m.
This discretized data is used to decide on the linear velocity values for left and right wheels which correspond to linear and angular velocity for the TurtleBot2 in our case.
§.§ Implementation of the Fuzzy Logic
As shown in Figure <ref>, the implementation of our controller involves simultaneously running Tracking FLC and Obstacle Avoidance FLC. Each of them contribute towards the final decision on the linear velocities and the current direction of heading.
While the TFLC tries to adjust the robot's heading in the direction of the target and set a linear speed that makes the robot move towards the goal, the OAFLC runs If-Else inference conditions on the obstacles encountered, depending upon their distance and angular position in robot's field of view.
The OAFLC adjusts the heading and the linear speed together to just be able to dodge the obstacle with minimal effect in the previous speed/heading. This iterative process terminates after the robot reaches the target position.
The final control signals could be obtained from the mathematical equation as:
(ẋ, ω̇_z) = x*(ẋ, ω̇_z)_TFLC + (1-x)*(ẋ, ω̇_z)_OAFLC
where, ẋ and ω̇_z represent the linear velocity in X direction and the angular velocity about the Z axis, for the robot. The equation provides the final commands to be sent to the robot which is a weighted sum of the same generated independently by the TFLC and the OAFLC.
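A short sketch of this behaviour fusion is given below: the final command is a weighted sum of the TFLC and OAFLC outputs, with the weight (written x in the equation above) supplied by the switching logic; how that weight is computed is deliberately left outside this sketch.

def fuse(cmd_tflc, cmd_oaflc, weight):
    # cmd_* are (linear_velocity, angular_velocity) pairs; 0 <= weight <= 1.
    v = weight * cmd_tflc[0] + (1.0 - weight) * cmd_oaflc[0]
    w = weight * cmd_tflc[1] + (1.0 - weight) * cmd_oaflc[1]
    return v, w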
§.§.§ Obstacle Avoidance FLC
The Obstacle avoidance Fuzzy Logic Controller works based on the sensor data for distance from obstacles. In case of TurtleBot 2, we use the Depth camera of Kinect to extract depth values at certain angles. The incoming depth values are also discretized as shown in Figure <ref> with several depths and angle discretizations forming an angular grid.
The discretization in depth can also be changed based on the application and complexity of problem. This however changes the number of If-Else rules that are created based on the conditions imposed on each depth reading.
The proposed implementation here divides the angular range into 3 sections of 20 degrees each (as the Kinect sensor has a range of 60^o), and the depth is also discretized into 3 sections named Very Near, Near and Far. The final inference rules change based on the combination of the three depth values and the sections in which each of them falls. The If-Else conditions thus obtained are shown in Figure <ref>, where columns A, B and C represent the distance between the robot and the obstacle in those angular sections.
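An illustrative sketch of this rule lookup is shown below: the scan is split into the three angular sectors A, B and C, each sector is labelled Very Near, Near or Far, and a (linear, angular) velocity consequent is looked up. The thresholds and the two rules listed are assumptions standing in for the full rule base of the figure.

import numpy as np

def label(d):
    return "VN" if d < 1.0 else ("N" if d < 2.0 else "F")

def oaflc(scan_ranges):
    sectors = np.array_split(np.asarray(scan_ranges), 3)   # sectors A, B, C
    labels = tuple(label(np.min(s)) for s in sectors)
    rules = {
        ("F", "F", "F"):  (0.3, 0.0),   # path ahead is free: keep going
        ("F", "VN", "F"): (0.1, 0.8),   # obstacle straight ahead: slow down and turn
    }
    return rules.get(labels, (0.1, 0.4))  # default consequent for unlisted combinations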
§.§.§ Tracking FLC
Given that the TFLC takes the distance between the robot and target and the angular deviation between the robot's current heading and the line joining the robot to the target, the TFLC does not need any extra sensing setup. Proprioception from odometry data gives the current angular heading of the robot and the distance between the robot and target.
Angular heading deviation was fuzzified into 5 sections named Negative Right (-90^o), Negative Thirty (-30^o), Aligned ( 0^o), Positive Thirty (30^o) and Positive Right (90^o) while the distance was fuzzified into simpler Zero, Near and Far sections. The resultant If-Else rules generated are shown in Figure <ref>
The final behavior fusion from the two FLCs is shown in Figure <ref> which shows final weighted sum of the same as the final commands sent to the robot for its motion.
§ EXPERIMENTAL RESULTS
The proposed FLC methodology delivered satisfactory results during implementation in the simulation environment. Figure <ref> shows variations in linear velocities while the robot tries to traverse from a far off start point to reach the target.
As evident in the results, the TFLC predicts the highest linear velocity when the robot is far from the target, and this velocity gradually decreases as it nears the target. The peaks in the OAFLC-predicted linear velocity suggest that it encountered obstacles at those time-steps and thus predicted a changed linear velocity at a changed angular orientation to dodge the obstacle.
The devised controller was tested for varying complexity of the simulation environment as well as different start and target positions. The TurtleBot2 was able to reach those locations within a very acceptable time period, and the controller performance was satisfactory in all the testing situations. Two of the results have been recorded and the corresponding videos can be found at <https://www.youtube.com/watch?v=GUEN4Orpb2A> and <https://www.youtube.com/watch?v=4fj4q-swg0U>
Since the proposed technique does not make the robot track any pre-defined trajectory or constraints, the results do not have any comparative graphical content but rather the above videos show complete implementations.
§ FUTURE WORK
We propose using techniques like Genetic algorithm and Particle-Swarm optimization to improve the performance of our system. Genetic algorithm is an evolutionary algorithm that uses biological operators like mutation, crossovers, elitism and culling. The algorithm tunes the fuzzy control rules and tries to make the system resemble an ideal control system. The tuning method fits the fuzzy rules' membership functions with the FIS (Fuzzy Inference System) and the defuzzification process. In the end, the method extracts best membership functions for the process. Particle-Swarm optimization technique is another similar iterative evolutionary algorithm that improves a candidate solution by making it "fly" through the problem space following the current best solution.
§ CONCLUSION
We were able to get satisfactory performance for robot navigation in unknown environments with no prior knowledge about obstacles. The proposed FLC implementation uses native localization from the simulation environment, which could be eliminated easily. We also faced the following problems in the project.
§.§ Problems Faced
As we are using the Kinect sensor on the TurtleBot2 we are facing the following range limitations:
* The Xtion gives an angular range of 58.5 degrees about a central axis. This limits the visibility range, and obstacles outside that angular range cannot be detected. Therefore, unlike the LIDAR on the TurtleBot3, the TurtleBot2 lacks a 360-degree view, and we fear the resulting limited sensing might affect the controller's performance.
* We get the depth image from the Xtion, which gives data in the range of 40cm to 3 meters in the near mode. Hence, any obstacle within this range can be detected. However, this creates a limitation for detecting obstacles close to the robot: distances of less than 40cm become a blind spot, which limits the controller's performance when obstacles appear suddenly.
|
http://arxiv.org/abs/2409.02110v1 | 20240903175952 | Estimating the coherence of noise in mid-scale quantum systems | [
"Pedro Figueroa-Romero",
"Miha Papič",
"Adrian Auer",
"Inés de Vega"
] | quant-ph | [
"quant-ph"
] |
[email protected]
IQM Quantum Computers, Georg-Brauchle-Ring 23-25, 80992 Munich, Germany
IQM Quantum Computers, Georg-Brauchle-Ring 23-25, 80992 Munich, Germany
Department of Physics and Arnold Sommerfeld Center for Theoretical Physics,
Ludwig-Maximilians-Universität München, Theresienstr. 37, 80333 Munich, Germany
IQM Quantum Computers, Georg-Brauchle-Ring 23-25, 80992 Munich, Germany
IQM Quantum Computers, Georg-Brauchle-Ring 23-25, 80992 Munich, Germany
Department of Physics and Arnold Sommerfeld Center for Theoretical Physics,
Ludwig-Maximilians-Universität München, Theresienstr. 37, 80333 Munich, Germany
§ ABSTRACT
While the power of quantum computers is commonly acknowledged to rise exponentially, it is often overlooked that the complexity of quantum noise mechanisms generally grows much faster. In particular, quantifying whether the instructions on a quantum processor are close to being unitary has important consequences concerning error rates, e.g., for the confidence in their estimation, the ability to mitigate them efficiently, or their relation to fault-tolerance thresholds in error correction. However, the complexity of estimating the coherence, or unitarity, of noise generally scales exponentially in system size. Here, we obtain an upper bound on the average unitarity of Pauli noise and develop a protocol allowing us to estimate the average unitarity of operations in a digital quantum device efficiently and feasibly for mid-size quantum systems. We demonstrate our results through both experimental execution on IQM Spark, a 5-qubit superconducting quantum computer, and in simulation with up to 10 qubits, discussing the prospects for extending our technique to arbitrary scales.
The main obstacle for quantum computing undeniably remains the presence of noise, causing a multitude of errors that limit the usefulness of currently available quantum processors. That is to say, not all quantum noise is made equal, and ironically, figures of merit that are efficient to access in an experiment –such as the fidelity of a quantum operation– are often by definition insensitive to many of the details of such noise. It is thus imperative to have a plethora of protocols with certain practical desiderata that can probe different aspects of noise, and that can then, in turn, provide actionable information to existing methods to suppress, mitigate, or fully correct potential errors.
Quantum noise can be described as an undesirable quantum transformation mapping the ideal noiseless output state of a quantum computation into some other valid, albeit unexpected, output quantum state. Focusing on the regime where such noise is effectively Markovian, i.e., it only depends on the input state and no other external context variables, a comprehensive classification of all possible errors can be made <cit.>, but in particular it is extremely important to be able to distinguish between coherent and incoherent error contributions.
Coherent noise can be described as a deterministic, undesired unitary transformation, e.g., it can arise due to imperfect control or calibration of quantum gates, and also due to certain types of crosstalk <cit.>. Incoherent or stochastic noise, on the other hand, is described by non-unitary transformations and arises purely statistically, e.g., a bit-flip occurring with a certain non-zero probability, or so-called depolarizing noise, whereby a state gets maximally mixed with a certain probability but remains intact with the complementary probability. Generally, Markovian quantum noise will be described by a quantum operation that contains both coherent and stochastic noise contributions. The reasons why distinguishing between these is important, include:
* Coherent errors can, for example, be effectively suppressed by good experimental quantum control, while incoherent errors in isolation can be efficiently mitigated <cit.> and fully corrected <cit.>.
* Coherent errors can accumulate in a much more detrimental way and provide error-rate estimates, such as average gate-fidelity, that can differ by orders of magnitude from worst-case estimates (i.e., fault-tolerance thresholds) <cit.>.
* The interpretation of average gate fidelity as a reliable and meaningful error rate depends heavily on the noise having a low coherent contribution <cit.>.
The average coherence of a noisy quantum gate can be understood as a measurement of the rate at which noise shrinks the n-dimensional Bloch ball, as depicted in Fig. <ref>, and can be measured by its unitarity, which in turn can be estimated operationally by urb <cit.>. Despite being efficient and robust to spam errors, urb is not scalable in system size, as it requires estimating an exponential amount of expectation values within a rb-like protocol. The reason for this is not urb per se, but rather that fundamentally, the unitarity is a second-order functional of the noise, akin to the case of the so-called purity for a quantum state.
In general, the complexity of characterizing quantum noise comprehensively typically scales exponentially with system size <cit.>. Even the most efficient of protocols estimating a single figure of merit, such as those within the family of rb <cit.>, fail to do so at scale. However, with the quantum industry inevitably moving towards larger systems, there is a pressing need to find ways to generalize such tools to larger systems.
Recently, scalable rb-based techniques such as mrb <cit.> and birb <cit.>, among others <cit.>, have been developed, which can estimate the average fidelity of sets of quantum operations in a scalable, efficient and spam-robust way, for a large class of gate sets. On the other hand, rm techniques, and so-called classical shadows, have enabled the efficient estimation of properties of many-body quantum systems <cit.> (including state fidelity and purity), linear properties of gate sets <cit.>, and characterization of processes with memory <cit.>.
We harness the results of <cit.>, together with a scalable rb-inspired protocol, to develop a standalone technique allowing to simultaneously estimate the average fidelity and average unitarity of layers of operations in mid-scale quantum systems. Moreover, we first identify an interval for the unitarity of so-called stochastic Pauli noise in terms of average fidelity, enabling certification of whether average noise has a low coherent contribution and establishing meaningful error budgets via the average unitarity.
In <ref> we introduce the main technical background, while in <ref> we present a new upper bound on the unitarity of Pauli noise. In <ref> we summarize existing results regarding rms, before presenting our protocol in <ref>, which in <ref> we demonstrate with experimental execution on <cit.>. In the remaining <ref> we discuss the bottlenecks for a scalable estimation of average coherence of noise, and finally, we draw some conclusions and prospects to overcome mid-scalability in <ref>.
§ AVERAGE COHERENCE OF NOISE AND ITS RELATION WITH PURITY AND FIDELITY
It is well known that no physical system can be perfectly isolated. In particular, the loss of coherence of quantum states is one of the fundamental practical problems facing any quantum technology. In general, the dissipation of information to an external environment results in an increasingly mixed classical probability distribution over a set of possible quantum states. Such mixedness or uncertainty can be quantified by the purity of the respective density matrix ρ, given by tr(ρ^2). The purity takes extremal values of 1 for a pure state, and 2^-n for a maximally mixed state on n-qubits, and it can also be written as the squared Schatten 2-norm ‖ρ‖_2^2, where ‖X‖_2=[tr(XX^†)]^1/2.
Logical operations or gates G in a quantum computer are described by unitary operators, which by definition preserve the purity of quantum states, i.e., for G(ρ_𝗂𝗇)=ρ_𝗈𝗎𝗍, we have tr(ρ_𝗂𝗇^2)=tr(ρ_𝗈𝗎𝗍^2). The corresponding, real noisy operation, G_𝗇𝗈𝗂𝗌𝗒, however, is described by a more general cp map, such that for G_𝗇𝗈𝗂𝗌𝗒(ρ_𝗂𝗇)=ρ_𝗈𝗎𝗍, we have tr(ρ_𝗈𝗎𝗍^2)≤tr(ρ_𝗂𝗇^2). The reduction in purity of states when acted on by a noisy channel is related to the unitarity of such a channel, a measure of “how far” G_𝗇𝗈𝗂𝗌𝗒 is from being unitary. Because locally we can always relate a noisy channel with an ideal gate by G_𝗇𝗈𝗂𝗌𝗒:=E∘G (<cit.>) for some cp map E, the unitarity of the noisy gate equivalently measures how far E is from being unitary.
The average unitarity of a quantum channel can be defined as in <cit.>, by
𝗎(E) := 2^n/(2^n-1) 𝔼_ψ∼𝖧𝖺𝖺𝗋 ‖E^'(ψ)‖_2^2,
where E^'(·) := E( ·-1/2^n), with 1 being an n-qubit identity operator, and the normalization 2^n/(2^n-1) ensuring that 0≤𝗎(E)≤1, and with 𝗎(E)=1 if and only if E —and hence the noisy operation G_𝗇𝗈𝗂𝗌𝗒— is unitary. Since the purity of a noisy output state may be written in the form tr(ρ_𝗈𝗎𝗍^2)=‖E(ρ_𝗂𝗇)‖_2^2, the definition in Eq. (<ref>) implies that the average purity of a noisy state is, in general, a combination of average unitarity and trace-decrease and/or non-unital terms [ See, e.g., Eq. (<ref>) for an explicit expression.].
The unitarity only quantifies how unitary G_𝗇𝗈𝗂𝗌𝗒 is, but not whether it is the target unitary, i.e., it does not measure whether G_𝗇𝗈𝗂𝗌𝗒 is far from the ideal G. A figure of merit aiming to quantify this distinction is the average gate fidelity,
𝖥(G_𝗇𝗈𝗂𝗌𝗒,G) := 𝔼_ψ∼𝖧𝖺𝖺𝗋 tr[G_𝗇𝗈𝗂𝗌𝗒(ψ)G(ψ)]
= 𝔼_ψ∼𝖧𝖺𝖺𝗋⟨ψ|E(ψ)|ψ⟩ := 𝖥(E),
where implicitly we denote 𝖥(E):=𝖥(E,I), the average gate fidelity of E with respect to the identity I (and in a slight abuse of notation, ψ=|ψ⟩⟨ψ|). The average gate fidelity satisfies 1/(2^n+1)≤𝖥(E)≤1, with 𝖥(E)=1 if and only if E=I (equivalently if and only if G_𝗇𝗈𝗂𝗌𝗒=G). With respect of , in <cit.> it is shown that the average unitarity satisfies 𝗎(E) ≥ f(E)^2, where
f(E) = (2^n𝖥(E)-1)/(2^n-1),
is the so-called average polarization of E, and with saturation occurring for a depolarizing (purely incoherent) channel.
The unitarity can also be written in terms of fidelity as 𝗎(E) = (2^n/(2^n-1))𝖥(E^'†E^'), where E^'† is the adjoint map of E^'[ The adjoint map Φ^† of a cp map Φ is defined by tr[AΦ(B)]=tr[Φ^†(A)B], or equivalently by conjugating the Kraus operators of Φ.]. Thus, the average unitarity can also be understood as a type of second-order function of the fidelity of E^', quantifying on average how distinguishable the composition E^'†E^' is from the identity. Under the assumptions that E approximately models the average noise of a gate set {G_i} in a temporally uncorrelated (Markovian), time-independent and gate-independent way, 𝖥(E) can be estimated through rb, and the unitarity 𝗎(E) can be estimated through urb <cit.>. Neither technique is scalable in the number of qubits; however, recently the mrb and birb variants were developed in <cit.>, allowing efficient and scalable estimation of at least 𝖥(E) for a large number of qubits.
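As a purely numerical illustration of the definition of the average unitarity above, the sketch below estimates it for a channel specified by a Kraus decomposition by Haar-sampling pure states; the single-qubit dephasing channel used as the example is an arbitrary choice, for which the exact value is (1+2(1-2p)^2)/3.

import numpy as np

def haar_state(d, rng):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def unitarity(kraus, n, samples=2000, seed=0):
    d, rng = 2**n, np.random.default_rng(seed)
    acc = 0.0
    for _ in range(samples):
        psi = haar_state(d, rng)
        out = apply_channel(kraus, psi - np.eye(d) / d)  # E'(psi)
        acc += np.linalg.norm(out, 'fro')**2             # squared Schatten 2-norm
    return (d / (d - 1)) * acc / samples

p = 0.1
kraus = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * np.diag([1.0, -1.0])]
print(unitarity(kraus, n=1))  # approaches (1 + 2*(1 - 2*p)**2)/3 = 0.76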
§ PAULI NOISE UNITARITY
Incoherent noise is tp and unital, and a distinction from coherent noise is that it generates no net rotation of the state space <cit.>. Pauli noise is a special case of incoherent noise described by a channel ρ↦∑_P∈𝖯𝖺𝗎𝗅𝗂_nα_PPρP, where α_P is the Pauli error probability associated to an n-qubit Pauli operator P. Its relevance stems not only from that of the Pauli basis in quantum information theory and experiment but also from its extensive application in quantum error characterization <cit.>, quantum error mitigation <cit.>, quantum error correction <cit.> and in its connection with fault tolerance thresholds <cit.>. Any quantum channel E can be averaged to a corresponding Pauli channel E^𝖯 by means of its Pauli twirl, E↦E^𝖯(·):=∑_P∈𝖯𝖺𝗎𝗅𝗂_n4^-nPE(P·P)P, i.e., reduced to a Pauli channel with the same Pauli error probabilities as E.
[Fidelity bound for Pauli unitarity] The unitarity of the Pauli-twirled channel E^𝖯 of E is bounded by
f(E)^2 ≤ 𝗎(E^𝖯) ≤ (4^n-2)/(2^n-1)^2 r(E)^2 + f(E)^2,
where f is the average polarization defined by Eq. (<ref>), and r:=1-𝖥 is the average infidelity.
The proof is detailed in Appendix <ref>. Given an average layer (in)fidelity, the bound in Ineq. (<ref>) allows us to estimate whether average noise is close to Pauli through the average unitarity. Furthermore, the Pauli-twirled channel E^𝖯 of a given noise channel E can be efficiently approximated operationally by rc <cit.>, so Ineq. (<ref>) enables certifying rc through the average unitarity. Generally, for small average infidelity, Ineq. (<ref>) will be tight, and the average unitarity will need to approach the square of the average polarization, f(E)^2, for the average noise to be strictly Pauli. This can be seen more clearly for a fixed number of qubits n, as in Fig. <ref>.
The upper bound in Ineq. (<ref>) overestimates the average unitarity of Pauli noise by a proportion of cross-products of non-identity Pauli error rates, (∑α_P)^2-∑α_P^2, and further assumes tp noise, so while a unitarity outside the Pauli bounds guarantees that average noise will contain coherent error contributions, a unitarity value within the Pauli interval (<ref>) solely points to the likelihood of the average noise being Pauli.
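The Pauli-noise interval above can be checked numerically; the single-qubit sketch below builds a random Pauli channel, computes its polarization, infidelity and unitarity from the Pauli eigenvalues, and verifies that the unitarity lies within f(E)^2 and (4^n-2)/(2^n-1)^2 r(E)^2 + f(E)^2 as reconstructed above. The error rates are arbitrary illustrative values.

import numpy as np

rng = np.random.default_rng(1)
n, d = 1, 2
alpha = rng.dirichlet(np.ones(4)) * 0.05        # small Pauli error rates (a_I, a_X, a_Y, a_Z)
alpha[0] = 1.0 - alpha[1:].sum()

# Pauli eigenvalues lambda_Q = sum_P alpha_P (-1)^{<P,Q>}, with <P,Q> = 1 iff P and Q anticommute.
anti = np.array([[0, 0, 0, 0],
                 [0, 0, 1, 1],
                 [0, 1, 0, 1],
                 [0, 1, 1, 0]])
lam = ((-1.0) ** anti * alpha[:, None]).sum(axis=0)

f = lam[1:].sum() / (d**2 - 1)                  # average polarization
u = (lam[1:] ** 2).sum() / (d**2 - 1)           # unitarity of the Pauli channel
r = (d - 1) * (1 - f) / d                       # average infidelity
upper = (4**n - 2) / (2**n - 1) ** 2 * r**2 + f**2
print(f**2 <= u <= upper)                       # expected: True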
§ PURITY AND FIDELITY VIA RANDOMIZED MEASUREMENTS
The main roadblock for a scalable urb stems from purity being a quadratic function of a quantum state, in general requiring knowledge of all its components, e.g., as described in <cit.> requiring state tomography within a rb protocol. The efficient estimation of purity, however, was one of the first problems giving rise to the rm and shadow tomography techniques <cit.>. While for the particular case of purity, the number of required measurements for a given accuracy still scales exponentially, the exponent for rms is significantly smaller than for full tomography <cit.>.
We will focus on the result of <cit.>,
tr(ρ^2 ) = 2^n∑_𝐬,𝐬^' (-2)^-h(𝐬,𝐬^') 𝔼_𝖴_i∼𝖧𝖺𝖺𝗋 𝖯_𝖴^(𝐬)𝖯_𝖴^(𝐬^'),
where here 𝖴=⊗_i=1^n𝖴_i is a random unitary, with 𝔼_𝖴_i∼𝖧𝖺𝖺𝗋 denoting averaging with the Haar measure over each 𝖴_i, 𝐬 and 𝐬^' are n-bit strings, h(𝐬,𝐬^') is the Hamming distance (number of distinct bits) between them, and the
𝖯_𝖴^(𝐬) := ⟨𝐬|𝖴^†ρ 𝖴|𝐬⟩,
are probabilities of observing the n-bit string 𝐬 upon randomizing with 𝖴, and which are estimated in experiment. Since Eq. (<ref>) involves two copies of 𝖴 and 𝖴^, it suffices that the local unitaries 𝖴_i belong to a unitary 2-design <cit.>, such as the uniformly-distributed Clifford group.
By a rm, here we will mean a random unitary operation 𝖴, followed by a projective measurement in the computational basis, 𝖴|𝐬⟩ [ More strictly, this corresponds to a rm element, and the concept of a rm can be formalized with Random povms, as in <cit.>]. In Eq. (<ref>), this can equivalently be read as randomizing the state ρ with 𝖵=𝖴^†, and then performing a computational basis measurement.
Purity can alternatively be estimated through the so-called shadow of the state, constructed via rms. That is, one can equivalently construct proxy states ρ̂_𝗌 through the probabilities 𝖯_𝖴(𝗌) and compute the purity through products tr(ρ̂_𝗌ρ̂_𝗌^') <cit.>. This is an equivalent approach with similar performance guarantees <cit.>; here we employ Eq. (<ref>) because it can be used within a rb-like protocol straightforwardly.
Operationally, the purity is truly important once in light of an associated fidelity <cit.>: notice that the probabilities 𝖯_𝖴^(𝐬) can be recycled in a straightforward way to estimate the fidelity of ρ with respect to an ideal pure state |ψ⟩, through a slight modification to Eq. (<ref>), as
⟨ψ|ρ|ψ⟩ = 2^n∑_𝐬,𝐬^' (-2)^-h(𝐬,𝐬^') 𝔼_𝖴_i∼𝖧𝖺𝖺𝗋 𝖯_𝖴^(𝐬)𝖰_𝖴^(𝐬^'),
where here now 𝖰_𝖴^(𝐬)=|⟨ψ|𝖴|𝐬⟩|^2 is the probability of observing the n-bit string 𝐬 upon randomizing with 𝖴=⊗_i𝖴_i and measuring |ψ⟩. This comes at the expense of estimating the ideal probabilities 𝖰_𝖴^(𝐬), but this can be done classically in an efficient way since the 𝖴_i can be replaced by single-qubit Clifford unitaries.
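A compact sketch of these two estimators is given below: for each random local unitary 𝖴 one needs the measured probabilities P_U(s) and, for the fidelity, the ideal probabilities Q_U(s) over all 2^n bitstrings. The diagonal s = s' terms would additionally require the unbiased treatment discussed later in the paper.

import numpy as np

def hamming_matrix(n):
    s = np.arange(2**n)
    xor = s[:, None] ^ s[None, :]
    return np.array([[bin(v).count('1') for v in row] for row in xor])

def rm_purity(P, n):
    # P: array of shape (N_U, 2**n); each row holds estimated probabilities P_U(s).
    M = (-0.5) ** hamming_matrix(n)          # (-2)^{-h(s,s')} = (-1/2)^{h(s,s')}
    return 2**n * np.mean([p @ M @ p for p in P])

def rm_fidelity(P, Q, n):
    # Q holds the corresponding ideal (noiseless) probabilities Q_U(s).
    M = (-0.5) ** hamming_matrix(n)
    return 2**n * np.mean([p @ M @ q for p, q in zip(P, Q)])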
§ AVERAGE UNITARITY THROUGH RANDOMIZED MEASUREMENT CORRELATIONS
While we do not require the same setup as in mrb or birb, we mainly follow the framework and notation of <cit.>. We will, in particular, only consider single-qubit and two-qubit gate sets, G_1 and G_2, distributed according to probability distributions Ω_1 and Ω_2, respectively. We will refer to instructions with these gate sets on n-qubits, simply as layers, and denote the corresponding set of layers by L(G), where G=G_1∪G_2. In practice, a rule on how to apply the sampled gates must also be specified, depending e.g., on the topology, connectivity, and the desired density of the gates.
We construct n-qubit random circuits, of layer circuit circuit depth m, of the form
𝖢_m=𝖫_m𝖫_m-1⋯𝖫_1,
where the right-hand side is read (or acts) from right to left, and where each 𝖫_i=𝖫_i^(2)𝖫_i^(1) is a composite layer, made up of subsequent applications of a layer 𝖫_i^(1)∈L(G_1), of only parallel single-qubit gates, and a layer 𝖫_i^(2)∈L(G_2), of only parallel two-qubit gates. We will say that all 𝖫_i∈L(G) are random layers distributed according to Ω, which is such that Ω(𝖫_i)=Ω_1(𝖫_i^(1))Ω_2(𝖫_i^(2)), and we refer to 𝖢_m in Eq. (<ref>) as a Ω-distributed random circuit of depth m.
We use the term depth to mean the number of layers entering the given Ω-distributed circuit, before transpilation to a given basis gate set. We illustrate these concepts in Fig. <ref>. We also point out that while the choice of the pair (G,Ω) is a priori arbitrary, here we will only consider G_1:=𝖢𝗅𝗂𝖿_1 being the Clifford group (or effectively any other efficient single-qubit unitary 2-design) and Ω_1 a uniform distribution.
Finally, below we distinguish estimators of expectation values of a random variable by a circumflex, i.e., 𝖷̂ represents the estimator of the expectation of some random variable 𝖷.
§.§ The protocol
With this setup, the protocol consists of:
* Preparation: Generate N_𝖢_m^* samples of n-qubit Ω-distributed circuits 𝖢_m for all given circuit depths m, each with a different layer 𝖵∈L(𝖢𝗅𝗂𝖿_1) sampled uniformly, prepended to it. We will denote 𝖢^*_m:=𝖢_m𝖵.
* Quantum Execution: Estimate probabilities
𝖯_𝖶,𝖢_m^*^(𝐬) := ⟨𝐬|𝖶 𝖢_m𝖵(|0⟩⟨0|)𝖵^†𝖢_m^†𝖶^†|𝐬⟩,
of observing the n-bit string 𝐬, for all circuit depths m, all N_𝖢_m^* circuit samples and a given number N_𝖶 of layer samples 𝖶∈L(G_1), sampled according to Ω_1. We denote the corresponding estimator quantities, given a finite number of measurements N_𝗆𝖾𝖺𝗌 by
𝖯̂_𝖶,𝖢_m^*^(𝐬)=(1/N_𝗆𝖾𝖺𝗌)∑_i=1^N_𝗆𝖾𝖺𝗌1_𝐬(𝐱_i),
where 𝐱_i is a random variable describing the ith projective measurement of the n-qubits in the computational basis, with 1_𝐬(𝐱)=1 if 𝐱=𝐬 or 0 otherwise.
* Classical post-processing: Estimate the average purity tr(ρ_m^2) for all depth m states ρ_m:=𝖢_m𝖵(|0⟩⟨0|)𝖵^†𝖢_m^†, via
𝔓̂_m = 2^n/(N_𝖶N_𝖢_m^*)∑_𝖶,𝖢_m^*∑_𝐬,𝐬^'(-2)^-h(𝐬,𝐬^')𝖯̂^(𝐬)_𝖶,𝖢_m^*𝖯̂^(𝐬^')_𝖶,𝖢_m^*,
where h(𝐬,𝐬^') is the number of distinct elements between the n-bit strings 𝐬 and 𝐬^'. Estimate the probabilities 𝖰̂^(𝐬)_𝖶,𝖢_m^* := |⟨𝐬|𝖶^(𝗂𝖽)|ψ^(𝗂𝖽)_m⟩|^2 for all noiseless states, ψ^(𝗂𝖽)_m:=𝖢_m^(𝗂𝖽)𝖵^(𝗂𝖽)|0⟩ of depth m —with the label (𝗂𝖽) here denoting noiseless quantities—, and the corresponding average fidelities ⟨ψ_m|ρ_m|ψ_m⟩, via
where h(𝐬,𝐬) is the number of distinct elements between the n-bit strings 𝐬 and 𝐬^'. Estimate the probabilities 𝖰̂^(𝐬)_𝖶,𝖢_m^* := |⟨𝐬|𝖶^(𝗂𝖽)|ψ^(𝗂𝖽)_m⟩|^2 for all noiseless states, ψ^(𝗂𝖽)_m:=𝖢_m^(𝗂𝖽)𝖵^(𝗂𝖽)|0⟩ of depth m —with the label (𝗂𝖽) here denoting noiseless quantities—, and the corresponding average fidelities ⟨ψ_m|ρ_m|ψ_m⟩, via
𝔉̂_m = 2^n/(N_𝖶N_𝖢_m^*)∑_𝖶,𝖢_m^*∑_𝐬,𝐬^'(-2)^-h(𝐬,𝐬^')𝖯̂^(𝐬)_𝖶,𝖢_m^*𝖰̂^(𝐬^')_𝖶,𝖢_m^*.
Both estimators can be rendered unbiased and improved by other considerations as in <ref>.
Whenever the Ω-distributed circuits are generated by layers containing non-Clifford gates in a way that becomes impractical to simulate (e.g., whereby there is a high density of such non-Cliffords or for a large number of qubits), the average layer fidelity can alternatively be estimated through mrb <cit.>; the trade-off is having to construct and measure corresponding mirror circuits. The estimations of Protocol <ref> and mrb by definition will agree upon employing the same ensemble of layers; in Appendix <ref> we numerically observe such agreement.
[Exponential decay]Throughout Appendices <ref>, <ref> and <ref> we show that, under the circumstances detailed below,
𝔼_Ω tr(ρ_m^2) ≈ A 𝗎(E)^m + 1/2^n,
where 0≤A≤1, and
𝗎(E) = (2^n𝖥(E^†E)-1)/(2^n-1) = f(E^†E)
is the average layer unitarity of the noise, where E:=L_i,𝗇𝗈𝗂𝗌𝗒∘L_i^† is approximately the noise channel corresponding to any layer 𝖫_i with associated map L_i(·):=𝖫_i(·)𝖫_i^†, and 𝖥(E^†E) is the average fidelity of E^†E with respect to the identity.
Similarly, the decay in average fidelity can be seen to correspond to
𝔼_Ω⟨ψ_m|ρ_m|ψ_m⟩ ≈ α f(E)^m + 1/2^n,
where 0≤α≤1, similar to a usual rb decay for unital spam.
In such case, the estimators in Eq. (<ref>) and Eq. (<ref>) of protocol <ref>, can be fit to the respective decays in depth m, whereby both average layer unitarity and fidelity can be estimated.
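A sketch of the corresponding fitting step is shown below: given the estimated average sequence purities and fidelities at each depth, the decays above are fit to extract the average layer unitarity and polarization. The data arrays here are synthetic placeholders rather than experimental estimates.

import numpy as np
from scipy.optimize import curve_fit

n = 3
depths = np.array([1, 2, 4, 8, 16, 32])
purities = 0.90 * 0.95**depths + 1 / 2**n      # placeholder estimates of tr(rho_m^2)
fidelities = 0.95 * 0.97**depths + 1 / 2**n    # placeholder estimates of <psi_m|rho_m|psi_m>

def decay(m, amplitude, base):
    return amplitude * base**m + 1 / 2**n

(_, u_est), _ = curve_fit(decay, depths, purities, p0=(1.0, 0.9))
(_, f_est), _ = curve_fit(decay, depths, fidelities, p0=(1.0, 0.9))
print(u_est, f_est)   # average layer unitarity and polarization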
§.§.§ Conditions for an exponential decay
While a general functional form in terms of any Markovian noise model for the average sequence purity and fidelity can be obtained, as in Appendices <ref>, <ref> and <ref>, ensuring that it will follow a simple exponential decay as in Eq. (<ref>) relies on the noise satisfying certain approximate conditions and the pair (G,Ω) having certain properties, similar to any rb-based technique. Aside from standard assumptions such as Markovianity (i.e., that noise is not temporally correlated), time-independence and weak gate-dependence [ Here meaning with time- and gate-dependent contributions being negligible on average; see e.g., <cit.>.], Result <ref> requires the following:
* That noise is approximately trace-preserving and unital, i.e., without significant leakage or entropy-decreasing contributions.
* That the Ω-distributed circuits approximate a unitary 2-design, to the effect that 𝔼_𝖫∼Ω L^†XL(·)≈p(·)+(1-p)1/2^n, where p≤1 depends only on X; i.e., the average composition of noisy layers is approximately equivalent to a depolarizing channel encoding some information about X. This is formalized and discussed in Appendix <ref>.
Assumption <ref> implies that the purity decay only depends on the unitarity of the average noise, and not on its non-unital or trace-decreasing contribution. Otherwise, if this condition is not satisfied, the average unitarity will not be described by Eq. (<ref>), but rather the most general Eq. (<ref>), and the purity decay will be a convex combination of average unitarity and trace-decreasing contributions of the average noise, as expressed by Eq. (<ref>) of Appendix <ref> (in agreement with <cit.>). The implications for the fidelity estimation in this case are similar, since the polarization factor would not be described by Eq. (<ref>) but rather a more general f_t(E)=(2^n𝖥(E)-t)/(2^n-1) for t=tr[E(1/2^n)] quantifying how non-tp the channel is <cit.>. In other words, the main implication is that the fitting procedure would generally be slightly more complicated for non-unital, non-tp noise.
Assumption <ref> implies that we can identify 𝗎(E)=p^2 as given by Eq. (<ref>), i.e., the average unitarity is simply equal to the polarization of the average composition of noisy layers. This assumption is slightly stronger than that in <cit.>, and generally means that up to the second moment and a small positive ϵ, the Ω-distributed circuits we employ should reproduce the statistics of the uniform Haar measure on the unitary group on the n qubits. While the choice of (G_1,Ω_1) being the uniform single-qubit Clifford group already generates separately a unitary 2-design on each qubit (projecting noise to a Pauli channel <cit.>), the choice (G_2,Ω_2) should be such that the random Ω-distributed circuits are highly scrambling, as defined in <cit.>, which can be achieved by it containing at least an entangling gate with high probability. While this does not ensure that the circuits will approximate a unitary 2-design in an increasing number of qubits, it suffices in practice for mid-scale systems. This is discussed in detail in Appendix <ref>.
§.§ Mid-scale
In <ref>, we will discuss the reasons why Protocol <ref> is practical and feasible for at least 10 qubits and generally within tens of qubits, which is what we refer to as mid-scale. This is in the sense that all classical aspects, i.e., steps <ref> and <ref>, can be managed with a standard current-technology laptop, with the quantum execution requiring only a mild number of circuits, rms, and measurement shots, as exemplified in <ref>.
§.§ Unbiased and Median of Means estimators
There are at least two simple but effective ways in which reliable results can be ensured through the estimators in Eq. (<ref>) and Eq. (<ref>): i) using unbiased estimators [ An estimator of some statistical parameter is said to be unbiased or faithful when its expectation matches the real expected value of the parameter, or otherwise it is said to be biased.] and ii) using moms estimators.
In <cit.> it is pointed out that, even though the probability estimator in Eq. (<ref>) is faithful, i.e., 𝔼[𝖯̂]=𝖯 (where we have dropped all indices), it is biased for any other positive integer power. A unique unbiased estimator can nevertheless be built for any power, and in particular, 𝖯̂_2:=𝖯̂(𝖯̂N_𝗆𝖾𝖺𝗌-1)/(N_𝗆𝖾𝖺𝗌-1) is an unbiased estimator of 𝖯^2. Thus, all terms in Eq. (<ref>) and Eq. (<ref>) for the n-bit string 𝐬=𝐬^' should be computed through 𝖯̂_2 to render the average sequence fidelity and purity estimations unbiased.
On the other hand, a simple but highly effective way to reduce the uncertainty associated with a mean estimator is to use moms estimators: given K samples of estimators 𝔉̂_m^(1),⋯,𝔉̂_m^(K) for the average fidelity of circuits of depth m in Eq. (<ref>), the moms estimator of such samples is 𝔉̂_m^𝖬𝗈𝖬𝗌(K):=𝗆𝖾𝖽𝗂𝖺𝗇(𝔉̂_m^(1),⋯,𝔉̂_m^(K)), or similarly for the case of the average purity in Eq. (<ref>). In total, this requires N_𝖶N_𝖢_m^*K circuit samples to be measured N_𝗆𝖾𝖺𝗌 times, which, however, is generally more robust than simply constructing an empirical estimator of the mean with the same number of samples <cit.>. While it has been noted that when measurements are randomized with the single-qubit Clifford group both types of estimators converge similarly to the true mean <cit.>, moms has the concentration property that the probability of observing outliers from the true mean decreases exponentially in K <cit.>.
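To make this post-processing concrete, the following minimal Python sketch (NumPy only; the helper names are ours and purely illustrative) implements the unbiased estimator 𝖯̂_2 of a squared probability from raw counts and a generic median-of-means estimator built from K batches of samples.

```python
import numpy as np

def unbiased_p_squared(counts_s: int, n_meas: int) -> float:
    """Unbiased estimator of P(s)^2 from counts_s occurrences of bit string s in n_meas shots:
    the naive (counts_s / n_meas)**2 is biased, and the correction below removes that bias."""
    p_hat = counts_s / n_meas
    return p_hat * (p_hat * n_meas - 1) / (n_meas - 1)

def median_of_means(samples, k: int) -> float:
    """Median-of-means: split the samples into k batches, average each batch, take the median."""
    batches = np.array_split(np.asarray(samples, dtype=float), k)
    return float(np.median([batch.mean() for batch in batches]))

# toy usage
rng = np.random.default_rng(0)
print(unbiased_p_squared(counts_s=410, n_meas=2048))          # estimates P(s)^2 for P(s) ~ 0.2
print(median_of_means(rng.normal(0.5, 0.1, size=120), k=4))   # robust estimate of the mean 0.5
```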
§ EXPERIMENT ON IQM SPARK (TM)
We now demonstrate the execution of Protocol <ref> through an experiment on IQM Spark <cit.>, a commercial 5-qubit superconducting quantum system by IQM targeting education in quantum computing; technical details of the hardware can be found in <cit.>. The five qubits in the device are connected in a star shape, with a central qubit connected to the four remaining qubits. The experiments were performed on specific dates, pointed out where the respective results are displayed, and only reflect the performance of the hardware at that point in time.
We generated Ω-distributed circuits using the gate set G=𝖢𝗅𝗂𝖿_1∪{𝖢𝖹}, with 𝖢𝗅𝗂𝖿_1 being the uniformly-distributed single-qubit Clifford group (i.e., Ω_1 assigning 1/24 probability for each Clifford and Ω_2 probability 1 of sampling 𝖢𝖹). The sampling of layers was done employing the edge-grab sampler, defined in <cit.>, with a two-qubit gate density (the expected proportion of qubits occupied by two-qubit gates) of 1/2. The edge-grab sampler considers layers with two-qubit gates only on connected qubits, and where a single logical layer consists of a parallel mixture of one- and two-qubit gates (sampled according to Ω), i.e., there is no more than one gate acting on a given qubit in any layer.
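As an illustration of how such layers can be drawn, the following Python sketch (names and structure are ours; it is a simplified stand-in, not the exact edge-grab algorithm of the cited reference) samples one parallel layer over a given coupling map: disjoint edges are tentatively grabbed for 𝖢𝖹 gates so as to roughly target a given two-qubit gate density, and every remaining qubit receives a uniformly random single-qubit Clifford, represented here only by an index 0 to 23.

```python
import random

def sample_layer(n_qubits, edges, xi=0.5, rng=random):
    """Sample one parallel layer: CZ gates on a set of disjoint, randomly chosen
    edges (roughly targeting a two-qubit gate density xi), and uniformly random
    single-qubit Cliffords (labelled 0..23) on every remaining qubit.
    Simplified stand-in for an edge-grab-style sampler, not the exact algorithm."""
    available = set(range(n_qubits))
    candidate_edges = list(edges)
    rng.shuffle(candidate_edges)
    two_qubit_gates, single_qubit_gates = [], {}
    for a, b in candidate_edges:
        # keep an edge only if both endpoints are still free and a biased coin accepts it
        if a in available and b in available and rng.random() < xi:
            two_qubit_gates.append((a, b))
            available -= {a, b}
    for q in available:
        single_qubit_gates[q] = rng.randrange(24)  # index into the 24 single-qubit Cliffords
    return two_qubit_gates, single_qubit_gates

# Example: a star-shaped 5-qubit coupling map (centre qubit index chosen arbitrarily here)
star_edges = [(0, 2), (1, 2), (3, 2), (4, 2)]
print(sample_layer(5, star_edges))
```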
The native set of gates is {r_θ,φ,𝖢𝖹}, where r_θ,φ is a single-qubit rotation of angle θ around the cos(φ)X + sin(φ)Y axis, and 𝖢𝖹 is a controlled-Z two-qubit gate. The 𝗆𝖾𝖺𝗌𝗎𝗋𝖾𝗆𝖾𝗇𝗍 operator is a computational basis projective measurement operation, and a 𝖻𝖺𝗋𝗋𝗂𝖾𝗋 object is used to prevent the layers defined through 𝔾 from being compiled together. The definition of depth that we adopt is that of the number of layers defined through 𝔾, i.e., before transpilation to the native gate set.
We extracted decay rates of the average state fidelity and purity in increasing layer depths according to Protocol <ref>, and thus layer fidelity and unitarity, for Ω-distributed circuits in arbitrary combinations of n=1,2,…,5 qubits. In both cases, the sample parameters we employ are N_𝖢_m^*=12 random Ω-distributed circuits, times N_𝖶=10 randomized measurement layers, times N_𝗆𝖾𝖺𝗌=2^11 shots per measurement; we furthermore employ unbiased estimators for squared probabilities and moms estimators with either K=1 for n=1,2,3 or K=2 for n=4,5.
In Fig. <ref> we show the purity decays in increasing circuit depth for each number of qubits, determined via the estimators in Eq. (<ref>), where, by fitting the decay model in Eq. (<ref>) to the averages, we estimate the corresponding average unitarity. It is worth noticing that we used no median of means for up to three qubits, and thus the individual smaller points in the plot do not represent purities as these have not been averaged over all circuit samples; this is a reason why the violin distributions can stretch beyond 1. Nevertheless, it is expected for the spread in individual outputs to be larger for larger values of purity. Importantly, while averages (larger points) do not all fall exactly on the respective exponential (particularly for n=1), deviations do not appear alarmingly high. Finally, spam contributions can be seen to lead to a shift in the offset of the curve for n=3.
It is important to notice that in Fig. <ref>, the unitarity does not appear to be monotonically decreasing in qubit count. In this case, however, the apparent increase could also be explained by the uncertainty of the data, and would then correspond to at most a plateau in unitarity; a way to decrease the uncertainties could be to increase the number of moms. This is nevertheless an interesting feature, and one that could be investigated further in the context of determining crosstalk or other types of correlations at the unitarity level.
Following this, in Fig. <ref> we show the respective fidelity decays, computed together with the measurement counts from the same experiment and the estimation of the corresponding noiseless probabilities, according to the estimator in Eq. (<ref>). Similarly here, the distributions tend to show a larger spread for smaller n, where only the averages represent the corresponding fidelity estimation. While the decays display different spam contributions, the averages do decrease in system size; the uncertainties could similarly be reduced, e.g., by increasing the number of moms. In Appendix <ref> we compare with mrb results gathered weeks before, showing qualitative agreement despite a clear difference in the respective distributions. Since error bars remain relatively large for fidelities, and hence for the estimated Pauli unitarity intervals, an alternative could be to construct mirror circuits (without the rm Clifford gates appended) within the same experiment and then instead estimate fidelities via mrb for smaller uncertainties.
This finally enables us to get an overall picture of both average layer fidelities and unitarities, together with an estimation of whether noise falls within the Pauli unitarity bound in Eq. (<ref>) or contains a larger coherent contribution. In Fig. <ref> we plot the average outputs retrieved in Fig. <ref> and Fig. <ref>, together with estimations of the Pauli unitarity bounds given the average fidelity estimates, computed via Eq. (<ref>). Fig. <ref> gives an overall picture of a coherent noise budget, which could in turn inform the overhead of Pauli twirling techniques. It is relevant to notice both that the average unitarity can increase even without there necessarily being correlations between qubits, and that the coherence of noise could generally remain relatively large with respect to fidelity for larger systems.
§ SCALING BOTTLENECKS
§.§ Sample Complexity
The main roadblock for estimating the average coherence of noise in large systems (or even just the purity of quantum states) with rms is the required number of measurements to obtain an estimate within a given error, i.e., its sample complexity, which is by definition manifested in the variance associated with such rms. In Protocol <ref>, statistical uncertainty stems from the sampling of the Ω-distributed circuits, that of the rm elements, and the number of measurements per rm element; these quantities correspond to N_𝖢_m^* and N_𝖶 in the estimators of Eq. (<ref>) and Eq. (<ref>), and to N_𝗆𝖾𝖺𝗌 of Eq. (<ref>), respectively.
The main limitation to scalability is imposed by N_𝗆𝖾𝖺𝗌: using local rms via Eq. (<ref>) to estimate purity to precision 1/√(N_𝖶), it was observed numerically in <cit.> that it scales approximately as N_𝗆𝖾𝖺𝗌∼2^0.75n in number of qubits n. Generally, within the classical shadows framework, the required number of measurements can be seen to scale exponentially in a similar fraction of n and linearly in the actual purity of the state <cit.>.
Mixedness reduces the sample complexity, as also observed numerically in <cit.>, and with Ω-distributed circuits, quantum states will naturally get increasingly mixed in increasing depth. Thus, coincidentally, a high percentage of coherent error contributes to a higher variance not only in average fidelity estimation but also in average unitarity estimation itself.
When it comes to the number of Ω-distributed circuit samples used in estimating average fidelity, the associated variance can be directly linked to the average amount of coherent noise <cit.>. How to control this uncertainty, as well as guarantees when choosing the number of sample circuits and number of measurements, has been widely studied for Clifford rb <cit.>; furthermore, <cit.> obtained a bound on the variance for the purity in circuits within (an improved version of) Clifford urb, assuming unital noise. While we are not aware of analogous bounds on the variance for scalable rb (namely employing Ω-distributed circuits), it has been observed numerically that similar behavior follows (at least when layers are made up only of either generators-of or Clifford gates themselves) <cit.>, and that Pauli twirling <cit.> generally suppresses the variance of expectation values under any sequence of operations, regardless of whether noise is correlated (either spatially or temporally) or not <cit.>.
A prospect could therefore be to benchmark average Pauli-dressed gates, aiming to obtain a unitarity within Ineq. (<ref>), with the trade-off being to sample an extra number of Pauli layers per circuit. Here we do not attempt to derive a rigorous bound on the variance for the average layer unitarity, but as we have demonstrated experimentally in <ref>, while for a given total number of samples the uncertainty is larger for the unitarity than for fidelity, estimating it nevertheless remains tractable for mid-scale systems.
§.§ Practical bottlenecks
There are two other practical considerations to take into account to estimate at which scale Protocol <ref> remains feasible: the spam factors in the exponential decays in Eqs. (<ref>),(<ref>), and the number of terms entering the sum of Eqs. (<ref>),(<ref>).
While our technique gives spam-independent estimates of the average unitarity, spam can nevertheless make the exponential decays drop too quickly for fitting to remain feasible. This is a shared problem of all rb-based techniques, and other than simply improving readouts and/or state preparation, there could be ways of removing the 2^-n offset, for example, but the multiplicative factor is still determined by the unitarity of the spam [ For example, if we assume that spam noise (i.e., that which we would attribute to the initial and final single-qubit-only layers) is global depolarizing (as done for mrb <cit.>) with polarization p, then the multiplicative factor in Eq. (<ref>) is A≲p^4.]. More than an rb-specific problem, this might be an issue in general when trying to estimate global, high-weight properties of large systems, essentially because signal readouts eventually become too small.
On the other hand, while Eq. (<ref>) and Eq. (<ref>) are remarkable mathematically, in practice they involve a sum over all possible pairs of n-bit strings. Naively, this means summing up 4^n probability terms; for the case of the purity, this can be reduced at least to 2^{n-1}(2^n-1), given that the Hamming distance is symmetric. This computation can be done easily on a classical machine in parallel, but it can also become rather impractical (as it furthermore would need to be done for every single purity estimate). While this is a practical problem, it is fundamentally tied to the fact that the purity of a state involves all the elements of the density matrix.
§ CONCLUSIONS
We have derived an upper bound on the average unitarity of Pauli-twirled noise solely in terms of the average fidelity of the respective bare noise, and we have established a protocol enabling the estimation of the average unitarity for a broad class of quantum circuits in digital systems, independent of technology platform or architecture, with tens of qubits. Our results have been inspired both by novel scalable rb techniques, as well as by rm methods. We have shown that reliable estimation of both average operation coherence and fidelity in such systems can be done under mild conditions and with a reasonable sampling overhead. Finally, we demonstrated our results in experiment on up to 5 qubits and numerically in simulation with up to 10 qubits.
A fully scalable estimation of the average coherence of noise will almost surely rely on modularity, e.g., by using smaller-scale unitarity information extracted from smaller qubit subsets and then combined with a Simultaneous rb-like protocol.
A promising way of moving forward in this direction is that of <cit.>, where the authors establish a method to estimate the entropy and entanglement of a quantum state in polynomially many measurements, relying on conditions essentially satisfied when spatial correlation lengths in the system remain finite and for systems in a one-dimensional topology. Their main result suggests that generally one could partition any large n-qubit system into smaller subsystems and reconstruct the global n-qubit purity from the local purity of pairs of contiguous subsystems, stitching together the bigger purity with a number of measurements that overall grows polynomially in n. As a direct consequence, if such results hold more generally, our protocol could be used to estimate global average noise coherence with a similar sample complexity.
Nevertheless, several things remain unclear, e.g., what classes of (Markovian) noise such a method would hold for, how it could be phrased beyond one-dimensional qubit arrays, or how one would certify the global purity estimations, among others. As we have argued, however, the average coherence of noise is a crucial figure of merit for benchmarking the error mechanisms limiting the performance of quantum computers, perhaps only standing after average layer fidelity in terms of importance as a benchmark, so the aforementioned hurdles must soon be overcome.
The authors acknowledge support from the German Federal Ministry of Education and Research (BMBF) under Q-Exa (grant No. 13N16062) and QSolid (grant No. 13N16161). The authors also acknowledge the entire IQM Technology team for their support in the development of this work.
PRXQuantum.3.020335
R. Blume-Kohout, M. P. da Silva, E. Nielsen, T. Proctor, K. Rudinger,
M. Sarovar, and K. Young.
“A taxonomy of small Markovian errors”.
https://dx.doi.org/10.1103/PRXQuantum.3.020335PRX Quantum
3, 020335 (2022).
PRXQuantum.2.040338
K. Rudinger, C. W. Hogle, R. K. Naik, A. Hashim, D. Lobser, D. I. Santiago,
M. D. Grace, E. Nielsen, T. Proctor, S. Seritan, S. M. Clark,
R. Blume-Kohout, I. Siddiqi, and K. C. Young.
“Experimental characterization of crosstalk errors with simultaneous
gate set tomography”.
https://dx.doi.org/10.1103/PRXQuantum.2.040338PRX Quantum
2, 040338 (2021).
Temme_PEC_2017
K. Temme, S. Bravyi, and J. M. Gambetta.
“Error mitigation for short-depth quantum circuits”.
https://dx.doi.org/10.1103/PhysRevLett.119.180509Phys. Rev.
Lett. 119, 180509 (2017).
McDonough_2022
B. McDonough, A. Mari, N. Shammah, N. T. Stemen, M. Wahl, W. J. Zeng, and P. P.
Orth.
“Automated quantum error mitigation based on probabilistic error
reduction”.
In 2022 IEEE/ACM Third International Workshop on Quantum Computing
Software (QCS).
IEEE (2022).
Gonzales_2023
A. Gonzales, R. Shaydulin, Z. H. Saleem, and M. Suchara.
“Quantum error mitigation by Pauli check sandwiching”.
https://dx.doi.org/10.1038/s41598-023-28109-xSci. Rep.13 (2023).
PhysRevResearch.5.033193
E. van den Berg, S. Bravyi, J. M. Gambetta, P. Jurcevic, D. Maslov, and
K. Temme.
“Single-shot error mitigation by coherent Pauli checks”.
https://dx.doi.org/10.1103/PhysRevResearch.5.033193Phys. Rev.
Res. 5, 033193 (2023).
vandenBerg_PEC_2023
E. van den Berg, Z. K. Minev, A. Kandala, and K. Temme.
“Probabilistic error cancellation with sparse Pauli–Lindblad
models on noisy quantum processors”.
https://dx.doi.org/10.1038/s41567-023-02042-2Nat.
Phys. (2023).
knill2005
E. Knill.
“Quantum computing with realistically noisy devices”.
https://dx.doi.org/10.1038/nature03350Nature 434,
39–44 (2005).
eastin2007error
B. Eastin.
“Error channels and the threshold for fault-tolerant quantum
computation” (2007).
http://arxiv.org/abs/0710.2560arXiv:0710.2560.
Wallman_2014
J. J. Wallman and S. T. Flammia.
“Randomized benchmarking with confidence”.
https://dx.doi.org/10.1088/1367-2630/16/10/103032New J. Phys.
16, 103032 (2014).
Sanders_2015
Y. R. Sanders, J. J. Wallman, and B. C. Sanders.
“Bounding quantum gate error rate based on reported average
fidelity”.
https://dx.doi.org/10.1088/1367-2630/18/1/012002New J. Phys.
18, 012002 (2015).
Wallman_2015
J. J. Wallman, C. Granade, R. Harper, and S. T. Flammia.
“Estimating the coherence of noise”.
https://dx.doi.org/10.1088/1367-2630/17/11/113020New J. Phys
17, 113020 (2015).
Kueng_2016
R. Kueng, D. M. Long, A. C. Doherty, and S. T. Flammia.
“Comparing experiments to the fault-tolerance threshold”.
https://dx.doi.org/10.1103/PhysRevLett.117.170502Phys. Rev.
Lett. 117, 170502 (2016).
Hashim_2023
A. Hashim, S. Seritan, T. Proctor, K. Rudinger, N. Goss, R. K. Naik, J. M.
Kreikebaum, D. I. Santiago, and I. Siddiqi.
“Benchmarking quantum logic operations relative to thresholds for
fault tolerance”.
https://dx.doi.org/10.1038/s41534-023-00764-ynpj Quantum
Inf.9 (2023).
PhysRevA.94.052325
J. J. Wallman and J. Emerson.
“Noise tailoring for scalable quantum computation via randomized
compiling”.
https://dx.doi.org/10.1103/PhysRevA.94.052325Phys. Rev. A
94, 052325 (2016).
unitarity_helsen_2019
B. Dirkse, J. Helsen, and S. Wehner.
“Efficient unitarity randomized benchmarking of few-qubit Clifford
gates”.
https://dx.doi.org/10.1103/PhysRevA.99.012315Phys. Rev. A
99, 012315 (2019).
Hashim_2021
A. Hashim, R. K. Naik, A. Morvan, J.-L. Ville, B. Mitchell, J. M. Kreikebaum,
M. Davis, E. Smith, C. Iancu, K. P. O'Brien, I. Hincks, J. J. Wallman,
J. Emerson, and I. Siddiqi.
“Randomized compiling for scalable quantum computing on a noisy
superconducting quantum processor”.
https://dx.doi.org/10.1103/PhysRevX.11.041039Phys. Rev. X
11, 041039 (2021).
certif_and_bench_2020
J. Eisert, D. Hangleiter, N. Walk, I. Roth, D. Markham, R. Parekh, U. Chabaud,
and E. Kashefi.
“Quantum certification and benchmarking”.
https://dx.doi.org/10.1038/s42254-020-0186-4Nat. Rev. Phys.
2, 382–390 (2020).
PRXQuantum.2.010201
M. Kliesch and I. Roth.
“Theory of quantum system certification”.
https://dx.doi.org/10.1103/PRXQuantum.2.010201PRX Quantum
2, 010201 (2021).
helsen_general
J. Helsen, I. Roth, E. Onorati, A.H. Werner, and J. Eisert.
“General framework for randomized benchmarking”.
https://dx.doi.org/10.1103/PRXQuantum.3.020357PRX Quantum
3, 020357 (2022).
mrb_prl2022
T. Proctor, S. Seritan, K. Rudinger, E. Nielsen, R. Blume-Kohout, and K. Young.
“Scalable randomized benchmarking of quantum computers using mirror
circuits”.
https://dx.doi.org/10.1103/PhysRevLett.129.150502Phys. Rev.
Lett. 129, 150502 (2022).
hines2022demonstrating
J. Hines, M. Lu, R. K. Naik, A. Hashim, J.-L. Ville, B. Mitchell, J. M.
Kriekebaum, D. I. Santiago, S. Seritan, E. Nielsen, R. Blume-Kohout,
K. Young, I. Siddiqi, B. Whaley, and T. Proctor.
“Demonstrating scalable randomized benchmarking of universal gate
sets”.
https://dx.doi.org/10.1103/PhysRevX.13.041030Phys. Rev. X
13, 041030 (2023).
hines2023fully
J. Hines, D. Hothem, R. Blume-Kohout, B. Whaley, and T. Proctor.
“Fully scalable randomized benchmarking without motion
reversal” (2023).
http://arxiv.org/abs/2309.05147arXiv:2309.05147.
mckay2023benchmarking
D. C. McKay, I. Hincks, E. J. Pritchett, M. Carroll, L. C. G. Govia, and S. T.
Merkel.
“Benchmarking quantum processor performance at scale” (2023).
http://arxiv.org/abs/2311.05933arXiv:2311.05933.
hines2023scalable
J. Hines and T. Proctor.
“Scalable full-stack benchmarks for quantum computers” (2023).
http://arxiv.org/abs/2312.14107arXiv:2312.14107.
shadows_2020
H.-Y. Huang, R. Kueng, and J. Preskill.
“Predicting many properties of a quantum system from very few
measurements”.
https://dx.doi.org/10.1038/s41567-020-0932-7Nat. Phys. 16, 1050–1057 (2020).
rmtoolbox_2022
A. Elben, S. T. Flammia, H.-Y. Huang, R. Kueng, J. Preskill, B. Vermersch, and
P. Zoller.
“The randomized measurement toolbox”.
https://dx.doi.org/10.1038/s42254-022-00535-2Nat. Rev. Phys.
5, 9–24 (2022).
helsen2021estimating
J. Helsen, M. Ioannou, J. Kitzinger, E. Onorati, A. H. Werner, J. Eisert, and
I. Roth.
“Shadow estimation of gate-set properties from random sequences”.
https://dx.doi.org/10.1038/s41467-023-39382-9Nat. Commun.
14, 5039 (2023).
PhysRevLett.130.160401
G. A. L. White, K. Modi, and C. D. Hill.
“Filtering crosstalk from bath non-Markovianity via spacetime
classical shadows”.
https://dx.doi.org/10.1103/PhysRevLett.130.160401Phys. Rev.
Lett. 130, 160401 (2023).
vanEnk_2012
S. J. van Enk and C. W. J. Beenakker.
“Measuring Tr ρ^n on single copies of ρ using random measurements”.
https://dx.doi.org/10.1103/PhysRevLett.108.110503Phys. Rev.
Lett. 108, 110503 (2012).
Renyi_2018
A. Elben, B. Vermersch, M. Dalmonte, J. I. Cirac, and P. Zoller.
“Rényi entropies from random quenches in atomic hubbard and spin
models”.
https://dx.doi.org/10.1103/PhysRevLett.120.050406Phys. Rev.
Lett. 120, 050406 (2018).
purity_science_2019
T. Brydges, A. Elben, P. Jurcevic, B. Vermersch, C. Maier, B. P. Lanyon,
P. Zoller, R. Blatt, and C. F. Roos.
“Probing Rényi entanglement entropy via randomized
measurements”.
https://dx.doi.org/10.1126/science.aau4963Science 364,
260–263 (2019).
purity_pra_2019
A. Elben, B. Vermersch, C. F. Roos, and P. Zoller.
“Statistical correlations between locally randomized measurements:
A toolbox for probing entanglement in many-body quantum states”.
https://dx.doi.org/10.1103/PhysRevA.99.052323Phys. Rev. A
99, 052323 (2019).
spark
“IQM quantum computers: IQM Spark (TM)”.
<https://www.meetiqm.com/products/iqm-spark>.
Accessed: 2024-04-17.
PhysRevLett.106.230501
S. T. Flammia and Y.-K. Liu.
“Direct fidelity estimation from few Pauli measurements”.
https://dx.doi.org/10.1103/PhysRevLett.106.230501Phys. Rev.
Lett. 106, 230501 (2011).
PhysRevLett.107.253602
A. Chiuri, V. Rosati, G. Vallone, S. Pádua, H. Imai, S. Giacomini,
C. Macchiavello, and P. Mataloni.
“Experimental realization of optimal noise estimation for a general
Pauli channel”.
https://dx.doi.org/10.1103/PhysRevLett.107.253602Phys. Rev.
Lett. 107, 253602 (2011).
Flammia_2020
S. T. Flammia and J. J. Wallman.
“Efficient estimation of Pauli channels”.
https://dx.doi.org/10.1145/3408039ACM Transactions on Quantum
Computing 1, 1–32 (2020).
Harper_2020
R. Harper, S. T. Flammia, and J. J. Wallman.
“Efficient learning of quantum noise”.
https://dx.doi.org/10.1038/s41567-020-0992-8Nat. Phys. 16, 1184–1188 (2020).
PRXQuantum.2.010322
R. Harper, W. Yu, and S. T. Flammia.
“Fast estimation of sparse quantum noise”.
https://dx.doi.org/10.1103/PRXQuantum.2.010322PRX Quantum
2, 010322 (2021).
Chen_2023
S. Chen, Y. Liu, M. Otten, A. Seif, B. Fefferman, and L. Jiang.
“The learnability of Pauli noise”.
https://dx.doi.org/10.1038/s41467-022-35759-4Nat. Commun.14 (2023).
berg2023techniques
E. van den Berg and P. Wocjan.
“Techniques for learning sparse Pauli-lindblad noise
models” (2023).
http://arxiv.org/abs/2311.15408arXiv:2311.15408.
PhysRevLett.121.190501
S. J. Beale, J. J. Wallman, M. Gutiérrez, K. R. Brown, and R. Laflamme.
“Quantum error correction decoheres noise”.
https://dx.doi.org/10.1103/PhysRevLett.121.190501Phys. Rev.
Lett. 121, 190501 (2018).
Wagner2022paulichannelscanbe
T. Wagner, H. Kampermann, D. Bruß, and M. Kliesch.
“Pauli channels can be estimated from syndrome measurements in
quantum error correction”.
https://dx.doi.org/10.22331/q-2022-09-19-809Quantum 6, 809 (2022).
PhysRevA.91.022335
M. Gutiérrez and K. R. Brown.
“Comparison of a quantum error-correction threshold for exact and
approximate errors”.
https://dx.doi.org/10.1103/PhysRevA.91.022335Phys. Rev. A
91, 022335 (2015).
PhysRevLett.120.050505
D. K. Tuckett, S. D. Bartlett, and S. T. Flammia.
“Ultrahigh error threshold for surface codes with biased noise”.
https://dx.doi.org/10.1103/PhysRevLett.120.050505Phys. Rev.
Lett. 120, 050505 (2018).
PhysRevA.80.012304
C. Dankert, R. Cleve, J. Emerson, and E. Livine.
“Exact and approximate unitary 2-designs and their application to
fidelity estimation”.
https://dx.doi.org/10.1103/PhysRevA.80.012304Phys. Rev. A
80, 012304 (2009).
Heinosaari_2020
T. Heinosaari, M. A. Jivulescu, and I. Nechita.
“Random positive operator valued measures”.
https://dx.doi.org/10.1063/1.5131028J. Math. Phys.61 (2020).
Di_Franco_2013
C. Di Franco and M. Paternostro.
“A no-go result on the purification of quantum states”.
https://dx.doi.org/10.1038/srep01387Sci. Rep.3 (2013).
characterizing_Magesan_2012
E. Magesan, J. M. Gambetta, and J. Emerson.
“Characterizing quantum gates via randomized benchmarking”.
https://dx.doi.org/10.1103/PhysRevA.85.042311Phys. Rev. A
85, 042311 (2012).
Wallman2018randomized
Joel J. Wallman.
“Randomized benchmarking with gate-dependent noise”.
https://dx.doi.org/10.22331/q-2018-01-29-47Quantum 2,
47 (2018).
directRB2023
A. M. Polloreno, A. Carignan-Dugas, J. Hines, R. Blume-Kohout, K. Young, and
T. Proctor.
“A theory of direct randomized benchmarking” (2023).
http://arxiv.org/abs/2302.13853arXiv:2302.13853.
simRB_2012
J. M. Gambetta, A. D. Córcoles, S. T. Merkel, B. R. Johnson, John A. Smolin,
J. M. Chow, C. A. Ryan, C. Rigetti, S. Poletto, T. A. Ohki, M. B. Ketchen,
and M. Steffen.
“Characterization of addressability by simultaneous randomized
benchmarking”.
https://dx.doi.org/10.1103/PhysRevLett.109.240504Phys. Rev.
Lett. 109, 240504 (2012).
Zoller_2018
B. Vermersch, A. Elben, M. Dalmonte, J. I. Cirac, and P. Zoller.
“Unitary n-designs via random quenches in atomic Hubbard and
spin models: Application to the measurement of Rényi entropies”.
https://dx.doi.org/10.1103/PhysRevA.97.023604Phys. Rev. A
97, 023604 (2018).
PhysRevLett.127.110504
A. Zhao, N. C. Rubin, and A. Miyake.
“Fermionic partial tomography via classical shadows”.
https://dx.doi.org/10.1103/PhysRevLett.127.110504Phys. Rev.
Lett. 127, 110504 (2021).
lerasle2019lecture
M. Lerasle.
“Lecture notes: Selected topics on robust statistical learning
theory” (2019).
http://arxiv.org/abs/1908.10761arXiv:1908.10761.
spark_paper
Jami Rönkkö, Olli Ahonen, Ville Bergholm, Alessio Calzona, Attila Geresdi,
Hermanni Heimonen, Johannes Heinsoo, Vladimir Milchakov, Stefan Pogorzalek,
Matthew Sarsby, Mykhailo Savytskyi, Stefan Seegerer, Fedor Šimkovic IV,
P. V. Sriluckshmy, Panu T. Vesanen, and Mikio Nakahara.
“On-premises superconducting quantum computer for education and
research” (2024).
http://arxiv.org/abs/2402.07315arXiv:2402.07315.
Proctor_2021
T. Proctor, K. Rudinger, K. Young, E. Nielsen, and R. Blume-Kohout.
“Measuring the capabilities of quantum computers”.
https://dx.doi.org/10.1038/s41567-021-01409-7Nat. Phys. 18, 75–79 (2021).
PhysRevLett.125.200501
A. Elben, R. Kueng, H.-Y. Huang, R. van Bijnen, C. Kokail, M. Dalmonte,
P. Calabrese, B. Kraus, J. Preskill, P. Zoller, and B. Vermersch.
“Mixed-state entanglement from local randomized measurements”.
https://dx.doi.org/10.1103/PhysRevLett.125.200501Phys. Rev.
Lett. 125, 200501 (2020).
vermersch2023manybody
B. Vermersch, M. Ljubotina, J. I. Cirac, P. Zoller, M. Serbyn, and L. Piroli.
“Many-body entropies and entanglement from polynomially-many local
measurements”.
https://dx.doi.org/10.1103/PhysRevX.14.031035Phys. Rev. X
14, 031035 (2024).
PhysRevA.100.032304
J. Helsen, J. J. Wallman, S. T. Flammia, and S. Wehner.
“Multiqubit randomized benchmarking using few samples”.
https://dx.doi.org/10.1103/PhysRevA.100.032304Phys. Rev. A
100, 032304 (2019).
PhysRevA.103.042604
M. Ware, G. Ribeill, D. Ristè, C. A. Ryan, B. Johnson, and M. P. da Silva.
“Experimental Pauli-frame randomization on a superconducting
qubit”.
https://dx.doi.org/10.1103/PhysRevA.103.042604Phys. Rev. A
103, 042604 (2021).
figueroaromero2023operational
P. Figueroa-Romero, M. Papič, A. Auer, M.-H. Hsieh, K. Modi, and
I. de Vega.
“Operational Markovianization in Randomized
Benchmarking”
https://doi.org/10.1088/2058-9565/ad3f44Quantum Sci. Technol. 9, 035020 (2024).
Brandao2016
F. G. S. L. Brandão, A. W. Harrow, and M. Horodecki.
“Local random quantum circuits are approximate polynomial-designs”.
https://dx.doi.org/10.1007/s00220-016-2706-8Commun. Math.
Phys. 346, 397–434 (2016).
Haferkamp2022randomquantum
J. Haferkamp.
“Random quantum circuits are approximate unitary t-designs in
depth O(nt^5+o(1))”.
https://dx.doi.org/10.22331/q-2022-09-08-795Quantum 6, 795 (2022).
Emerson_2005
J. Emerson, R. Alicki, and K. Życzkowski.
“Scalable noise estimation with random unitary operators”.
https://dx.doi.org/10.1088/1464-4266/7/10/021Journal of
Optics B: Quantum and Semiclassical Optics 7, S347 (2005).
Nielsen_fidelity2002
M. A Nielsen.
“A simple formula for the average gate fidelity of a quantum
dynamical operation”.
https://dx.doi.org/10.1016/s0375-9601(02)01272-0Physics
Letters A 303, 249–252 (2002).
proctor2022establishing
T. Proctor, S. Seritan, E. Nielsen, K. Rudinger, K. Young, R. Blume-Kohout, and
M. Sarovar.
“Establishing trust in quantum computations” (2022).
http://arxiv.org/abs/2204.07568arXiv:2204.07568.
FigueroaRomero2022towardsgeneral
P. Figueroa-Romero, K. Modi, and M.-H. Hsieh.
“Towards a general framework of Randomized Benchmarking
incorporating non-Markovian Noise”.
https://dx.doi.org/10.22331/q-2022-12-01-868Quantum 6, 868 (2022).
aces
S. T. Flammia.
“Averaged circuit eigenvalue sampling” (2021).
http://arxiv.org/abs/2108.05803arXiv:2108.05803.
§ PRELIMINARIES
Here we detail and expand on some of the notation and technical details that go into showing the results claimed in the main text. Much of the structure for the circuits entering the protocol is inspired by <cit.>, so for this reason we stick to most of the original terminology and notation.
§.§ Omega-distributed circuits
We will consider a quantum system made up of n≥2 qubits and a pair of single-qubit and two-qubit gate sets, G_1 and G_2, respectively, with corresponding probability distributions Ω_1 and Ω_2. We will then generate quantum circuits by sampling from 𝗀_1∼Ω_1 and 𝗀_2∼Ω_2, to generate parallel instructions on the n qubits as specified by a set L(G)={L_i=L_i^(1)L_i^(2)}, where each L_i^(1)∈L(G_1) and L_i^(2)∈L(G_2) are instructions made up of only the single and two-qubit gates 𝗀_1 and 𝗀_2, respectively. To stress when some quantities are random variables, we use sans font when relevant, as in 𝖫_i being a random layer sampled from Ω, which is such that Ω(𝖫_i)=Ω_1(𝖫_i^(1))Ω_2(𝖫_i^(2)).
Circuits generated this way, here for the particular case G=(G_1,G_2) and Ω=(Ω_1,Ω_2), are said to be Ω-distributed circuits, e.g., the circuit as stated in the main text as
𝖢_m = 𝖫_m𝖫_{m-1}⋯𝖫_1,
where each 𝖫_i∈L(G) is sampled according to Ω, and the right-hand side is read from right to left (i.e., 𝖢_m is an operator acting on quantum states to its left), is an Ω-distributed circuit of depth m.
We point out that, in general, Ω-distributed circuits can be constructed with an arbitrary number of gate sets and their corresponding distributions. Moreover, while so far G_1,G_2 and Ω_1,Ω_2 are completely up to the user to specify, in practice the main requirements we will have on this choice are that, i) the resulting Ω-distributed circuits constitute an ϵ-approximate unitary 2-design, and ii) that in particular (G_1,Ω_1) is an exact unitary 2-design, e.g., the single-qubit Clifford group. As such conditions i) and ii) are considered a posteriori for a particular case of interest, we will have a general derivation and then these will be stated formally in point <ref> of <ref> and in <ref>, respectively.
§.§ Ideal vs real implementations
While we do not stress this difference in the main text, here we will denote by U_L an ideal (noiseless) unitary operator associated to a representation of the layer L, and by U_L(ρ) := U_L ρ U_L^† the action of its corresponding map or superoperator U_L.
We write a composition of two maps A and B simply as BA to mean “apply map B after applying map A”, equivalent to the usual notations B∘A, or B(A(x)).
Noisy implementations of layers will generally be denoted by a map ϕ, e.g., the noisy application of a layer L is denoted by the cp map (which we also refer to as a quantum channel, or simply, a channel) ϕ(L), which can be defined (see e.g., IV of <cit.>) such that ϕ(L):=E_L U_L, with E_L being generally a cp map associated to L.
§.§ The output states of Omega-distributed circuits
We will consider a set of Ω-distributed circuits of depth m, acting on a fiducial initial state |0⟩, which in turn we randomize with a layer 𝖵∈L(G_1). That is, we will have circuits of the form 𝖢_m𝖵|0⟩, which will always give a pure state output in the noiseless case. In the noisy case, however, we will have states of the form
ϱ̃_m := 𝔼_{𝖫_i∼Ω, 𝖵∼Ω_1} ϕ(𝖫_m)ϕ(𝖫_{m-1})⋯ϕ(𝖫_1)ϕ(𝖵)(|0⟩⟨0|),
averaged independently over each of the Ω-distributed layers 𝖫_i, and uniformly over the initial single-qubit gate layer 𝖵∈L(G_1) with Ω_1. This noisy output can of course now be mixed, 2^-n ≤ tr(ϱ̃_m^2) ≤ 1.
Our protocol focuses on efficiently estimating the purities of the states ϱ̃_m, to in turn estimate the average unitarity of noise in the circuits 𝖢_m. In the following, we first study the behavior of the average purity of ϱ̃_m in increasing circuit depth m, and then we see that we can efficiently estimate this quantity employing rms, allowing us to readily extract the average unitarity of noise in certain circumstances which we analyze.
§ PURITY OF AVERAGE OUTPUTS OF OMEGA-DISTRIBUTED CIRCUITS
§.§ Purity of the initial randomization
Let us consider first the case of no gate layers but just the noisy initial single-qubit Clifford layer, m=0, then we have
tr(ϱ̃_0^2) = 𝔼_{𝖵∼Ω_1} tr[ϕ(𝖵)(|0⟩⟨0|) ϕ(𝖵)(|0⟩⟨0|)^†]
:= 𝔼_{ψ∼Ω_1} tr[E_𝗌𝗉𝖺𝗆(|ψ⟩⟨ψ|) E_𝗌𝗉𝖺𝗆(|ψ⟩⟨ψ|)^†],
where we defined the noisy implementation of 𝖵 as ϕ(𝖵):=E_𝗌𝗉𝖺𝗆 U_𝖵 for some noise channel E_𝗌𝗉𝖺𝗆 explicitly standing for spam noise, and where we defined the initial randomized pure state as U_𝖵(|0⟩⟨0|):=|ψ⟩⟨ψ|.
Noticing that the noisy outputs E_𝗌𝗉𝖺𝗆(|ψ⟩⟨ψ|) are quantum states and thus Hermitian, and that we can define the adjoint map X^† of a quantum channel X by the property tr[A X(B)] := tr[X^†(A) B], we can write
tr(ϱ̃_0^2) = 𝔼_{ψ∼Ω_1} ⟨ψ|E_𝗌𝗉𝖺𝗆^†E_𝗌𝗉𝖺𝗆(|ψ⟩⟨ψ|)|ψ⟩.
Equivalently, the adjoint map of a quantum channel can be defined in terms of its Kraus operators as X^†(·) := ∑_μ K_μ^†(·)K_μ, where {K_μ} are the Kraus operators of X, so Eq. (<ref>) in a sense is already measuring how different E_𝗌𝗉𝖺𝗆 is from a unitary.
In general, the purity of a quantum state ρ := X(|φ⟩⟨φ|) can be written as tr(ρ^2) = ⟨φ|X^†X(|φ⟩⟨φ|)|φ⟩, which corresponds to the gate fidelity of X^†X with respect to the identity on the state |φ⟩. Thus, we can write Eq. (<ref>) as a gate fidelity of E_𝗌𝗉𝖺𝗆^†E_𝗌𝗉𝖺𝗆 with respect to the identity, averaged over all possible initial states |ψ⟩,
tr(ϱ̃_0^2) = 𝔼_{ψ∼Ω_1} 𝔉_ψ(E_𝗌𝗉𝖺𝗆^†E_𝗌𝗉𝖺𝗆),
where here
𝔉_ψ(X) := ⟨ψ|X(|ψ⟩⟨ψ|)|ψ⟩
is the gate fidelity of the map X with respect to the identity map on the state |ψ⟩.
§.§ Relation to average unitarity and average trace-decrease
Ultimately, however, the average loss of purity in the average state ϱ̃_0 must be related to some amount of non-unitarity of the noise, i.e., it is noise that decreases purity. This can be quantified by the average unitarity, 𝔼_{ψ∼Ω_1} 𝗎_ψ, defined for a cp map E as
𝔼_{ψ∼Ω_1} 𝗎_ψ(E) := (2^n/(2^n-1)) 𝔼_{ψ∼Ω_1} 𝔉_ψ(E'^† E'), where E'(·) := E(· - 𝟙/2^n),
which is so defined to account for the case of E being trace-decreasing. When E is tp we have the relation E'^†E'(·) = E^†E(· - 𝟙/2^n); otherwise, however, for any quantum state ρ,
E'^†E'(ρ) = E^†E(ρ - 𝟙/2^n) - tr[E(ρ - 𝟙/2^n)] E^†(𝟙/2^n),
thus
((2^n-1)/2^n) 𝔼_{ψ∼Ω_1} 𝗎_ψ(E) = 𝔼_{ψ∼Ω_1} 𝔉_ψ(E^†E)
- 𝔼_{ψ∼Ω_1} ⟨ψ|E^†E(𝟙/2^n)|ψ⟩ - 𝔼_{ψ∼Ω_1} tr[E'(|ψ⟩⟨ψ|)] ⟨ψ|E^†(𝟙/2^n)|ψ⟩
= 𝔼_{ψ∼Ω_1} 𝔉_ψ(E^†E) - 2^-n 𝔼_{ψ∼Ω_1}{𝖲_ψ(E^†E) + 𝖲_ψ(E)[𝖲_ψ(E) - tr[E(𝟙/2^n)]]},
where
𝖲_ψ(E) := tr[E(|ψ⟩⟨ψ|)]
is a trace-preservation measure of the map E for a given sample |ψ⟩.
We can then write the average purity of ϱ̃_0 as
tr(ϱ̃_0^2) = ((2^n-1)/2^n) 𝔼_{ψ∼Ω_1} 𝗎_ψ(E_𝗌𝗉𝖺𝗆)
+ 2^-n 𝔼_{ψ∼Ω_1}{𝖲_ψ(E_𝗌𝗉𝖺𝗆^†E_𝗌𝗉𝖺𝗆) + 𝖲_ψ(E_𝗌𝗉𝖺𝗆)[𝖲_ψ(E_𝗌𝗉𝖺𝗆) - tr[E_𝗌𝗉𝖺𝗆(𝟙/2^n)]]},
which we recall is defined as averaged over the states |ψ⟩⟨ψ|=U_𝖵(|0⟩⟨0|), which are distributed uniformly and independently on the n-qubits according to a distribution Ω_1 on single-qubit gates G_1.
When the noise channel E is tp, the rightmost summand in Eq. (<ref>) becomes a term equal to -2^-n, and in that case the average unitarity is just a re-scaling of the average gate fidelity of the form
𝔼_{ψ∼Ω_1} 𝗎_ψ(E) = (2^n 𝔼_{ψ∼Ω_1} 𝔉_ψ(E^†E) - 1)/(2^n - 1)     (for E tp),
and thus also the average purity in Eq. (<ref>) in this case would become
tr(ϱ̃_0^2) = ((2^n-1)/2^n) 𝔼_{ψ∼Ω_1} 𝗎_ψ(E_𝗌𝗉𝖺𝗆) + 1/2^n     (for E_𝗌𝗉𝖺𝗆 tp).
§.§ The case of global depolarizing noise
A particular case of a quantum channel for modeling noise that is easy both mathematically and in interpretation is that of the global depolarizing channel,
E_p(·) := p(·) + (1-p) tr(·) 𝟙/2^n,
where 0≤p≤1 is the probability for the input state to remain the same, and which can be directly defined in terms of the channel by the so-called polarization γ,
γ(E) := (4^n 𝖥_𝖾(E) - 1)/(4^n - 1),
where here
𝖥_𝖾(E) := ⟨Ψ|(I⊗E)[|Ψ⟩⟨Ψ|]|Ψ⟩, with |Ψ⟩ := ∑_{i=1}^{2^n}|ii⟩/√(2^n),
is the so-called entanglement fidelity (equivalent to the purity of the so-called Choi state of E). Because maximally entangled states can be written in the Pauli basis as |Ψ⟩⟨Ψ| = 4^-n∑_{P∈𝖯𝖺𝗎𝗅𝗂_n} P⊗P^T, this is equivalent to
𝖥_𝖾(E) = (1/4^n) ∑_{P∈𝖯𝖺𝗎𝗅𝗂_n} (1/2^n) tr[P E(P)] = (1/4^n) tr(E),
where we identified the matrix E, with elements E_ij = (1/2^n) tr[P_i E(P_j)], as the Pauli transfer matrix representation of E.
For a depolarizing channel, in particular,
γ(E_p)=p.
Also, a composition of depolarizing channels with the same polarization is just another depolarizing channel as
E_p ∘ ⋯ ∘ E_p (k times) = E_{p^k}.
Given the above property, for a depolarizing channel, we also have
𝗎_ψ(E_p) = (2^n 𝔉_ψ(E_p^2) - 1)/(2^n - 1)
= γ(E_p)^2
=: 𝗎(E_p),
for any state |ψ⟩. To stress this independence of the state ψ, we drop the subindex in the last line, which simply means 𝗎(E_p) := 𝗎_φ(E_p) for an arbitrary pure state |φ⟩.
Notice that, E_p U_𝖢=U_𝖢 E_p, or more generally,
E_p X=XE_p,
whenever X is a unital and tp quantum channel. Furthermore, for any unital and tp channel X, the unitarity 𝗎_ψ(E_pX) only depends on 𝔉_ψ(X^†E_p^2X), which satisfies
𝔉_ψ(X^†E_p^2X) = ⟨ψ|X^†E_p^2X(|ψ⟩⟨ψ|)|ψ⟩
= p^2 𝔉_ψ(X^†X) + (1-p^2) 2^-n
= 𝗎(E_p)(𝔉_ψ(X^†X) - 2^-n) + 2^-n,
and so
𝗎_ψ(E_pX) = 2^n 𝗎(E_p)(𝔉_ψ(X^†X) - 2^-n)/(2^n - 1) = 𝗎(E_p) 𝗎_ψ(X),
i.e., the unitarity factorizes in this way for depolarizing channels.
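These identities are easy to check numerically. The short sketch below (NumPy only; helper names are ours and purely illustrative) represents E_p by its Pauli transfer matrix diag(1, p, …, p), and verifies that γ(E_p)=p, that composing two copies yields E_{p^2}, and that the unitarity of a unital, tp channel, computed from the non-identity block of the PTM as in the expression above, equals p^2.

```python
import numpy as np

def depolarizing_ptm(p: float, n_qubits: int) -> np.ndarray:
    """Pauli transfer matrix of the global depolarizing channel E_p on n qubits:
    the identity Pauli is preserved, every non-identity Pauli is shrunk by p."""
    d2 = 4 ** n_qubits
    return np.diag([1.0] + [p] * (d2 - 1))

def polarization(ptm: np.ndarray) -> float:
    """gamma(E) = (4^n * F_e - 1) / (4^n - 1), with F_e = tr(PTM) / 4^n."""
    d2 = ptm.shape[0]
    f_e = np.trace(ptm) / d2
    return (d2 * f_e - 1) / (d2 - 1)

def unitarity_unital_tp(ptm: np.ndarray) -> float:
    """u(E) for a unital, trace-preserving channel: mean squared entry norm
    of the non-identity block of the PTM, i.e. (tr[E^T E] - 1)/(4^n - 1)."""
    block = ptm[1:, 1:]
    return np.trace(block.T @ block) / (ptm.shape[0] - 1)

n, p = 2, 0.9
E = depolarizing_ptm(p, n)
assert np.isclose(polarization(E), p)                  # gamma(E_p) = p
assert np.allclose(E @ E, depolarizing_ptm(p**2, n))   # E_p o E_p = E_{p^2}
assert np.isclose(unitarity_unital_tp(E), p**2)        # u(E_p) = gamma(E_p)^2
```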
§.§ Average purity in the many-layer case
We now first consider adding a single layer of gates according to the distribution Ω, and evaluate the purity of ϱ̃_1,
tr(ϱ̃_1^2) = 𝔼_{𝖫_1∼Ω, 𝖵∼Ω_1} tr[ϕ(𝖫_1)ϕ(𝖵)(|0⟩⟨0|) ϕ(𝖫_1)ϕ(𝖵)(|0⟩⟨0|)^†]
= 𝔼_{𝖫_1∼Ω, ψ∼Ω_1} tr[E_1U_1E_𝗌𝗉𝖺𝗆(|ψ⟩⟨ψ|) E_1U_1E_𝗌𝗉𝖺𝗆(|ψ⟩⟨ψ|)]
= 𝔼_{𝖫_1∼Ω, ψ∼Ω_1} 𝔉_ψ(E_𝗌𝗉𝖺𝗆^†U_1^†E_1^†E_1U_1E_𝗌𝗉𝖺𝗆).
This can then be generalized directly to the case of m layers as
tr(ϱ̃_m^2) = 𝔼_{𝖫_i∼Ω, ψ∼Ω_1} 𝔉_ψ(E_𝗌𝗉𝖺𝗆^†U_1^†E_1^†⋯U_m^†E_m^†E_mU_m⋯E_1U_1E_𝗌𝗉𝖺𝗆),
and thus by means of Eq. (<ref>), this is equivalent to
tr(ϱ̃_m^2) = ((2^n-1)/2^n) 𝔼_{𝖫_i∼Ω, ψ∼Ω_1} 𝗎_ψ(E_mU_m⋯E_1U_1E_𝗌𝗉𝖺𝗆) + 𝗍𝗋𝖺𝖼𝖾-𝖽𝖾𝖼𝗋𝖾𝖺𝗌𝗂𝗇𝗀 𝗍𝖾𝗋𝗆𝗌,
where the trace-decreasing terms are terms in 𝖲 that can be read directly from Eq. (<ref>). In this way one can express the most general form of the purity of the average state ϱ̃_m in circuit depth m.
However, we care, of course, about the situation in which the decay is exponential in m with the unitarity as the rate of decay.
* We may consider first that the noise is approximately gate- and time-independent, E_i≈E, in which case at most we can get a double exponential in 𝗎(E) and 𝖲(E), as in <cit.>. This approximation can be formalized by assuming that the average over time steps and gates is the leading contribution to the actual noise; see e.g., <cit.>.
* If we consider noise that is only gate-independent but that is unital and tp, we have
tr(ϱ̃_m^2) = 𝔼_{𝖫_i∼Ω, ψ∼Ω_1} 𝔉_ψ(E_𝗌𝗉𝖺𝗆^†U_1^†E_1^†⋯U_m^†E_m^†E_mU_m⋯E_1U_1E_𝗌𝗉𝖺𝗆)
= 𝔼_{ψ∼Ω_1} ⟨ψ| 𝔼_{𝖫_i∼Ω} E_𝗌𝗉𝖺𝗆^†U_1^†E_1^†⋯U_m^†E_m^†E_mU_m⋯E_1U_1E_𝗌𝗉𝖺𝗆(|ψ⟩⟨ψ|) |ψ⟩,
and we can average recursively over each layer.
* This circuit averaging, however, becomes significant if the layer set at least forms an ϵ-approximate unitary 2-design [ A unitary t-design is a probability measure μ on the d-dimensional unitary group U(d) (or a subset thereof), such that T^(t)_μ = T_𝖧𝖺𝖺𝗋^(t), where T^(t)_μ(·):=∫_U(d)dμ(U)U^⊗t(·)(U^⊗t)^† is a t-twirl over the unitary group. There are several nonequivalent, albeit often related, ways of approximating unitary designs; here we employ that in <cit.>, where (1-ϵ)T_μ^(t)≼T^(t)_𝖧𝖺𝖺𝗋≼(1+ϵ)T^(t)_μ with A≼B here if and only if A-B is cp, which can be interpreted in a straightforward way as having a small multiplicative factor of difference when measuring a state acted on with T^(t)_μ vs with T^(t)_𝖧𝖺𝖺𝗋. Finally, for the case t=2, the 2-twirl on a cp map Φ can be stated in terms of its matrix representation Φ through T_μ^(2)(Φ), equivalent to the expression ∫_U(d)dμ(U)U^†Φ(UρU^†)U for any state ρ.], to the effect that 𝔼_{𝖫∼Ω}U^†_𝖫E^†EU_𝖫≈Ξ̃_{p^2} for a global depolarizing channel Ξ̃_{p^2} with polarization
p^2 = (∑|tr[K_α^†K_β]|^2 - 1)/(4^n - 1)
= (tr[E^†E] - 1)/(4^n - 1)
= (4^n 𝖥_𝖾(E^†E) - 1)/(4^n - 1)
= (2^n 𝔉(E^†E) - 1)/(2^n - 1),
where in the first line {K_μ} are the Kraus operators of E <cit.>, or equivalently in the second line with E being a matrix representation of E [ It is possible to identify X=[ S(E) X_𝗌𝖽𝗅; X_𝗇 X_𝗎𝗇𝗂𝗍𝖺𝗅 ], where S(E), X_𝗌𝖽𝗅, X_𝗇 and X_𝗎𝗇𝗂𝗍𝖺𝗅 are trace-decreasing, state-dependent leakage, non-unital and unital components (of dimensions 1, 1×(4^n-1), (4^n-1)×1 and square (4^n-1)), respectively. This leads to the expression in <cit.> for the average unitarity.], or in terms of the entanglement fidelity through Eq. (<ref>) in the third line, or finally through the average gate-fidelity <cit.>, as we write in the main text. Thus, if we further consider Ω-distributed circuits that form an approximate unitary 2-design, we may write
tr(ϱ̃_m^2) ≈ 𝔼_{ψ∼Ω_1} ⟨ψ|E_𝗌𝗉𝖺𝗆^†Ξ̃_{p_m^2}⋯Ξ̃_{p_1^2}E_𝗌𝗉𝖺𝗆(|ψ⟩⟨ψ|)|ψ⟩,
where the approximation “≈” here neglects the small ϵ multiplicative corrections, due to the property in Eq. (<ref>), and where we intentionally defined the polarizations to be squares, given that p_i^2 = 𝗎(Ξ̃_{p_i}) [ We notice that this does not mean that Ξ̃_p corresponds to (approximately) the average of a single E (which only occurs if E itself is already depolarizing).]. For approximately time-stationary noise, to the effect that all polarizations are equal, p_1=⋯=p_m=p, this is equivalent to the exponential decay
tr(ϱ̃_m^2) ≈ A 𝗎(Ξ̃_p)^m + 1/2^n,
where A = 2^-n(2^n-1) 𝔼_{ψ∼Ω_1} 𝗎_ψ(E_𝗌𝗉𝖺𝗆) is a spam constant. Of course, given that 𝗎(Ξ̃_p)=p^2, this unitarity can also be expressed through 𝔉(E), as done in Eq. (<ref>) in the main text.
§ OPERATIONAL ESTIMATION OF THE PURITY
Now we know how the purity of the noisy average state ϱ̃_m in Eq. (<ref>) behaves in terms of the average unitarity of noise with respect to circuit depth m. The task is then to estimate this quantity via rms.
§.§ Purity estimation through randomized measurements
rms found some of their first applications in estimating the purity of the quantum state of a subsystem <cit.>. Specifically, here we will focus on the results of <cit.>, whereby it is shown that for a reduced quantum state ρ_𝖠 = tr_𝖡(ρ_𝖠𝖡), from some composite closed quantum system 𝖠𝖡, where here we assume 𝖠 to be an n-qubit quantum system, its purity can be expressed as
tr(ρ_𝖠^2) = 2^n ∑_{𝐬,𝐬^'} (-2)^{-h(𝐬,𝐬^')} 𝔼_{𝖴∼𝖧𝖺𝖺𝗋}[𝖯_𝖴(𝐬)𝖯_𝖴(𝐬^')],
where 𝐬 and 𝐬^' are n-bit strings, h(𝐬,𝐬^') is the Hamming distance (number of distinct bits) between them, 𝔼_{𝖴∼𝖧𝖺𝖺𝗋} denotes uniform averaging with the Haar measure over the local unitaries 𝖴=⊗_{i=1}^n 𝖴_i [ That is, 𝔼_𝖧𝖺𝖺𝗋 here refers to a uniform and independent average over all the local unitaries 𝖴_i with the Haar measure, i.e., 𝔼_𝖧𝖺𝖺𝗋 ∼ 𝔼_{𝖴_1}𝔼_{𝖴_2}⋯𝔼_{𝖴_n}.], and the
𝖯_𝖴(𝐬):=⟨𝐬| U_𝖴(ρ_𝖠)|𝐬⟩,
are noiseless probabilities of observing the n-bit string 𝐬 upon applying the unitary 𝖴 on the reduced state ρ_𝖠.
The main advantage here is that the quantum component of estimating purity this way is just the probabilities in Eq. (<ref>), which simply means applying a product of random unitaries and measuring each on a user-specified basis. The rest, i.e., fully estimating Eq. (<ref>), is purely classical post-processing of said probabilities. For n qubits, the number of distinct terms in the sum over n-bit strings in Eq. (<ref>) is 2^{n-1}(2^n-1), since the Hamming distance is symmetric, h(𝐬,𝐬^') = h(𝐬^', 𝐬); however, this sum is only a function of the probabilities 𝖯_𝖴, so it could, for example, be computed in parallel.
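As an illustration of this classical post-processing, the sketch below (NumPy only; the helper names are ours and purely illustrative) builds the 2^n×2^n kernel of weights 2^n(-2)^{-h(𝐬,𝐬^')} once, exploiting that it factorizes as a Kronecker power of a 2×2 block, and evaluates the corresponding quadratic form for a single randomized measurement; averaging the returned value over the sampled measurement unitaries (with the diagonal 𝐬=𝐬^' terms in practice replaced by the unbiased squared-probability estimator discussed above) yields the purity estimate.

```python
import numpy as np

def hamming_kernel(n_qubits: int) -> np.ndarray:
    """2^n x 2^n matrix with entries 2^n * (-2)^(-h(s, s')).
    It factorises as a Kronecker power of a 2x2 block, so building it is cheap."""
    block = np.array([[1.0, -0.5], [-0.5, 1.0]])
    kernel = np.array([[1.0]])
    for _ in range(n_qubits):
        kernel = np.kron(kernel, block)
    return (2.0 ** n_qubits) * kernel

def purity_contribution(probs: np.ndarray, kernel: np.ndarray) -> float:
    """Contribution 2^n * sum_{s,s'} (-2)^(-h(s,s')) P_U(s) P_U(s') of a single
    randomized-measurement unitary U; averaging this over the sampled U's
    estimates tr(rho^2).  In practice the diagonal (s = s') terms should use
    the unbiased estimator of P_U(s)^2 rather than the plain square."""
    return float(probs @ kernel @ probs)

# toy usage: uniform outcome probabilities, as for the 2-qubit maximally mixed state
probs = np.full(4, 0.25)
print(purity_contribution(probs, hamming_kernel(2)))  # 0.25, the purity of the maximally mixed state
```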
§.§ Purity of many-layer Omega-distributed circuits through randomized measurements
Let us again begin by considering the case m=1 of a single random layer, so that the equivalent of the probability in Eq. (<ref>), obtained by measuring the state ϱ̃_1 with an rm using a random unitary 𝖶∈L(G_1) over Ω_1, reads as
𝖯_{1,𝖶}(𝐬) = tr[ϕ(𝖶)(|𝐬⟩⟨𝐬|) ϱ̃_1]
= 𝔼_{𝖫_1∼Ω, 𝖵∼Ω_1} tr[ϕ(𝖶)(|𝐬⟩⟨𝐬|) ϕ(𝖫_1)ϕ(𝖵)(|0⟩⟨0|)]
= 𝔼_{𝖫_1∼Ω, ψ∼Ω_1} tr[E_{𝗌𝗉𝖺𝗆_𝗐}^' U^'_𝖶(|𝐬⟩⟨𝐬|) E_1 U_1 E_{𝗌𝗉𝖺𝗆_𝗏}(|ψ⟩⟨ψ|)]
= 𝔼_{𝖫_1∼Ω, ψ∼Ω_1} ⟨𝐬| U_𝖶 E_{𝗌𝗉𝖺𝗆_𝗐} E_1 U_1 E_{𝗌𝗉𝖺𝗆_𝗏}(|ψ⟩⟨ψ|)|𝐬⟩,
where we again defined U_𝖵(|0⟩⟨0|)=|ψ⟩⟨ψ| as a random (according to a uniform U_𝖵∼Ω_1) pure state, and identified each E_𝗌𝗉𝖺𝗆 as the noise contribution to spam; in the penultimate line (purely for notational reasons) we defined ϕ(𝖶):=E_{𝗌𝗉𝖺𝗆_𝗐}^' U^'_𝖶, and in the last line U_𝖶 = U_𝖶^{'†} and E_{𝗌𝗉𝖺𝗆_𝗐} = E_{𝗌𝗉𝖺𝗆_𝗐}^{'†}.
Putting this together with the probability of getting another n-bit string 𝐬^' with the same layer sequence,
𝖯_{1,𝖶}(𝐬)𝖯_{1,𝖶}(𝐬^')
= 𝔼_{𝖫_1∼Ω, ψ∼Ω_1} ⟨𝐬|U_𝖶 E_{𝗌𝗉𝖺𝗆_𝗐} E_1 U_1 E_{𝗌𝗉𝖺𝗆_𝗏}(|ψ⟩⟨ψ|)|𝐬⟩ ⟨𝐬^'| U_𝖶 E_{𝗌𝗉𝖺𝗆_𝗐} E_1 U_1 E_{𝗌𝗉𝖺𝗆_𝗏}(|ψ⟩⟨ψ|)|𝐬^'⟩
= 𝔼_{𝖫_1∼Ω, ψ∼Ω_1} ⟨𝐬| U_𝖶 E_{𝗌𝗉𝖺𝗆_𝗐} E_1 U_1 E_{𝗌𝗉𝖺𝗆_𝗏}[|ψ⟩⟨ψ| E_{𝗌𝗉𝖺𝗆_𝗏}^† U_1^† E_1^† E_{𝗌𝗉𝖺𝗆_𝗐}^† U_𝖶^†(|𝐬^'⟩⟨𝐬^'|) |ψ⟩⟨ψ|]|𝐬⟩
= 𝔼_{𝖫_1∼Ω, ψ∼Ω_1} ⟨𝐬| U_𝖶 E_{𝗌𝗉𝖺𝗆_𝗐} E_1 U_1 E_{𝗌𝗉𝖺𝗆_𝗏} P_ψ E_{𝗌𝗉𝖺𝗆_𝗏}^† U_1^† E_1^† E_{𝗌𝗉𝖺𝗆_𝗐}^† U_𝖶^†(|𝐬^'⟩⟨𝐬^'|)|𝐬⟩,
where in the last line we defined P_ψ(·):=|ψ⟩⟨ψ|·|ψ⟩⟨ψ|.
Now we can exploit the identity derived in <cit.>,
𝖥_𝖾(X) = ∑_x (-2)^{-h(x,y)} ⟨x| 𝔼_{𝖴_i∼𝖧𝖺𝖺𝗋}{U_𝖴^† X U_𝖴}[|y⟩⟨y|] |x⟩,
where 𝖥_𝖾 here is the entanglement fidelity, defined in Eq. (<ref>), and U_𝖴 is the noiseless unitary map corresponding to the product 𝖴=⊗_{i=1}^n 𝖴_i with local, single-qubit random unitaries 𝖴_i. Since this expression requires two copies of U_𝖴, where only the individual 𝖴_i should be Haar random, it suffices for these (as opposed to the global unitary) to belong to a unitary 2-design, e.g., the single-qubit Clifford group.
We emphasize then, that we will require the rms to be randomized by a layer made up of uniformly random single-qubit Clifford gates; while in the main text we fix this to correspond to G_1 and Ω_1, all other layers in principle can be chosen arbitrarily (with the reasoning of having some better choices than others detailed in the previous sections).
We can now apply Eq. (<ref>) on Eq. (<ref>), so that
tr(ϱ̃_1^2) ≃ 4^n 𝔼_{𝖫_1∼Ω, ψ∼Ω_1} 𝖥_𝖾(E_{𝗌𝗉𝖺𝗆_𝗐}E_1U_1E_{𝗌𝗉𝖺𝗆_𝗏}P_ψE_{𝗌𝗉𝖺𝗆_𝗏}^† U_1^†E_1^†E_{𝗌𝗉𝖺𝗆_𝗐}^†)
= 𝔼_{𝖫_1∼Ω, ψ∼Ω_1} 𝔉_ψ(E_{𝗌𝗉𝖺𝗆_𝗏}^† U_1^†E_1^†E_{𝗌𝗉𝖺𝗆_𝗐}^†E_{𝗌𝗉𝖺𝗆_𝗐}E_1 U_1E_{𝗌𝗉𝖺𝗆_𝗏}),
where the second line follows directly from the definition of entanglement fidelity. This generalizes to any circuit depth m, to
tr(ϱ̃_m^2) = 𝔼_{𝖫_i∼Ω, ψ∼Ω_1} 𝔉_ψ(E_{𝗌𝗉𝖺𝗆_𝗏}^† U_1^†E_1^†⋯U_m^†E_m^†E_{𝗌𝗉𝖺𝗆_𝗐}^†E_{𝗌𝗉𝖺𝗆_𝗐}E_m U_m⋯E_1 U_1E_{𝗌𝗉𝖺𝗆_𝗏}),
which coincides with Eq. (<ref>) up to E_𝗌𝗉𝖺𝗆_𝗐.
Now, similar to point <ref> in <ref> above, we mainly care about an exponential decay, which occurs for Ω-distributed circuits that approximate a unitary 2-design and noise that is approximately gate- and time-independent, unital and tp. We will now have
tr(ϱ̃_m^2) ≈ A 𝗎(Ξ̃_p)^m + 1/2^n,
for A = 2^-n(2^n-1) 𝗎(Ξ̃_{q_{𝗌𝗉𝖺𝗆_𝗐}}) 𝔼_{ψ∼Ω_1} 𝗎_ψ(E_{𝗌𝗉𝖺𝗆_𝗏}), where Ξ̃_{q_{𝗌𝗉𝖺𝗆_𝗐}} is the contribution from the layer average of E_{𝗌𝗉𝖺𝗆_𝗐}; of course, both unitarity terms in A cannot be distinguished and can simply be interpreted as an average unitarity of spam.
The result in Eq. (<ref>) corresponds to that with an exact average over layers and initial states; this average can be approximated numerically with Eq. (<ref>) given a number N_𝖶 of randomized measurements, N_{𝖢_m} of Ω-distributed circuits, and N_𝖵 of initial randomization samples, respectively.
§ OPERATIONAL ESTIMATION OF FIDELITY THROUGH RANDOMIZED MEASUREMENTS
Since estimating the average unitarity is really useful only when informed by the average fidelity, we now consider estimating the average layer fidelity of the same Ω-distributed circuits, once given a set of randomized measurement probabilities 𝖯_𝖴.
Naturally, if the noiseless output is a state |φ⟩, Eq. (<ref>) can be modified to account for its fidelity with respect to a mixed state ρ as
⟨φ|ρ|φ⟩ = 2^n ∑_{𝐬,𝐬_𝗂𝖽𝖾𝖺𝗅} (-2)^{-h(𝐬,𝐬_𝗂𝖽𝖾𝖺𝗅)} 𝔼_{𝖴∼𝖧𝖺𝖺𝗋} 𝖯_𝖴(𝐬)𝖯^{(𝗂𝖽𝖾𝖺𝗅)}_𝖴(𝐬_𝗂𝖽𝖾𝖺𝗅),
where 𝖯^(𝗂𝖽𝖾𝖺𝗅)_𝖴 is the probability of observing a n-bit string 𝐬_𝗂𝖽𝖾𝖺𝗅 with a randomized measurement defined by 𝖴 on |φ⟩⟨φ|, that is
𝖯^(𝗂𝖽𝖾𝖺𝗅)_𝖴(𝐬_𝗂𝖽𝖾𝖺𝗅) = |⟨𝐬_𝗂𝖽𝖾𝖺𝗅|𝖴|φ⟩|^2.
Notice that this assumes that we can efficiently compute the n-bit strings, and thus the probabilities, corresponding to the noiseless outputs. This generally implies limiting the gate sets that can be employed to only Clifford gates; however, for mid-scale estimations the ideal probabilities can still be computed without a major overhead when the gate set contains non-Clifford gates sampled with a relatively low probability.
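In the same spirit as the purity post-processing sketched above, the classical side of the fidelity estimator can be summarized in a few lines (NumPy only; names are ours and purely illustrative): the measured probabilities of one randomized measurement are combined with the classically computed noiseless probabilities through the same Hamming-distance kernel, and the average over the measurement unitaries estimates the overlap with the ideal output state. For Clifford-only circuits, the ideal probability vector can for instance be obtained with a stabilizer simulator.

```python
import numpy as np

def fidelity_contribution(probs_meas: np.ndarray, probs_ideal: np.ndarray,
                          kernel: np.ndarray) -> float:
    """Contribution 2^n * sum_{s, s_ideal} (-2)^(-h) P_U(s) P_U^ideal(s_ideal) of a
    single randomized-measurement unitary U, with `kernel` built exactly as in the
    purity sketch above; averaging over the sampled U's estimates <psi|rho|psi>."""
    return float(probs_meas @ kernel @ probs_ideal)
```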
In our case, we want to estimate the quantity ⟨ψ_m|ϱ̃_m|ψ_m⟩, where
|ψ_m⟩⟨ψ_m| := 𝖢_m^{(𝗂𝖽𝖾𝖺𝗅)}(|0⟩⟨0|) = U_m⋯U_1U_𝖵(|0⟩⟨0|) = U_m⋯U_1(|ψ⟩⟨ψ|).
Thus, consider first the case m=1; we have
𝖯_{1,𝖶}(𝐬)𝖯^{(𝗂𝖽𝖾𝖺𝗅)}_{1,𝖶}(𝐬_𝗂𝖽𝖾𝖺𝗅) = 𝔼_{𝖫_1∼Ω, ψ∼Ω_1} ⟨𝐬| U_𝖶 E_{𝗌𝗉𝖺𝗆_𝗐} E_1 U_1 E_{𝗌𝗉𝖺𝗆_𝗏} P_ψ U_1^† U_𝖶^†(|𝐬_𝗂𝖽𝖾𝖺𝗅⟩⟨𝐬_𝗂𝖽𝖾𝖺𝗅|)|𝐬⟩,
which follows as in Eq. (<ref>), with P_ψ(·) := |ψ⟩⟨ψ|(·)|ψ⟩⟨ψ| as before. Similarly, we can follow all steps from Eq. (<ref>) to Eq. (<ref>) in an analogous way to obtain
⟨ψ_m|ϱ̃_m|ψ_m⟩ = 𝔼_{𝖫_i∼Ω, ψ∼Ω_1} 𝔉_ψ(U_1^†⋯U_m^†E_{𝗌𝗉𝖺𝗆_𝗐}E_m U_m⋯E_1 U_1E_{𝗌𝗉𝖺𝗆_𝗏}),
which is simply an rb-like fidelity decay. Similarly here, if the Ω-distributed circuits approximate a unitary 2-design, to the effect that 𝔼_{𝖫∼Ω}U^†_𝖫E_iU_𝖫≈Ξ̃_{p_i}, i.e., the average noise is approximately a global depolarizing channel with p_i=(2^n𝔉(E_i)-1)/(2^n-1), we end up with
⟨ψ_m|ϱ̃_m|ψ_m⟩ ≈ 𝔼_{ψ∼Ω_1} 𝔉_ψ(Ξ̃_{p_m^'}Ξ̃_{p_{m-1}}⋯Ξ̃_{p_1}E_{𝗌𝗉𝖺𝗆_𝗏})
= 𝔼_{ψ∼Ω_1} 𝔉_ψ(Ξ̃_{p_{𝗌𝗉𝖺𝗆_𝗐}}Ξ̃_{p_m p_{m-1}⋯p_1}E_{𝗌𝗉𝖺𝗆_𝗏}),
which is an rb-like decay of the state fidelity in the circuit depth m, where we defined p_m^' = p_m p_{𝗌𝗉𝖺𝗆_𝗐} for some p_{𝗌𝗉𝖺𝗆_𝗐}. Further, for time-independent noise, such that p_1=p_2=…=p_m=p, this is an exponential decay in m,
⟨ψ_m|ϱ̃_m|ψ_m⟩ ≈ Ap^m + B,
where A = p_{𝗌𝗉𝖺𝗆_𝗐}[𝔼_{ψ∼Ω_1}𝔉_ψ(E_{𝗌𝗉𝖺𝗆_𝗏}) - 2^-n], B = 2^-n, and
p = (2^n 𝔉(E) - 1)/(2^n - 1),
which can similarly be written in terms of entanglement fidelity of E, or in terms of its matrix representation, similar to the usual rb constants for unital spam.
This means we can generate a set of random Ω-distributed circuits, attach some set of randomized measurements, and use the probabilities to simultaneously estimate average layer fidelity and unitarity.
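The final fitting step can be done with standard tools; the minimal sketch below (assuming SciPy is available; the helper and its defaults are ours, not part of the protocol) fits the common model y(m) = A f^m + 1/2^n with the offset held fixed, so that f is the average layer unitarity when fed the depth-resolved purity averages, and the polarization p, hence the average layer fidelity, when fed the fidelity averages.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_rb_decay(depths, averages, n_qubits):
    """Fit y(m) = A * f**m + 1/2**n to averaged sequence purities or fidelities.
    Returns (A, f): f is the average layer unitarity (purity fit) or the
    polarization p (fidelity fit), from which the average layer fidelity
    follows as F = (p * (2**n - 1) + 1) / 2**n."""
    offset = 1.0 / 2 ** n_qubits
    model = lambda m, A, f: A * f ** m + offset
    (A, f), _ = curve_fit(model, np.asarray(depths, float), np.asarray(averages, float),
                          p0=(1.0 - offset, 0.9), bounds=([0.0, 0.0], [1.0, 1.0]))
    return A, f
```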
§ SCRAMBLING AND UNITARY 2-DESIGNS
Here we refer to both assumptions, <ref>, <ref>, as written in the main text in <ref>, for an exponential decay of the average sequence purity in circuit depth.
While assumption <ref> is beyond the choices that the user executing the protocol <ref> can make, assumption <ref> is important to make the protocol <ref> adhere to the exponential decay of main Result <ref>, but it also sets a practical limit in scalability to mid-scale systems, which we discuss below.
There is, furthermore, a fundamental limit to scalability imposed by the sample complexity of estimating purity via rms, i.e., the number of experimental samples, N_𝗆𝖾𝖺𝗌 in Eq. (<ref>), needed to estimate purity within a given error. This is discussed in the main text in <ref>, and it compounds with the sample complexity of the Ω-distributed circuits, related to assumption <ref>.
The need for unitary 2-designs in rb-based techniques stems from the fact that, by definition, these inherit the property of fully random (i.e. Haar distributed) unitaries whereby their second moment is described by a depolarizing channel, which requires a single parameter to be fully specified. This implies that figures of merit, such as average gate fidelity or average gate unitarity can be encapsulated into such polarization parameter (once assumption <ref> is satisfied) given circuits that generate a unitary 2-design.
Loosely speaking, a unitary 2-design is a probability distribution μ on the unitary group, or a subset thereof, satisfying 𝔼_{V∼μ}V^†XV = 𝔼_{U∼𝖧𝖺𝖺𝗋}U^†XU for any quantum channel X. The action 𝔼_{V∼μ}V^†(·)V is referred to as a 2-twirl with μ. That is, the 2-twirl with μ reproduces the second moment of the whole unitary group with the Haar measure. That an ensemble of unitaries constitutes a 2-design is particularly useful because it is known that doing a 2-twirl with the Haar measure on a cp map reduces it to a depolarizing channel <cit.>, i.e., also for a 2-design 𝔼_{V∼μ}V^†XV(·) = p(·) + (1-p)𝟙/2^n, where here p=(2^n𝔉(X)-1)/(2^n-1), with 𝔉(X) the average gate fidelity of X. Both Eq. (<ref>) and Eq. (<ref>) are obtained using this relation.
The notion of a unitary 2-design can further be relaxed by having equality up to a small ϵ according to a given metric, to the effect that the 2-twirl is also close to a depolarizing channel. More precisely, denoting a 2-twirl by Δ_μ(·):=𝔼_{V∼μ}V^†(·)V, we say that V is an ϵ-approximate unitary 2-design if
(1-ϵ)Δ_𝖧𝖺𝖺𝗋≼Δ_μ≼(1+ϵ)Δ_𝖧𝖺𝖺𝗋,
where here ≼ is semidefinite ordering in the sense that X≼Y if and only if X-Y is a cp map. This definition was put forth (for the more general setting of t-designs) in <cit.> and has the particularly simple interpretation of having a small multiplicative factor 1±ϵ of difference when measuring a state acted on with a channel twirled with Δ_μ as opposed to with Δ_𝖧𝖺𝖺𝗋 [ This definition is further connected in <cit.> to the usual diamond norm definition as implying that if V is an ϵ-approximate unitary 2-design, then also ‖Δ_μ-Δ_𝖧𝖺𝖺𝗋‖_♢≤2ϵ, where ‖X‖_♢:=sup_ρ‖(I_d⊗X)ρ‖_1, and ‖X‖_1:=tr√(XX^†) denotes the trace norm, the supremum being taken over all dimensions d≥1 of the identity and corresponding density matrices ρ.]. Thus, the approximation sign in both Eq. (<ref>) and Eq. (<ref>) refers to these small multiplicative factors in the 2-design approximation, exponential in m.
A well-known example of an exact 2-design is the n-qubit Clifford group <cit.>: it is also simultaneously a strong reason
why standard [ As can be seen in <cit.> and <cit.>, there is a plethora of techniques under the term rb; by standard we mean single or two-qubit RB with corresponding Clifford gate sets, estimating the group's average gate fidelity by fitting survival probabilities to an exponential decay in sequence length.] Clifford rb works so well, and the culprit (in practice) of why it does not scale in n. Techniques akin to standard Clifford rb, such as drb, mrb and birb resolve such scalability issues by, among other things, retaining some of the randomness through Ω-distributed circuits. Rather than requiring an approximate unitary design property, e.g., mrb <cit.> and birb <cit.> require a scrambling property in the sense that Pauli errors get quickly spread among other qubits before other Pauli errors occur. This can be stated in terms of an entanglement fidelity for non-identity Paulis P, P^', with action P(·)=P(·)P and P'(·)=P^'(·)P^', such that for an expected infidelity of the layers α, there is k≪1/α and a δ≪1 with
𝔼_Ω 𝖥_𝖾[P C_Ω P^' C_Ω^{-1}] ≤ δ + 1/4^n,
where here C_Ω=U(𝖫_k⋯𝖫_1), with U the unitary map of the sequence of noiseless layers 𝖫_k⋯𝖫_1. Similarly, drb uses generators of unitary 2-designs as the benchmarking gate set and then requires that these constitute a sequence-asymptotic unitary 2-design <cit.>.
While the scrambling condition in Eq. (<ref>) is stated for Pauli noise channels, the choice of (G_1,Ω_1) being the uniformly distributed single-qubit Clifford group has the effect of projecting any cp map (modeling Markovian noise) to a Pauli channel (i.e., a tensor product of depolarizing channels) since each Clifford group constitutes a unitary 2-design on the respective qubit.
The reason approximating a unitary 2-design is a stronger condition than scrambling, as defined by Eq. (<ref>), is that it would require δ shrinking as O(4^-n) [ This can be seen e.g., by assuming that U(𝖫_k⋯𝖫_1) generates a unitary 2-design and using Eq. (<ref>)]. Nevertheless, while we directly assume that our Ω-distributed circuits approximate a unitary 2-design, for a mid-scale of tens of qubits the scrambling condition suffices, and indeed it holds in a similar way as it does in mrb or birb [ A difference of our technique with mrb is that we do not employ a mirror structure, while one with birb is that we do not employ initial and final stabilizers.], as exemplified in <ref>. Increasing the qubit count to estimate average unitarity would require establishing a scrambling condition, e.g., as for birb by employing stabilizer states and measurements, or otherwise relaxing the approximate design condition.
§ NUMERICAL ADDENDUM
§.§ The simulator backend
As explained in the main text in <ref>, we performed experiments on , a 5-qubit superconducting system with a central qubit connected to the rest, and simulations with a backend of a total of 20 qubits in a grid topology, connected as depicted in the graph of Fig. <ref>. In the simulation, we considered a subset of these qubits, namely qubits ranging from qubit 3 to qubit 12, in pairs as depicted in Fig. <ref>. All qubits were associated with a noise model on readout, gate times, single- and two-qubit depolarizing noise, and decoherence, T_1 and T_2 times.
§.§ Simulation on 10 qubits
We now employ a simulator backend that considers 20 possible qubits with a grid topology, as displayed fully in Fig. <ref> of Appendix <ref>. All operations (except barriers) are modeled as noisy, in this particular example with a set of parameters considering a range of relaxation times T_1, dephasing times T_2, gate duration parameters, depolarizing parameters on both single- and two-qubit gates, and readout errors (bit-flip probabilities).
Similarly in this case, according to Protocol <ref>, we extracted decay rates of the average state purity and fidelity in increasing depths, and thus average layer unitarity and fidelity for Ω-distributed circuits with up to 10 qubits. In Fig. <ref>, we show the corresponding purity decays in increasing depth. We display only even qubit numbers n for ease of visualization, where each n corresponds to choices of qubits and couplings as shown in the inset. In all cases, we employed unbiased estimators and K=2 moms estimators.
In Fig. <ref> we show all average layer fidelities and unitarities in increasing number of qubits, together with the Pauli noise unitarity intervals as predicted by the bound in Ineq. (<ref>). In the corresponding inset, we show the average state fidelity decays in increasing depth for each number of qubits n, and thus the corresponding average gate fidelities; similar to the case of , these were computed from the measurements within the same experiment, using the estimator in Eq. (<ref>) together with the measurements of the noiseless circuits. The noiseless probabilities were estimated with the same number of N_𝗆𝖾𝖺𝗌=2^10 shots per measurement.
Generally, while variances in the decays are relatively high, the averages show only a modest deviation up to 9 qubits. The noise model is dominated by stochastic errors, so it is expected that most points should fall within the Pauli unitarity bound; the cases of n=6,8 are indeed outliers, which in an experiment would need to be investigated further; in our case they can most likely be explained by both the qubit arrangement and the amplitude damping T_1 parameters involved. It should be noted that for simulation on a common laptop, going beyond 10 qubits becomes quite computationally demanding.
§.§ Comparison of layer fidelity estimations with MRB
The Ω-distributed circuits we consider are a primitive component in mrb <cit.>, thus the average fidelity estimations from both our technique and mrb should agree. There are two main differences between both procedures: one is that we do not construct a “mirror” circuit, i.e., we do not append the circuit made up of the inverses of the original Ω-distributed circuit, and the second is that we do not interleave layers of random Pauli gates. Our technique is able to estimate average unitarity too, although this comes at a sampling complexity cost that requires exponentially many more samples to increase accuracy.
§.§.§ IQM Spark (TM)
The estimated fidelities from randomized measurements should be consistent with those estimated by mrb <cit.>. Indeed, a primitive of mrb is Ω-distributed circuits; a difference from our technique is that we do not implement a mirror (i.e., the sequence of inverses) nor interleave random Pauli operators. Nevertheless, both techniques can estimate the same quantity (mrb through polarizations instead of state fidelities); rms additionally give access to the unitarity. In Fig. <ref> we show the fidelity decays obtained experimentally with mrb about a month ago on , with the same Ω-distributed circuits considered in <ref>, namely made up of single-qubit Clifford gates uniformly sampled and 𝖢𝖹 gates with a two-qubit gate density of 1/2 according to the edge-grab sampler.
Clearly, while the estimated average layer fidelities agree, the distributions have a smaller variance than those in Fig. <ref>, and were obtained with fewer total circuit samples. It is conceivable, then, that average fidelities may alternatively be computed by constructing mirror circuits and following the mrb protocol <cit.>, at the expense of performing separate measurements of these circuits.
§ UNITARITY AND RANDOMIZED COMPILING
To begin with, let us label the n-qubit Pauli group P_n, i.e., the group made up of all n-fold products of Pauli operators (1,X,Y,Z), as in <cit.>: let a be a 2n-bit string, a=a_1a_2…a_2n, and define
P_a := i^a^TΥa∏_j=1^nX_j^a_2j-1Z_j^a_2j,
where X_j and Z_j are single-qubit Pauli operators acting on the jth qubit and Υ:=⊕_j=1^n([ 0 1; 0 0 ]) such that P_a is ensured to be Hermitian. The Pauli group P_n is then made up of all such P_a together with their products and their overall phases {±1,±i}; e.g., P_1={±1,±i1,±X,…,±iZ}. In particular, denoting any other 2n-bit strings with bold lowercase letters, we will make use of the property
P_aP_b = (-1)^⟨a,b ⟩P_bP_a,
where
⟨a,b ⟩ := a^T(Υ+Υ^T)b mod 2
= (a_1b_2 + a_2b_1 + a_3b_4 + a_4b_3 + …+a_2nb_2n-1+a_2n-1b_2n) mod 2,
which in particular is such that
(1/4^n)∑_a(-1)^⟨a,b+c ⟩ = δ_bc .
Let us then write a generic n-qubit noise channel (i.e., some cp map) in the so-called χ-representation, E(·)=∑_a,bχ_abP_a(·)P_b, where χ is a positive Hermitian matrix. We can also relate this representation to the so-called ptm representation, which is a 4^n-square matrix with real entries given by
E_ab := (1/2^n) tr[P_aE(P_b)]
= (1/2^n)∑_c,dχ_cd tr[P_aP_cP_bP_d],
which is not quite informative by itself; however, a particular case where this representation is useful is for the case of a Pauli channel, which has for χ a diagonal matrix, implying that its ptm is diagonal too. A Pauli channel can be enforced on any channel by twirling it with the Pauli group, i.e.,
E ↦ E^(·) := (1/4^n)∑_P∈P_nPE(P·P)P,
which gives
E^_ab = (1/2^3n)∑_c,d,qχ_cd tr[P_aP_qP_cP_qP_bP_qP_dP_q]
= (1/2^3n)∑_c,d,q (-1)^⟨q,c+d⟩ χ_cd tr[P_aP_cP_bP_d]
= (1/2^n)∑_c,dχ_cdδ_cd tr[P_aP_cP_bP_d]
= (1/2^n)∑_dχ_dd tr[P_aP_dP_bP_d]
= δ_ab∑_d (-1)^⟨a,d⟩χ_dd := δ_ab∑_d (-1)^⟨a,d⟩α_d,
where we made use of both Eq. (<ref>) and Eq. (<ref>), so that indeed E^, the ptm of the twirled map E^, is manifestly diagonal with eigenvalues λ_a:=∑_b (-1)^⟨a,b⟩α_b, where we write the diagonal elements with a single index as α_a:=χ_aa.
The diagonal elements α are the so-called Pauli error probability rates, conversely related to the ptm eigenvalues as α_a=2^-n∑_b(-1)^⟨a,b⟩λ_b, and in terms of these, the trace non-increasing condition translates to ∑_aα_a≤1, saturating for E being tp.
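As a sanity check on the diagonalization above, the following single-qubit sketch (our illustration; the example channel and its parameters are assumptions, not taken from the paper) builds the ptm of a channel with both coherent and amplitude-damping components, applies the exact Pauli twirl, and verifies that the twirled ptm is diagonal with the original diagonal preserved.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

def channel(rho, kraus):
    return sum(k @ rho @ k.conj().T for k in kraus)

def ptm(apply_channel):
    # R_ab = (1/2) tr[P_a E(P_b)] for P_a, P_b in {I, X, Y, Z}
    return np.real(np.array([[np.trace(pa @ apply_channel(pb)) / 2
                              for pb in paulis] for pa in paulis]))

# noisy channel E: an over-rotation about Z followed by weak amplitude damping
theta, gamma = 0.2, 0.05
u = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
k0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]]) @ u
k1 = np.array([[0, np.sqrt(gamma)], [0, 0]]) @ u
E = lambda rho: channel(rho, [k0, k1])

# exact Pauli twirl: (1/4) sum_P  P E(P rho P) P
E_twirled = lambda rho: sum(p @ E(p @ rho @ p) @ p for p in paulis) / 4

R, R_tw = ptm(E), ptm(E_twirled)
print(np.round(R, 4))      # has off-diagonal entries from the coherent rotation and damping
print(np.round(R_tw, 4))   # diagonal; diag(R_tw) equals diag(R)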
It follows that the average gate fidelity of a Pauli channel is given by
F(E^) = (tr[E^]+2^n)/(2^n(2^n+1))
= (∑_aλ_a+2^n)/(2^n(2^n+1))
= (2^nα_0+1)/(2^n+1)
= F(E)
= 1 - (2^n/(2^n+1))∑_i≠0α_i,
where in the penultimate line we highlighted that it is the same as for the raw (not-twirled) channel E, and where we used Eq. (<ref>), with 0 being the 2n-bit string with all bits being zero, and in the last line the case ∑_aα_a=1 for E being tp; such an expression makes it manifest that average gate fidelity captures only the information of the probability of any Pauli error happening [ This does not mean that average gate fidelity is insensitive to coherent errors, but rather that it is only partially so through their contribution to the diagonal elements in the ptm.].
Conversely, the unitarity of the Pauli channel is given by
u(E^) = (tr[E^ E^]-1)/(4^n-1)
= (∑_aλ_a^2-1)/(4^n-1)
= (4^n∑_aα_a^2-1)/(4^n-1),
which is minimal in the sense that, for a non-identity channel, it corresponds to purely stochastic noise, as it only involves the diagonal χ-matrix terms, i.e., the Pauli error rates that also enter the average gate fidelity. Clearly, when the distribution of errors is uniform, i.e., all α_i=4^-n so that the channel is maximally depolarizing, the unitarity vanishes. As opposed to average gate fidelity, the average unitarity of the raw channel E would capture all elements of its ptm. We can further upper-bound this as follows,
u(E^) = (4^n∑_aα_a^2-1)/(4^n-1)
≤ (4^n[α_0^2+(∑_b≠0α_b)^2]-1)/(4^n-1)
≤ (4^n[α_0^2+(1-α_0)^2]-1)/(4^n-1)
= ((2^nF(E)-1)/(2^n-1))^2 + (4^n-2)((1-F(E))/(2^n-1))^2,
so together with the lower bound f(E)^2 of <cit.>, we can bound the unitarity of the Pauli-twirled channel E^ as
f(E)^2 ≤ u(E^) ≤ f(E)^2 + ((2^2n-2)/(2^n-1)^2) r(E)^2,
where here f(E)=(2^nF(E)-1)/(2^n-1) is the noise-strength parameter in Eq. (<ref>) in the main text, and r(E):=1-F(E) is the average gate infidelity of E.
The lower bound in Ineq. (<ref>) saturates for depolarizing noise, i.e., when all non-identity Pauli error rates are the same, while the upper bound overestimates the unitarity for Pauli noise through cross-products of non-identity Pauli error rates. Thus, the upper bound serves as a proxy for the character of the noise: a measured unitarity above it signals coherent components, whereas one below it is consistent with purely Pauli noise.
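The bound can be exercised numerically. The sketch below (ours) draws random n-qubit Pauli channels, computes their average gate fidelity and unitarity from the Pauli error rates exactly as in the expressions above, and checks that the unitarity falls inside the interval [f^2, f^2 + (2^2n-2) r^2/(2^n-1)^2]; the Dirichlet sampling of error rates is an arbitrary choice made only for illustration.

import numpy as np

rng = np.random.default_rng(1)
n = 3
dim_pauli = 4 ** n

def symplectic_inner(a, b):
    # <a,b> mod 2 for 2n-bit strings with bits interleaved as (x1, z1, ..., xn, zn)
    return int(np.sum(a[0::2] * b[1::2] + a[1::2] * b[0::2]) % 2)

bitstrings = [np.array([(idx >> j) & 1 for j in range(2 * n)]) for idx in range(dim_pauli)]

for _ in range(5):
    alpha = rng.dirichlet(np.full(dim_pauli, 0.3))     # Pauli error rates, sum to 1 (tp)
    alpha[0] += 0.9
    alpha /= alpha.sum()                               # bias toward the identity Pauli
    lam = np.array([sum((-1) ** symplectic_inner(a, b) * alpha[ib]
                        for ib, b in enumerate(bitstrings)) for a in bitstrings])
    F = (2 ** n * alpha[0] + 1) / (2 ** n + 1)         # average gate fidelity
    u = (np.sum(lam ** 2) - 1) / (4 ** n - 1)          # unitarity from the PTM eigenvalues
    f, r = (2 ** n * F - 1) / (2 ** n - 1), 1 - F
    upper = f ** 2 + (4 ** n - 2) * r ** 2 / (2 ** n - 1) ** 2
    print(f"{f**2:.6f} <= {u:.6f} <= {upper:.6f}", f ** 2 <= u <= upper)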
Of course, in practice, we cannot directly twirl noisy quantum channels; we need a way of achieving the twirl by manipulating the noisy gates in question. Whenever the ideal (or target) gates are Clifford, a Pauli twirl can be enforced by a so-called G-twisted twirl <cit.>, where the outer Pauli operators in Eq. (<ref>) are acted on (or twisted) with the ideal Cliffords. This concept of a twisted twirl, together with employing random samples instead of all the 4^n possible Pauli operators, is at the core of so-called rc, where we would then effectively have
E^_N(·) := (1/N)∑_P∼P_n^N samplesPE(P·P)P,
where here P∼𝖯𝖺𝗎𝗅𝗂 means P are Pauli samples drawn uniformly at random with no repetition from P_n, and the sum runs over N≤4^n such samples. In practice, this randomized sum would be computed by measuring and then averaging many circuits where the noisy gate is twisted-twirled; considering that the noisy gate may be transpiled into single and two-qubit gates, the way this is usually done is by precisely compiling the single-qubit gates with the Paulis in each random sample, with the overall gate being logically exactly the same. Since also in practice the Pauli operators will be noisy, care has to be taken not to increase the overall number of gates that are executed. In general if the Paulis themselves have coherent noise, perfect diagonalization would be achieved strictly for N→∞ by sampling with repetition.
The corresponding ptm of E^_N(·) has components
(E^_N)_ab = (1/(N2^n))∑_q^N samples∑_c,dχ_cd tr[P_aP_qP_cP_qP_bP_qP_dP_q]
= (1/2^n)∑_c,d((1/N)∑_q^N samples(-1)^⟨q,c+d⟩) χ_cd tr[P_aP_cP_bP_d],
where now we can only approximate the property in Eq. (<ref>) with the N random samples of bit strings q: in the infinite sample limit, N→∞, or if each distinct bit string is sampled exactly once, the term within parentheses goes to δ_cd. Since ⟨q,2c⟩=0 for any pair of bit strings q and c, the sum over q when c=d equals 1 for any number of samples; this just reflects the fact that twirling (or rc) leaves the diagonal of the ptm invariant. Thus, we can write
(E^_N)_ab = E^_ab + ∑_c≠d((1/N)∑_q^N samples(-1)^⟨q,c+d⟩ χ_cd tr[P_aP_cP_bP_d]/2^n)
:= E^_ab + (1/N)∑_q^N samplesλ̃_ab^(q),
where here E^ as before, is the perfectly Pauli-twirled channel in Eq. (<ref>), and where we defined
λ̃_ab^(q):=∑_c≠d(-1)^⟨q,c+d⟩ χ_cd tr[P_aP_cP_bP_d]/2^n,
which correspond to the off-diagonal elements of the ptm upon the twirl action of an element P_q. The effect of N=1, a single randomization, will simply be to either change a sign of an off-diagonal element or leave it as it is. For an even number N of samples, the off-diagonals will either vanish or be suppressed in magnitude as integer multiples of 2/N (with ratio less than one). On the other hand, for odd N, all off-diagonals vanish in magnitude as integer multiples of 1/N.
Finally, then
u(E^_N) = u(E^) + (1/N^2)∑_q,q^'^N samples∑_a,bλ̃_ab^(q)λ̃_ba^(q^')/(4^n-1),
where similarly now, the off-diagonal contributions will vanish as (small) multiples of 1/N^2.
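The following single-qubit sketch (ours; the over-rotation angle and trial counts are arbitrary assumptions) illustrates the finite-N behaviour: randomized compiling with N Paulis drawn without repetition always preserves the ptm diagonal, removes the off-diagonals exactly once all 4^n Paulis are included, and suppresses them on average for intermediate N.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]
rng = np.random.default_rng(2)

def ptm(channel):
    # R_ab = (1/2) tr[P_a channel(P_b)]
    return np.real(np.array([[np.trace(pa @ channel(pb)) / 2
                              for pb in paulis] for pa in paulis]))

theta = 0.4                                   # coherent over-rotation about X
w = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X
E = lambda rho: w @ rho @ w.conj().T
diag_E = np.diag(ptm(E))

def mean_off_diagonal(N, trials=400):
    # average (over random draws without repetition) of the largest |off-diagonal| PTM entry
    offs = []
    for _ in range(trials):
        idx = rng.choice(4, size=N, replace=False)
        chan = lambda rho: sum(paulis[i] @ E(paulis[i] @ rho @ paulis[i]) @ paulis[i]
                               for i in idx) / N
        R_N = ptm(chan)
        assert np.allclose(np.diag(R_N), diag_E)     # the diagonal is always preserved
        offs.append(np.max(np.abs(R_N - np.diag(np.diag(R_N)))))
    return np.mean(offs)

for N in (1, 2, 3, 4):
    print(f"N={N}: mean largest off-diagonal = {mean_off_diagonal(N):.3f}")  # exactly 0 at N=4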
|
http://arxiv.org/abs/2409.03751v1 | 20240905175908 | Randomized Lower Bounds for Tarski Fixed Points in High Dimensions | [
"Simina Brânzei",
"Reed Phillips",
"Nicholas Recker"
] | cs.CC | [
"cs.CC",
"cs.GT"
] |
§ ABSTRACT
The Knaster-Tarski theorem, also known as Tarski's theorem, guarantees that every monotone function defined on a complete lattice has a fixed point.
We analyze the query complexity of finding such a fixed point on the
k-dimensional grid of side length n under the
≤ relation. Specifically, there is an unknown monotone function f: {0,1,…, n-1}^k →{0,1,…, n-1}^k and an algorithm must query a vertex v to learn f(v).
Our main result is a randomized lower bound of Ω( k + k ·logn/logk) for the k-dimensional grid of side length n, which is nearly optimal in high dimensions when k is large relative to n. As a corollary, we characterize the randomized and deterministic query complexity of the Tarski search problem on the Boolean hypercube {0,1}^k as Θ(k).
§ INTRODUCTION
The Knaster-Tarski theorem, also known as Tarski's theorem, guarantees that every monotone function f : ℒ→ℒ defined over a complete lattice (ℒ, ≤ ) has a fixed point.
Tarski proved the most general form of the theorem <cit.>:
Let (ℒ, ≤) be a complete lattice and let f : ℒ→ℒ be an order-preserving (monotone) function with respect to ≤. Then the set of fixed points of f in ℒ forms a complete lattice under ≤.
An earlier version was shown by Knaster and Tarski <cit.>, who established the result for the special case where ℒ is the lattice of subsets of a set (i.e. the power set lattice).
This is a classical theorem with broad applications. For example, in formal semantics of programming languages and abstract interpretation, the existence of fixed points can be exploited to guarantee well-defined semantics for a recursive algorithm <cit.>. In game theory, Tarski's theorem implies the existence of pure Nash equilibria in supermodular games <cit.>.
Surprisingly, it is not fully understood how efficiently Tarski fixed points can be found.
Formally, for k,n ∈ℕ, let ℒ_n^k = {0, 1, …, n-1}^k be the k-dimensional grid of side length n.
Let ≤ be the binary relation where for vertices a = (a_1, …, a_k) ∈ℒ_n^k and b = (b_1, …, b_k) ∈ℒ_n^k, we have a≤b if and only if a_i ≤ b_i for each i ∈ [k].
We consider the lattice (ℒ_n^k, ≤). A function f :ℒ_n^k →ℒ_n^k is monotone if a≤b implies that f(a) ≤ f(b).
Tarski's theorem states that the set P of fixed points of f is non-empty and that the system (P, ≤) is itself a complete lattice <cit.>.
In this paper, we focus on the query model, where there is an unknown monotone function f:ℒ_n^k →ℒ_n^k. An algorithm has to probe a vertex v in order to learn the value of the function f(v). The task is to find a fixed point of f by probing as few vertices as possible.
The randomized query complexity is the expected number of queries required to find a solution with a probability of at least 9/10 by the best algorithm [Any other constant greater than 1/2 would suffice.], where the expectation is taken over the coin tosses of the algorithm.
There are two main algorithmic approaches for finding a Tarski fixed point. The first approach is a divide-and-conquer method that yields an upper bound of O((logn)^⌈k+1/2⌉) for any fixed k due to <cit.>, which improves an algorithm of <cit.>.
The second is a path-following method that initially queries the vertex 0⃗ = (0, …, 0) and proceeds by following the directional output of the function.
With each function application, at least one coordinate is incremented, which guarantees that a fixed point is reached within O(nk) queries.
<cit.> proved a randomized query complexity lower bound of Ω(log^2(n)) on the 2D grid of side length n, which implies the same lower bound for the k-dimensional grid of side length n when k is constant. This lower bound shows that the divide-and-conquer algorithm is optimal for dimensions k=2 and k=3.
For dimension k ≥ 4, there is a growing gap between the best-known upper and lower bounds, since
the upper bound given by the divide-and-conquer algorithm has an exponential dependence on k. Meanwhile, the path-following method provides superior performance in high dimensions, such as the Boolean hypercube {0,1}^k, where it achieves an upper bound of O(k).
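For concreteness, a minimal sketch of the path-following method (ours, run here on a toy monotone instance; not code from the cited works) is:

def path_following(f, k):
    # start at the all-zeros vertex and repeatedly apply f; for a monotone f the
    # iterates are non-decreasing, so a fixed point is reached within O(nk) queries
    v = (0,) * k
    queries = 0
    while True:
        fv = f(v)
        queries += 1
        if fv == v:
            return v, queries
        v = fv

# toy monotone instance whose unique fixed point is the hidden vertex a
a = (3, 0, 2, 1)
toy_f = lambda v: tuple(vi + 1 if vi < ai else ai for vi, ai in zip(v, a))
print(path_following(toy_f, k=4))   # ((3, 0, 2, 1), 4)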
§.§ Our Contributions
Let
TARSKI(n,k) denote the Tarski search problem on the k-dimensional grid of side length n.
Let k,n ∈ℕ.
Given oracle access to an unknown monotone function f : ℒ_n^k →ℒ_n^k, find a vertex x ∈ℒ_n^k with f(x) = x using as few queries as possible.
Our main result is the following:
There is a constant c > 0 such that for all k, n ∈ℕ, the randomized query complexity of TARSKI(n,k) is at least c ·( k + k ·logn/logk).
The lower bound in <ref> is sharp for constant n ≥ 2 and nearly optimal in the general case when k is large relative to n.
<ref> gives a characterization of Θ(k) for the randomized and deterministic query complexity on the Boolean hypercube {0,1}^k, since the deterministic path-following method that iteratively applies the function starting from vertex 0⃗ = (0, …, 0) finds a solution within O(k) queries.
No lower bound better than Ω(1) was known for the Boolean hypercube.
The randomized and deterministic query complexity of TARSKI(2,k) is Θ(k).
We obtain <ref> by designing the following family of monotone functions.
For each a ∈ℒ_n^k,
we define a function f^a : ℒ_n^k →ℒ_n^k coordinate by coordinate. That is,
for each v =(v_1, …, v_k) ∈ℒ_n^k and i ∈ [k], let
f^a_i(v) =
v_i - 1 if (v_i > a_i) and (v_j ≤ a_j for all j<i)
v_i + 1 if (v_i < a_i) and (v_j ≥ a_j for all j<i)
v_i otherwise.
Let f^a(v) = (f^a_1(v), …, f^a_k(v)).
Define ℱ = {f^a | a ∈ℒ_n^k}.
The intuition is that the first digit that is too low and the first digit that is too high both get pushed towards their correct value.
An example of a function from Definition <ref> is shown in Figure <ref>.
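A direct transcription of Definition <ref> into code (our illustrative sketch) makes the "first too-high / first too-low" behaviour explicit:

def f_a(v, a):
    """Apply f^a from Definition <ref> to vertex v on the grid {0,...,n-1}^k."""
    k = len(v)
    out = list(v)
    # decrement the first coordinate that is too high (all earlier coordinates are <= a_j)
    for i in range(k):
        if v[i] > a[i] and all(v[j] <= a[j] for j in range(i)):
            out[i] = v[i] - 1
            break
    # increment the first coordinate that is too low (all earlier coordinates are >= a_j)
    for i in range(k):
        if v[i] < a[i] and all(v[j] >= a[j] for j in range(i)):
            out[i] = v[i] + 1
            break
    return tuple(out)

# the unique fixed point is a itself, and f^a changes at most two coordinates of v
a = (1, 0, 2)
print(f_a((2, 0, 0), a))  # -> (1, 0, 1)
print(f_a(a, a))          # -> (1, 0, 2), a fixed point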
For k=2, this construction is similar to the herringbone construction of <cit.>; however, our construction does not induce a lower bound of log^2(n) on the 2D grid since the shape of the path from (0,0) or (n-1,n-1) to the fixed point is too predictable.
The real strength of our construction emerges for large k, where the herringbone is not defined. Critically, a function f ∈ℱ has the property that f(v) differs from v in at most 2 dimensions for all v, no matter how large k is. This makes it difficult to derive information about more than a constant number of dimensions with a single query. The proof of <ref> makes this intuition precise.
§.§ Related Work
Tarski fixed points. Algorithms for the problem of finding Tarski fixed points on the k-dimensional grid of side length n have only recently been considered. <cit.> gave an O(log^k (n)) divide-and-conquer algorithm. <cit.> gave an O(log^2 (n)) algorithm for the 3D grid and used it to construct an O(log^2 ⌈ k/3 ⌉ (n)) algorithm for the k-dimensional grid of side length n. <cit.> extended their ideas to get an O(log^⌈ (k+1)/2 ⌉(n)) algorithm.
<cit.> showed a lower bound of Ω(log^2(n)) for the 2D grid, implying the same lower bound for the k-dimensional grid of side length n. This bound is tight for k = 2 and k=3, but there is an exponential gap for larger k. They also showed that the problem is in both PLS and PPAD, which by the results of <cit.> implies it is in CLS.
<cit.> give a black-box reduction from the Tarski problem to the same problem with an additional promise that the input function has a unique fixed point. This result implies that the Tarski problem and the unique Tarski problem have the same query complexity.
Next we briefly summarize query and communication complexity results for two problems representative for the classes PLS and PPAD, respectively. These problems are finding a local minimum and a Brouwer fixed point, respectively. In both cases, the existing lower bounds also rely on hidden path constructions.
Brouwer fixed points. In the Brouwer search problem, we are given a function f : [0,1]^d → [0,1]^d that is L-Lipschitz, for some constant L > 1. The task is to find a fixed point of f using as few queries to f as possible. The existence of a fixed point is guaranteed by Brouwer's fixed point theorem.
The query complexity of computing an -approximate Brouwer fixed point was studied in a series of papers starting with <cit.>, which introduced a construction where the function is induced by a hidden walk. This was later improved by <cit.> and <cit.>.
Local minima. In the local search problem, we are given a graph G = (V,E) and a function f : V →ℝ. A vertex v is a local minimum if f(v) ≤ f(u) for all (u,v) ∈ E. An algorithm can probe a vertex v to learn its value f(v). The task is to find a vertex that is a local minimum using as few queries as possible. <cit.> obtains a lower bound of Ω(2^k/2-o(k)) on the query complexity for the Boolean hypercube {0,1}^k by a random walk analysis.
Aldous' lower bound for the hypercube was later improved by
<cit.> to Ω(2^k/2/k^2) via a relational adversary method inspired from quantum computing.
<cit.> further improved this lower bound to Θ(2^k/2·√(k)) via a “clock”-based random walk construction.
Meanwhile, <cit.> developed a deterministic divide-and-conquer algorithm.
For the k-dimensional grid [n]^k,
<cit.> used the relational adversary method to show a randomized lower bound of Ω(n^k/2-1 / log n) for every constant k ≥ 3. <cit.> proved a randomized lower bound of Ω(n^k/2) for every constant k ≥ 4.
The work of <cit.> closed further gaps in the quantum setting as well as the randomized k=2 case.
For general graphs, <cit.> gave a quantum lower bound of Ω( √(s/Δ) / log(n) ), where n is the number of vertices of the graph, s is the separation number, and Δ the maximum degree.
<cit.> improved this lower bound to Ω( √(s/Δ)) for randomized algorithms and also obtained a randomized lower bound of Ω(n^1.5/g), where g is the vertex congestion of the graph.
<cit.> also gave an upper bound of O((s + Δ) ·log n), which was obtained by refining a divide-and-conquer algorithm of <cit.>.
<cit.> gave lower bounds for Cayley graphs as a function of the number of vertices and the diameter of the graph.
<cit.> obtained upper bounds as a function of the genus of the graph.
<cit.> studied the communication complexity of local search; this captures distributed settings, where data is stored on different computers.
§ PROPERTIES OF THE FAMILY OF FUNCTIONS ℱ
In this section we show that each function in the family ℱ of Definition <ref> is monotone and has a unique fixed point.
For each a ∈ℒ_n^k, the function f^a from Definition <ref> is monotone.
Consider two arbitrary vertices u,v ∈ℒ_n^k with u ≤ v.
Suppose towards a contradiction that f^a(u) ≤ f^a(v) does not hold.
Then there exists an index i ∈ [k] such that f^a_i(u) > f^a_i(v).
Since u ≤ v, we have u_i ≤ v_i, so at least one of f^a_i(u) > u_i or f^a_i(v) < v_i holds.
Case 1: f^a_i(u) > u_i.
By definition of f^a, we then have u_i < a_i and u_j ≥ a_j for all j<i.
Since u ≤ v, we also have v_j ≥ a_j for all j<i. Furthermore, we have v_i = u_i, or v_i = u_i+1, or v_i ≥ u_i + 2. We consider a few sub-cases:
* (v_i = u_i). Then f^a_i(v) = v_i + 1 = u_i + 1 = f^a_i(u).
* (v_i = u_i+1). Then v_i ≤ a_i, so f^a_i(v) ≥ v_i = u_i + 1 = f^a_i(u).
* (v_i ≥ u_i + 2).
Then f^a_i(v) ≥ v_i-1 ≥ u_i+1 = f^a_i(u).
In each subcase (a-c), we have f^a_i(v) ≥ f^a_i(u). This is in contradiction with f^a_i(u) > f^a_i(v), thus case 1 cannot occur.
Case 2: f^a_i(v) < v_i.
By definition of f^a, we then have v_i > a_i and v_j ≤ a_j for all j<i.
Since u ≤ v, we also have u_j ≤ a_j for all j<i.
Furthermore, we have u_i = v_i, or u_i = v_i - 1, or u_i ≤ v_i - 2. We consider a few sub-cases:
* (u_i = v_i). Then f^a_i(u) = u_i - 1 = v_i - 1 = f^a_i(v).
* (u_i = v_i - 1). Then u_i ≥ a_i, so f^a_i(u) ≤ u_i = v_i - 1 = f^a_i(v).
* (u_i ≤ v_i - 2). Then f^a_i(u) ≤ u_i + 1 ≤ v_i - 1 = f^a_i(v).
In each subcase (a-c), we have f^a_i(u) ≤ f^a_i(v). This in contradiction with f^a_i(u) > f^a_i(v), thus case 2 cannot occur either.
In both cases 1 and 2 we reached a contradiction, so the assumption that f^a(u) ≤ f^a(v) does not hold must have been false. Thus f^a is monotone.
For each a ∈ℒ_n^k, the function f^a ∈ℱ has a unique fixed point at a.
By definition of f^a we have f^a(a) = a, so a is a fixed point of f^a.
Let v ≠ a.
Then there exists i ∈ [k] such that v_i a_i. Let i be the minimum such index. We have two cases:
* (v_i < a_i):
Then f^a_i(v) = v_i + 1, so f^a(v) ≠ v.
* (v_i > a_i):
Then f^a_i(v) = v_i - 1, so f^a(v) ≠ v.
In both cases v is not a fixed point, so a is the only fixed point of f^a.
§ LOWER BOUNDS
§.§ Lower bound for the Boolean hypercube
Using the family of functions ℱ from <ref>, we can now prove a randomized lower bound of Ω(k) for the Boolean hypercube {0,1}^k.
The randomized query complexity of TARSKI(2,k) is Ω(k).
We proceed by invoking Yao's lemma.
Let 𝒰 be the uniform distribution over the set of functions ℱ.
Let 𝒜 be the deterministic algorithm with the smallest possible expected number of queries that succeeds with probability at least 4/5, where both the expected query count and the success probability are for input drawn from 𝒰.
The algorithm 𝒜 exists since there is a finite number of deterministic algorithms for this problem, so the minimum is well defined.
Let D be the expected number of queries issued by 𝒜 on input drawn from 𝒰.
Let R be the randomized query complexity of TARSKI(2,k); i.e. the expected number of queries required to succeed with probability at least 9/10.
Then Yao's lemma (<cit.>, Theorem 3) yields 2R ≥ D.
Therefore it suffices to lower bound D.
For each t ∈ℕ, let
* ℋ_t denote the history of queries and responses received at steps 1, …, t.
* ℐ_t = {i ∈ [k] | 𝒜 can infer a_i with certainty from ℋ_t}, i.e., the coordinates of a that 𝒜 has learned by the end of round t.
* v^t denote the t-th query submitted by 𝒜.
We prove by induction on t that ℐ_t is all the information that the algorithm 𝒜 has by the end of round t. Clearly ℐ_0 = ∅ since the algorithm has not made any queries so any input function is equally likely.
We assume the inductive hypothesis holds for t-1 and prove it for t.
We consider the indices where v^t has zeroes and divide into two cases; the analysis for indices where v^t has ones is symmetric. Initialize ℐ_t = ℐ_t-1. We explain in each case what new indices may enter ℐ_t.
* There exists i ∈ [k] such that ( v^t_i = 0 and f^a_i(v^t) = 1 ). This implies three things:
* a_i=1, so i is added to ℐ_t.
* For all j < i such that v^t_j = 0, it must be the case that a_j = 0; otherwise, the bit at index j would have been corrected to a 1 instead of the bit at index i getting corrected. Therefore, each such j is added to ℐ_t.
* The algorithm does not learn anything about the bits at locations j > i with v^t_j = 0, since regardless of the value of a_j, we have f^a_j(v^t) = 0. No such j is added to ℐ_t, though some may have already been in ℐ_t-1.
* There is no i ∈ [k] such that ( v^t_i = 0 and f^a_i(v^t) = 1 ). Then for all j ∈ [k] such that v_j^t = 0, it must have been the case that a_j = 0. Therefore, all such j are added to ℐ_t.
In either case (i) or (ii), the value of f^a_i(v^t) for indices i ∈ [k] with v^t_i = 0 gives 𝒜 no information about indices j ∈ [k] with v^t_j = 1, as the value of f^a(v^t) at such j depends only on other indices ℓ∈ [k] where v^t_ℓ = 1.
Thus for each index j ∈ [k], either j ∈ℐ_t (and so the algorithm 𝒜 knows the bit a_j with certainty) or the posterior for a_j remains the uniform distribution. This completes the inductive step.
Next we argue that the expected number of bits learned with each query is upper bounded by a constant, that is, 𝔼[|ℐ_t| - |ℐ_t-1| |ℋ_t-1] ≤ 4.
For b ∈{0, 1}, let c_t(b) be the index of the bit where v_c_t(b)^t = b and f^a_c_t(b)(v^t) = 1-b, or c_t(b) = ∞ if no such index exists. Then define Δ_t(b) as:
Δ_t(b) = { i | i ∉ℐ_t-1, v_i^t = b, i ≤ c_t(b)} .
In other words Δ_t(b) is precisely the set of indices i ∈ [k] added to ℐ_t (which were not in ℐ_t-1) with the property that v_i^t = b.
We therefore have
Δ_t(0) ∪Δ_t(1) = ℐ_t ∖ℐ_t-1 .
The distribution of |Δ_t(b)| can be bounded effectively. For any C ∈ℕ, we have |Δ_t(b)| > C only if the first C indices i ∈ [k] with the property that both i ∉ℐ_t-1 and v_i^t = b are guessed correctly, i.e. a_i = b.
Therefore:
ℙ[|Δ_t(b)| > C |ℋ_t-1] ≤ 2^-C .
We can bound the expected value of |Δ_t(b)| as:
𝔼[|Δ_t(b)| |ℋ_t-1] = ∑_C=0^∞ℙ[|Δ_t(b)| > C |ℋ_t-1] By Lemma <ref>
≤∑_C=0^∞ 2^-C By (<ref>)
= 2 .
Using (<ref>) and (<ref>), we get:
𝔼[|ℐ_t| - |ℐ_t-1| |ℋ_t-1] = 𝔼[|Δ_t(0)| |ℋ_t-1] + 𝔼[|Δ_t(1)| |ℋ_t-1] ≤ 2 + 2 = 4 .
Since the upper bound of 4 applies for all histories ℋ_t-1, taking expectation over all possible histories gives:
𝔼[|ℐ_t| - |ℐ_t-1|] ≤ 4 ∀ t ∈ℕ .
Since |ℐ_0| = 0, inequality (<ref>) implies 𝔼[|ℐ_t|] ≤ 4t. Thus 𝔼[|ℐ_t|] < 5t for all t ∈ℕ^*.
Let T = k/100.
Then by Markov's inequality applied to the random variable ℐ_T, we have
ℙ[|ℐ_T| ≥ k] ≤𝔼[|ℐ_T|]/k < 5T/k = 1/20 .
When |ℐ_T| < k, algorithm 𝒜 makes an error with probability at least 1/2 since it would have to guess at least one bit of a. Since 𝒜 succeeds with probability at least 4/5, we must have |ℐ_T| = k with probability at least 3/5.
Thus 𝒜 issues more than k/100 queries with probability at least 1 - 1/20 - 2/5 = 11/20.
Therefore the expected number of queries issued by 𝒜 is D > (11/20) · (k/100) ∈Ω(k).
§.§ Lower bound for the k-dimensional grid of side length n
In this section we show the randomized lower bound of Ω(k) also holds for TARSKI(n,k). Afterwards, we prove the construction from <ref> also yields a lower bound of Ω(k logn/log(k)) for the k-dimensional grid of side length n.
The randomized query complexity of TARSKI(n,k) is at least
the randomized query complexity of TARSKI(2,k).
We show a reduction from TARSKI(2,k) to TARSKI(n,k).
Let f^* : {0,1}^k →{0,1}^k be an arbitrary instance of TARSKI(2,k). As such, f^* is monotone.
Let g : {0,1,…,n-1}^k →{0,1}^k be the clamp function, given by
g(v) = (g_1(v), …, g_k(v)), g_i(v) = min(v_i, 1) ∀ i ∈ [k] .
Then define f : {0,1,…,n-1}^k →{0,1,…,n-1}^k as f(v) = f^*(g(v)).
To show that f is monotone, let u,v ∈{0,1,…,n-1}^k be arbitrary vertices with u ≤ v. Then g(u) ≤ g(v), so f(u) = f^*(g(u)) ≤ f^*(g(v)) = f(v) by monotonicity of f^*. Thus f is monotone and has a fixed point.
Every vertex u ∈{0,1,…,n-1}^k ∖{0,1}^k is not a fixed point of f, since f(u) ∈{0,1}^k. Every fixed point u ∈{0,1}^k of f is also a fixed point of f^* since g(u) = u for all u ∈{0,1}^k. Therefore all fixed points of f are also fixed points of f^*.
A query to f may be simulated using exactly one query to f^* since computing g does not require any knowledge of f^*.
Therefore any algorithm that finds a fixed point of f can also be used to find a fixed point of f^* in the same number of queries. Therefore the randomized query complexity of TARSKI(n,k) is greater than or equal to the randomized query complexity of TARSKI(2,k).
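The reduction can be summarized in a few lines of code (our sketch of the argument above; f_star stands for the TARSKI(2,k) oracle):

def clamp(v):
    # g clamps each coordinate of the n-ary grid vertex into {0, 1}
    return tuple(min(vi, 1) for vi in v)

def make_f(f_star):
    # f(v) = f_star(g(v)): monotone, and every fixed point lies in {0,1}^k
    return lambda v: f_star(clamp(v))

# Running any TARSKI(n,k) solver on make_f(f_star) uses exactly one call to f_star
# per query and returns a fixed point of f_star.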
Applying <ref> to <ref> directly gives the following corollary.
The randomized query complexity of TARKSI(n,k) is Ω(k).
We also get the following lower bound.
The randomized query complexity of TARSKI(n,k) is Ω(klog(n)/log(k)).
We proceed by invoking Yao's lemma.
Let 𝒰 be the uniform distribution over the set of functions ℱ.
Let 𝒜 be the deterministic algorithm with the smallest possible expected number of queries that succeeds with probability at least 4/5, where both the expected query count and the success probability are for input drawn from 𝒰.
The algorithm 𝒜 exists since there is a finite number of deterministic algorithms for this problem, so the minimum is well defined.
Let D be the expected number of queries issued by 𝒜 on input drawn from 𝒰.
Let R be the randomized query complexity of TARSKI(n,k); i.e. the expected number of queries required to succeed with probability at least 9/10.
Then Yao's lemma (<cit.>, Theorem 3) yields 2R ≥ D.
Therefore it suffices to lower bound D.
For each vertex v ∈{0, …, n-1}^k, let Q_v be the set of possible outputs when plugging in v:
Q_v = {f^a(v) | a ∈{0,1,…,n-1}^k} .
We next bound |Q_v|.
By the definition of f^a, the vertex f^a(v) differs from v in at most two coordinates: the first i ∈ [k] such that v_i > a_i (if any) and the first j ∈ [k] such that v_j < a_j (if any). Each of i and j have k+1 options, corresponding to the k dimensions and the possibility that no such dimension exists. Therefore
|Q_v| ≤ (k+1)^2 .
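As a quick sanity check of this bound (not part of the proof), one can enumerate Q_v by brute force for small grids, using the f_a transcription given earlier:

from itertools import product

def f_a(v, a):  # as in the earlier sketch of Definition <ref>
    out = list(v)
    for i in range(len(v)):
        if v[i] > a[i] and all(v[j] <= a[j] for j in range(i)):
            out[i] -= 1; break
    for i in range(len(v)):
        if v[i] < a[i] and all(v[j] >= a[j] for j in range(i)):
            out[i] += 1; break
    return tuple(out)

n, k = 3, 3
worst = max(len({f_a(v, a) for a in product(range(n), repeat=k)})
            for v in product(range(n), repeat=k))
print(worst, "<=", (k + 1) ** 2)   # worst-case |Q_v| versus the bound 16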
Recall that 𝒜 is defined to be the best deterministic algorithm that succeeds on 𝒰 with probability at least 4/5.
Since 𝒰 is uniform over ℱ, there must exist at least (4/5) · n^k inputs on which 𝒜 outputs a fixed point.
Then the decision tree of 𝒜 must have at least (4/5) · n^k leaves since all supported inputs have different and unique fixed points.
Every node of this tree has at most (k+1)^2 children, since |Q_v| ≤ (k+1)^2 for all v by (<ref>).
Therefore the average depth of the leaves is at least
log_(k+1)^2((4/5)n^k ) - 1 = log_2((4/5) n^k)/log_2((k+1)^2) - 1 ≥ (k log_2 (n) -1)/(2log_2 (k) + 2) - 1 ∈Ω(k log n/log k) .
Then on input distribution 𝒰, algorithm 𝒜 issues an expected number of queries of D ∈Ω(k log n/log k).
Thus the randomized query complexity of TARSKI(n,k) is Ω(k log n/log k) as required.
The proof of <ref> follows from <ref> and <ref>.
The randomized query complexity of TARSKI(n,k) is
Ω(k) by <ref> and
Ω(k logn/log(k)) by <ref>.
This implies a lower bound of Ω( k + k logn/log(k)) as required.
§ UPPER BOUND FOR THE FAMILY OF FUNCTIONS ℱ
It may seem that the family of instances ℱ from <ref> should in fact yield a lower bound of Ω(k logn). Intuitively, k logn may be achievable since the only feedback an algorithm receives with each query on an input from ℱ is whether the query is too high or too low on roughly two coordinates.
However, this intuition turns out to be incorrect when n is close to k.
The next proposition shows that the family of functions ℱ cannot yield a lower bound of Ω(k logn) for all k and n.
There is a deterministic O(k + n)-query algorithm for TARSKI(n, k) on the set of functions ℱ.
For each coordinate i ∈ [k] and each query index t, let x^t_i and y^t_i be the minimum and maximum possible values of a_i given the first t queries the algorithm makes. For example, x^0_i = 0 and y^0_i = n-1 for all i ∈ [k], and the algorithm finishes in T queries if x^T_i = y^T_i for all i ∈ [k].
If the algorithm has not finished by its (t+1)-th query, it queries the vertex with coordinates:
v^t+1_i =
x^t_i if x^t_i = y^t_i
x^t_i + 1 otherwise
There are two possible outcomes from each such query:
* Some coordinate j satisfies f^a_j(v^t+1) = v^t+1_j - 1. This immediately identifies a_j = x^t_j, as a_j ≥ x^t_j and a_j < x^t_j + 1. Since there are only k coordinates to learn, this case can occur for at most k queries before the algorithm terminates.
* No coordinate j satisfies f^a_j(v^t+1) = v^t+1_j - 1. Then, for each i such that x^t_i < y^t_i, the possibility of a_i = x^t_i is ruled out. Since each coordinate only has n possible values, this case can occur for at most n queries before the algorithm terminates.
Therefore, this algorithm terminates within O(k+n) queries.
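A sketch of this procedure in code (ours), written against a query oracle for the hidden a, reads as follows; x and y maintain the running lower and upper bounds on each a_i.

def solve_family_F(query, n, k):
    """query(v) must return f^a(v) for a hidden a in {0,...,n-1}^k (family F above)."""
    x, y = [0] * k, [n - 1] * k            # invariant: x[i] <= a_i <= y[i]
    t = 0
    while any(x[i] != y[i] for i in range(k)):
        v = tuple(x[i] if x[i] == y[i] else x[i] + 1 for i in range(k))
        fv = query(v)
        t += 1
        dec = [i for i in range(k) if fv[i] == v[i] - 1]   # at most one such coordinate
        if dec:
            for i in dec:                  # outcome 1: a_i is pinned to x[i]
                y[i] = x[i]
        else:
            for i in range(k):             # outcome 2: rule out a_i = x[i] where unknown
                if x[i] != y[i]:
                    x[i] += 1
    return tuple(x), t

# e.g. with the f_a sketch from earlier:  solve_family_F(lambda v: f_a(v, a), n, k)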
§ DISCUSSION
It would be interesting to characterize the query complexity of the Tarski search problem as a function of the grid side-length n and dimension k.
§ ACKNOWLEDGEMENTS
We are grateful to Davin Choo and Kristoffer Arnsfelt Hansen for useful discussions.
alpha
§ A FOLK LEMMA
Let X be a random variable that takes values in ℕ. Then
𝔼[X] = ∑_C = 0^∞ ℙ[X > C] .
By the definition of 𝔼[X]:
𝔼[X] = ∑_n=0^∞ n ·ℙ[X = n] = ∑_n=0^∞∑_C=1^n ℙ[X = n] = ∑_C=1^∞∑_n=C^∞ℙ[X = n] = ∑_C=1^∞ℙ[X ≥ C] = ∑_C=0^∞ℙ[X > C] .
|
http://arxiv.org/abs/2409.02286v1 | 20240903204741 | The Wanderer: Charting WASP-77A b's Formation and Migration Using a System-Wide Inventory of Carbon and Oxygen Abundances | [
"David R. Coria",
"Neda Hejazi",
"Ian J. M. Crossfield",
"Maleah Rhem"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.SR"
] |
David R. Coria (ORCID 0000-0002-1221-5346), Neda Hejazi (ORCID 0000-0001-5541-6087), Ian J. M. Crossfield, and Maleah Rhem
Department of Physics & Astronomy, University of Kansas, Lawrence, KS, USA
Email: [email protected]
§ ABSTRACT
The elemental and isotopic abundances of volatiles like carbon, oxygen, and nitrogen may trace a planet’s formation location relative to H_2O, CO_2, CO, NH_3, and N_2 “snowlines”, or the distance from the star at which these volatile elements sublimate. By comparing the C/O and ^12C/^13C ratios measured in giant exoplanet atmospheres to complementary measurements of their host stars, we can determine whether the planet inherited stellar abundances from formation inside the volatile snowlines, or non-stellar C/O and ^13C enrichment characteristic of formation beyond the snowlines. To date, there are still only a handful of exoplanet systems where we can make a direct comparison of elemental and isotopic CNO abundances between an exoplanet and its host star. Here, we present a ^12C/^13C abundance analysis for host star WASP-77A (whose hot Jupiter's ^12C/^13C abundance was recently measured). We use MARCS stellar atmosphere models and the radiative transfer code TurboSpectrum to generate synthetic stellar spectra for isotopic abundance calculations. We find a ^12C/^13C ratio of 51± 6 for WASP-77A, which is sub-solar (∼ 91) but may still indicate ^13C-enrichment in its companion planet WASP-77A b (^12C/^13C = 26 ± 16, previously reported). Together with the inventory of carbon and oxygen abundances in both the host and companion planet, these chemical constraints point to WASP-77A b's formation beyond the H_2O and CO_2 snowlines and provide chemical evidence for the planet’s migration to its current location ∼0.024 AU from its host star.
§ BACKGROUND
§.§ Refractory Abundances and the Star-Planet Connection
Stellar composition has long been implicated in planet formation: after all, stars and planets form out of the same disk of gas and dust albeit through different mechanisms. Different protoplanetary disk compositions and planet formation mechanisms have diverse outcomes which produce rocky terrestrial planets, super-Earths, sub-Neptunes, and even gas giants like Jupiter and Saturn. Since stellar atmospheres evolve slowly, the elemental abundances of exoplanet hosts tend to reflect the composition of their planet-forming disks <cit.> and have the potential to yield constraints on planet formation processes and, in turn, even the physical properties of exoplanets themselves <cit.>. One of the best known links between stellar abundances and exoplanets is stellar metallicity. The occurrence rate of close-in (<1 au) Jupiter-class planets is increased around higher metallicity stars <cit.>. This relation is interpreted as support for the core accretion model: protoplanetary disks around more metal-rich stars often have higher masses and metal contents <cit.> and can sustain solids for longer periods <cit.>, which allows giant planets to form more efficiently. Planet formation is therefore predicted to be unlikely in metal-poor environments lacking sufficient metals to form planetary cores and kick-start accretion <cit.>. The relation with host-star metallicity, though weaker, has also been observed for sub-Neptunes and Super-Earths <cit.>. Besides stellar metallicity, there is also a correlation between the abundances of individual refractory elements like Mg, Si, Al, and Ti and planetary occurrence rates <cit.>. For example, giant-planet host stars exhibit higher refractory-to-iron abundance ratios than non-host stars. In Neptune-class planet hosts, there is a notable increase in [Ti/Fe], [Si/Fe], [Al/Fe], and [Mg/Fe] compared to non-host stars.
Refractory-to-volatile ratios also become useful in tracing giant exoplanet formation and migration for the hot and ultra-hot Jupiter-class exoplanets <cit.>. In these planets with equilibrium temperatures greater than 2000 K, refractory elements like Fe, Mg, and Si are not condensed, except perhaps on the night side, but rather gaseous and observable in the planet's atmosphere <cit.>. Using these atmospheric refractory-to-volatile abundance ratios, we may infer a planet's rock-to-ice fraction and constrain planet formation and migration scenarios <cit.>. Furthermore, formation of a planet beyond the water snowline followed by inward migration results in excess accretion of oxygen-poor, refractory-rich material leading to super-stellar alkali metal abundances but stellar water abundances in the planet's atmosphere <cit.>.
§.§ Volatile Element Abundances as Formation Diagnostics
Similarly, the stellar abundances of volatile elements like H, C, N, O, and S may shed light on giant exoplanet formation mechanisms, planet composition and atmospheric evolution <cit.>. Stellar carbon and oxygen abundances are particularly important formation diagnostics because of their influence on exoplanet ice, gas, and rock chemistry. Formation mechanisms such as gravitational instability for brown dwarfs and core accretion for Jupiter-class exoplanets each leave behind unique carbon and oxygen signatures in the atmospheres of these sub-stellar objects. Formation via gravitational instability occurs more rapidly than core accretion, on sub-Myr timescales, which results in brown dwarfs with stellar to super-stellar C/O ratios and abundances characteristic of their progenitor molecular clouds, similar to binary star systems <cit.>. Core accretion, on the other hand, is a much slower process, occurring on Myr timescales, which allows for protoplanets to incorporate varying quantities of gas and solids into their atmospheres, potentially resulting in a wide range of atmospheric metallicities and C/O ratios, but maintaining sub-stellar carbon and oxygen abundances <cit.>.
Multiple studies also explore the possibility of using elemental abundance ratios like the carbon-to-oxygen ratio <cit.> to constrain a planet's formation location relative to “snowlines", or the distance from a star where volatile molecules like H_2O, CH_4, CO, and CO_2 condense into ice grains. When a planet forms within the H_2O and CO snowlines, like Jupiter and Saturn for example, the planet is expected to inherit the same C/O ratio as its host star. When the planet forms outside the volatile snowlines, there are two main scenarios that affect the volatile abundances of these planets: (1) the planet migrates before dissipation of the protoplanetary disk resulting in atmospheric C/O ratios < 0.5 and super-solar metallicities due to the accretion of oxygen-rich planetesimals or (2) the planet migrates after dissipation of the protoplanetary disk resulting in atmospheric C/O ratios ∼ 1 and sub-solar metallicities <cit.>. See also <cit.> for an in-depth analysis on how inward drifting evaporating pebbles affect volatile abundances in exoplanet atmospheres along with a couple example giant exoplanet formation scenarios.
Though precise stellar abundances provide useful context for interpreting the atmospheric composition of their companion exoplanets <cit.>, there is still a need to investigate the connection between atmospheric composition, protoplanetary disk chemistry, and planetary formation inferences <cit.>. After all, the chemical composition of giant planets and their atmospheres are greatly affected by the accretion of gas and solids as they form and migrate to their final locations, deviating significantly from abundances of the protoplanetary disk <cit.>. Comparisons of the atmospheric abundances of exoplanets to planet formation simulations provide additional, complementary constraints on planet formation pathways. The condensation sequence of H_2O, CO_2, CH_4, and CO results in an increase of the gas phase C/O ratio with radial distance from the host star, which makes the planetary C/O ratio an ideal signature of a planet's accretion history <cit.>. Besides C/O, other volatile ratios like C/N, N/O, and S/N appear to follow monotonic trends with the extent of migration– the deviation from stellar values increases with the extent of disk-driven migration <cit.>. By measuring these volatile element abundances in an exoplanet's atmosphere, comparing them to the host star's abundances as well as planet formation models, we may predict how and where in the disk the planet formed and infer subsequent migration.
§.§ Isotopologue Abundance Ratios as Formation Diagnostics
In the era of the James Webb Space Telescope (JWST), ground-based 8m- class telescopes, upcoming 30m- class telescopes, and ultra high-resolution spectroscopy, we are suddenly sensitive to the signatures of much weaker molecular absorption lines in dwarf stars, brown dwarfs and even exoplanets. In addition to the major isotopologues of CO, CH, H_2O, NH_3, we are now detecting minor isotopologues such as ^13C^16O, ^12C^18O, ^13CH, H_2^17O, H_2^18O, and ^15NH_3 which allow for the measurements of ^12C/^13C, ^14N/^15N, ^16O/^18O, and ^17O/^18O isotopologue ratios. The most widely observed of the CNO isotope ratios is the ^12C/^13C ratio. Measurements have been made in young stellar objects <cit.>, solar twin stars <cit.>, M dwarf stars <cit.>, brown dwarfs (e.g. L dwarf 2M0355, T dwarf, 2M0415, M dwarf HD 984 B, and several others) <cit.>, short-orbit hot-Jupiters (e.g. WASP-77A b and HD 189733 b) <cit.> and wide-orbit Jupiters (e.g. TYC-8998 b and VHS 1256 b) <cit.> using instruments and observatories such as VLT/CRIRES, ESO/HARPS, NASA IRTF/iSHELL, Keck/KPIC, Keck/NIRSpec, Gemini/IGRINS, and JWST/NIRSpec.
These isotopic ratios are not only useful in constraining Galactic Chemical Evolution (GCE) models <cit.>, but also in understanding planet formation pathways and accretion histories <cit.>. The first study investigating carbon isotope fractionation in protoplanetary disks finds that the ^12C/^13C ratio of a system varies with radius and height in the disk <cit.>. Complementary observations and measurements of the ^12C/^13C ratio in solar system objects provide a hint of ^13C-enrichment of the ice giants located beyond our Sun's CO snowline <cit.>. Since then, observations of ^12C/^13C in protoplanetary disks have unveiled different isotopic reservoirs that may imprint their signatures in exoplanet atmospheres <cit.>. Beyond the CO snowline at around 20 AU for sun-like stars, <cit.> finds one reservoir enriched in ^12C composed mainly of methane/hydrocarbon ices and a second, ^13C-enriched reservoir dominated by gaseous CO. <cit.> instead proposes ^13C-enriched ice, found beyond the CO snowline, as a source for observed giant planet ^13C enrichment.
Similar to elemental carbon and oxygen abundances, different formation mechanisms also produce signature atmospheric ^12C/^13C ratios. The sub-Myr timescales of formation via gravitational instability result in brown dwarfs with solar to super-solar ^12C/^13C ratios while the longer, Myr timescales of formation via core accretion allow for protoplanets to accrete ^13C-rich ice/gas. Although there are few planetary growth models that incorporate isotopolgue ratios, this particular pattern is arising from ^12C/^13C ratio measurements made in brown dwarfs (e.g. L dwarf 2M0355, T dwarf, 2M0415, M dwarf HD 984 B) <cit.>, and lower-mass super-Jupiters (e.g. WASP-77A b, HD 189733 b, TYC-8998 b and VHS 1256 b) <cit.>. Furthermore, the ice-gas partitioning of these carbon isotopes allows for the ^12C/^13C ratio to be used as a tracer of a planet's formation location and migration. Planets forming within the CO snowline are expected to inherit the host star's ^12C/^13C ratio while those forming beyond the CO snowline ( 20 AU) are expected to accrete ^13C-rich ice/gas which lower the planet's ^12C/^13C ratio. The formation scenarios and corresponding C/O and ^12C/^13C ratios outlined above assume in-situ formation. However, if there is a mismatch between a planet's C/O ratio, ^12C/^13C ratio, its current location, and the host star's C/O and ^12C/^13C ratio, this could be an indicator for planetary migration. Recent observations of both hot Jupiters (e.g. WASP-77A b and HD 189733 b) <cit.> and wide-orbit, directly imaged planets (e.g. VHS 1256 b and TYC 8998-760-1 b) <cit.> reveal similar ^13C enrichment, supporting the hypothesis that hot Jupiters likely did not form in situ.
To test how planetary ^12C/^13C ratios change with distance with their host stars and relative to various volatile snowlines, it is necessary to measure volatile abundances in three types of targets: (1) planets located inside the CO snowline, (2) planets located outside the CO snowline, and (3) exoplanet host stars. Jupiter-class planets are the most amenable to these system-wide abundance surveys. Short-period super-Jupiters close to their host stars (within the CO snowline) can have their C/O and ^12C/^13C ratios measured using transmission spectroscopy (as in <cit.>) while bright super-Jupiters outside the CO snowline, like TYC-8998-760-1 b <cit.>, can have their C/O and ^12C/^13C ratios measured using a combination of spectroscopy and direct imaging techniques <cit.>. Although FGK-type host stars routinely have their abundances measured using standard methods <cit.>, cooler mid- to late- type K and M dwarfs pose a bigger challenge due to their intrinsic faintness and overwhelming molecular absorption in their spectra. However, with high-resolution, high signal-to-noise spectra and a careful analysis we may successfully measure abundances in these stars as well <cit.>. Contemporary researchers use both the stellar and planetary abundances to model exoplanet atmospheres, to infer the structure and composition of the exoplanet's interior, and to understand how atmospheres and interiors co-evolve over time <cit.>.
In this paper, we focus on the complementary stellar and planetary isotopic abundances of the volatile elements: carbon and oxygen. We present the ^12C/^13C ratio measured in exoplanet host star WASP-77A using optical CH absorption features. We then compare archival carbon and oxygen abundances and our novel ^12C/^13C ratio in WASP-77A to those in its companion, the hot-Jupiter WASP-77A b. Our isotope ratio measurement corroborates previous findings from stellar and planetary elemental abundance analyses that favor formation beyond the H_2O and CO_2 snowlines, rather than an in-situ formation, for WASP-77A b.
In Section <ref>, we provide an overview of three previous elemental abundance analyses for the host star (WASP-77A) with data from Keck/HIRES, MPG/ESO/FEROS, and ARC/ARCES. We also summarize the constraints placed on the companion planet's (WASP-77A b) C/O and ^12C/^13C abundance ratios derived from thermal emission spectroscopy. WASP-77A b is a well-studied hot Jupiter, so we include the companion's abundance measurements using data from three facilities: Gemini/IGRINS, HST/WFC3, and JWST/NIRSpec.
Section <ref> provides details on our Keck/HIRES and VLT/ESPRESSO spectra, and our methodology for determining ^12C/^13C for the host star WASP-77A. In Section <ref>, we discuss WASP-77A b's formation scenario and the possible migration mechanisms that may have transformed a once cool-Jupiter to the hot-Jupiter we see today. Finally, in Section <ref>, we summarize the results of our stellar ^12C/^13C analysis and discuss the prospects for measuring more complementary stellar isotopologue ratios.
§ PREVIOUS OBSERVATIONS OF THE WASP-77 SYSTEM
The WASP-77 system is a visual binary otherwise known as BD -07 436. The primary star (WASP-77 A) is a moderately bright, solar metallicity G8V star (V_mag = 10.3) and the secondary star (WASP-77 B) is a fainter K dwarf at a separation of approximately 3 arcseconds. <cit.> discovered a transiting, tidally-locked planetary-mass companion (WASP-77A b) to the primary with an orbital period of 1.36 days. WASP-77A b is slightly larger than Jupiter with a mass of 1.76 ± 0.06 M_Jup and radius of 1.21 ± 0.02 R_Jup and rather hot with an effective temperature of 1740 K <cit.>. Both WASP-77 A and its companion planet are well-studied, well-characterized, and prime targets for chemical composition analyses as summarized below.
§.§ Host Star: WASP-77A High-resolution Spectroscopy with Keck/HIRES, MPG/ESO/FEROS, & ARC/ARCES
<cit.> uses high-resolution, high signal-to-noise optical spectra from Keck/HIRES and KeckSpec and a machine-learning-based tool to derive fundamental stellar parameters and abundances for 25 prime JWST target exoplanet host stars. Please refer to Table <ref> for their adopted effective temperature, surface gravity, and [Fe/H]. Although they provide stellar abundances for 15 elements, here we include only the constraints placed on WASP-77A's carbon, oxygen, and [Fe/H] abundances. They report stellar [C/H] = -0.02 ± 0.05, [O/H] = 0.06 ± 0.07, C/O = 0.46 ± 0.09, and [Fe/H] = +0.01 ± 0.03.
<cit.> works with an MPG/ESO/FEROS spectrum (S/N = 116) to derive chemical abundances of elements most informative to planet formation and composition (C, N, O, Na, Mg, Si, S, K, and Fe). Here, we include only the stellar [C/H] = -0.04 ± 0.04, [O/H] = -0.04 ± 0.04, C/O = 0.59 ± 0.08, and [Fe/H] = -0.15 ± 0.06. Again please refer to Table <ref> for the adopted stellar parameters. Notice that although they measure carbon and oxygen abundances consistent with solar values, they do, however, find a slightly subsolar [Fe/H].
<cit.> also determined the effective temperature, surface gravity, and [Fe/H] for WASP-77A using an optical spectrum (from ARC/ARCES at the Apache Point Observatory), ATLAS9 model atmospheres <cit.>, the MOOG radiative transfer code <cit.>, and the isochrones package <cit.>. Please refer to Table <ref> for their adopted stellar parameters. They infer elemental abundances for WASP-77A using the equivalent width method and find [C/H] = 0.1 ± 0.09, [O/H] = 0.23 ± 0.02, and C/O = 0.44 ± 0.07. These abundances are mostly consistent with previous publications: <cit.>, <cit.>, and <cit.>. These abundances for WASP-77A b and its host star are indicative of formation beyond the H_2O snowline, rather than in situ. We discuss this further in Section <ref>.
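For reference, the quoted C/O ratios follow from the bracket abundances via (C/O)_star = (C/O)_⊙ · 10^([C/H]-[O/H]). The short sketch below (ours) uses an assumed solar scale of log ε(C)=8.46 and log ε(O)=8.69, so small offsets from the published values, which depend on each study's adopted solar reference, are expected.

def c_to_o(c_h, o_h, log_eps_c_sun=8.46, log_eps_o_sun=8.69):
    # convert [C/H] and [O/H] into a stellar C/O number ratio, given a solar scale
    co_sun = 10 ** (log_eps_c_sun - log_eps_o_sun)   # ~0.59
    return co_sun * 10 ** (c_h - o_h)

for label, c_h, o_h in [("Keck/HIRES study", -0.02, 0.06),
                        ("FEROS study", -0.04, -0.04),
                        ("ARCES study", 0.10, 0.23)]:
    print(f"{label}: C/O ~ {c_to_o(c_h, o_h):.2f}")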
§.§ Differences Between Abundance Measurements
The relevant stellar parameters and abundances discussed above are shown in Table <ref>. All three studies above find a slightly subsolar iron abundance for WASP-77A and a solar to slightly subsolar C/O ratio. Though mostly consistent, there are a few notable discrepancies in effective temperature, [Fe/H], carbon and oxygen abundance values. Oxygen abundances are notoriously difficult to determine, particularly when using optical spectra where there is a shortage of detectable atomic oxygen lines with both observational and theoretical complications <cit.>. In this case, however, the differences are likely due to the slightly different T_eff and [Fe/H] values adopted.
Generally, significant differences in abundance measurements can be caused by a number of factors including the choice of input stellar parameters (T_eff, log g, [Fe/H]), the selected stellar model atmospheres, the radiative transfer code used for generating synthetic spectra, and the preferred atomic and molecular line lists. We refer the reader to <cit.> Section 5.2 and the references therein for a more detailed discussion of abundance discrepancies. For future isotopologue analyses, this means that an elemental abundance fit prior to measuring isotopic abundances is necessary to ensure the best possible fit between the base model spectrum and the observed spectrum as this is the starting point for generating the ^12C/^13C model grid.
§.§ WASP-77A b Thermal Emission Spectroscopy
WASP-77A b is an excellent candidate for transit spectroscopy and is in fact one of the highest signal-to-noise ratio planets for thermal emission measurements in the near-infrared <cit.>, making this hot Jupiter an ideal target for atmospheric characterization.
<cit.> provides secondary eclipse spectroscopy of WASP-77A b. The single 4.7 hour time-series sequence was taken with Gemini-South/IGRINS <cit.> on December 14, 2020. The resulting spectrum is high-resolution (R ∼ 45,000) with wavelength coverage spanning 1.43 - 2.42 μm.
The IGRINS analysis reports a planetary [C/H] = -0.46^+0.17_-0.16, [O/H] = -0.49^+0.14_-0.12, and a carbon-to-oxygen ratio C/O = 0.59 ± 0.08. More importantly, they also retrieve a constraint on the ^12CO/^13CO ratio: 10.2-42.6 at 68% confidence. The sub-solar carbon and oxygen abundances in WASP-77A b suggest an atmosphere depleted in metals, at least based on extrapolation from the solar system planets. Although a comparison to solar abundances is a natural first step, the key is to make the comparison with the planet's host star abundances. Refer to Section <ref> for an abundance comparison between the host star WASP-77A and the companion planet WASP-77A b and see Table <ref> for a list of all available carbon and oxygen abundances and metallicities for the WASP-77A system.
<cit.> presents another set of secondary eclipse observations for WASP-77A b, this time with the Hubble Space Telescope's WFC3 covering wavelengths 1.1-1.7 μm and Spitzer/IRAC at 3.6 and 4.5 μm. This atmospheric retrieval places a 3σ lower limit on the atmospheric H_2O abundance: log(n_H2O) > -4.78 but was unable to constrain the CO abundance and individual [C/H] and [O/H] abundances. There are no carbon-bearing molecule features resolved in this emission spectrum, which results in a poor constraint on the C/O ratio: a 2σ upper limit of C/O = 0.78. The retrieval's best fit metallicity is [M/H] = 0.43^+0.36_-0.28. This value is much higher and less precise than that derived from the high-resolution Gemini/IGRINS observations. After performing a grid fit to a combination of the WFC3 and Spitzer data, they derive a metallicity of [M/H] = 0.10^+0.43_-0.31, which is more consistent with the high-resolution measurement.
Recent secondary eclipse observations of WASP-77A b with JWST/NIRSpec <cit.> cover the wavelength range 2.8-5.2 μm and contain several H_2O and CO features but no CO_2. The atmospheric retrieval finds a sub-solar metallicity [M/H] = -0.91^+0.24_-0.16 and a C/O ratio of 0.36^+0.10_-0.09, as well as molecular abundances of log_10(n_H_2O) = -4.26^+0.14_-0.10 for water and log_10(n_CO) = -4.58^+0.27_-0.24 for carbon monoxide. These results agree with the Gemini/IGRINS abundances from <cit.> within ∼ 1σ for [Fe/H] and ∼ 1.8σ for the C/O ratio.
§ METHODS: ISOTOPE ANALYSIS
§.§ The Spectrum
We observed WASP-77A with the HIRES spectrograph <cit.> at the Keck Observatory. The 582-second exposure of WASP-77A was taken on February 22, 2021 06:03 UT by the California Planet Search collaboration (Lead: Andrew Howard, Caltech). They use the C2 decker under effective seeing conditions of roughly 1.3″. Standard reduction routines and telluric corrections are applied, resulting in a spectrum with R ∼ 45,000 and a median signal-to-noise ratio of 82. The final spectrum covers a wavelength range of 3600-4400 Å. This is the same spectrum analyzed in <cit.>. Additionally, we also utilize publicly available ESO data for WASP-77A. This 14400-second exposure of WASP-77A was taken on October 29, 2021 02:03 UTC using VLT/ESPRESSO's ultra-high-resolution (UHR) mode. The data are reduced using the ESPRESSO data-reduction software (DRS) pipeline, and the final extracted, wavelength-calibrated spectrum has R ∼ 140,000 and S/N = 511 and covers a wavelength range of 3770-7898 Å.
§.§ Generating the Synthetic Spectra Grid
We measure CH isotopologue abundances in WASP-77A by comparing the observed spectrum to synthetic spectra generated from custom 1D hydrostatic MARCS stellar atmosphere models <cit.> and the LTE radiative transfer code TurboSpectrum (Version 15.1) <cit.>. Although there is an extensive grid of MARCS models, we use the interpolation routine developed by Thomas Masseron [https://marcs.astro.uu.se/software.php] to interpolate models with the same physical parameters as those of our target star. We also use the set of atomic and molecular line lists, assuming solar abundances from <cit.>, as described in <cit.>. This includes atomic line data from the Vienna Atomic Line Database (VALD, <cit.>) and molecular line lists from multiple sources, including VALD for TiO, the Kurucz (Smithsonian) Atomic and Molecular Database <cit.>, and the high-resolution transmission molecular absorption database (HITRAN, <cit.>). The most important molecular bands for this study are: FeH <cit.>, CN <cit.>, and CH <cit.>.
Initially, we adopt the effective temperature, surface gravity, and [Fe/H] (as a proxy for the bulk stellar metallicity) values for WASP-77 A derived from our Keck/HIRES spectrum using the machine-learning tool KeckSpec: T_eff = 5569 ± 77 K, log g = 4.45 ± 0.09, and [Fe/H] = 0.01 ± 0.03 <cit.>. At this point, we have not altered individual elemental abundances from solar values. However, the base model generated from these parameters did not fit the spectral lines surrounding several target ^13CH lines in the HIRES spectrum, particularly the atomic iron lines and molecular CH lines. The problem persists for a couple of lines in the higher-quality ESPRESSO spectrum, but the overall fit is greatly improved. This leads us to believe that the issue with these specific lines lies in the opacity calculations or in the atomic and molecular line lists used to synthesize the spectra, rather than in the base stellar parameters. Rather than adopt the stellar parameters of one particular study, we instead adopt weighted averages (with weights = 1/σ^2, where σ is the parameter uncertainty), since the effective temperature and surface gravity values are similar across the three previous host star studies <cit.>: T_eff = 5545 ± 22 K and log g = 4.45 ± 0.02. Please refer to Section <ref> for further explanation of how our choice of fundamental stellar parameters affects the derived ^12C/^13C ratio.
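As an illustration of this weighted averaging, the short Python sketch below (a minimal example, not code used in this work) combines the three literature effective temperatures with weights 1/σ^2; it reproduces the adopted value to within a few kelvin.

import numpy as np

# Literature T_eff measurements for WASP-77A (K) and their 1-sigma uncertainties
teff = np.array([5569.0, 5525.0, 5660.0])
sigma = np.array([77.0, 25.0, 60.0])

w = 1.0 / sigma**2                        # weights = 1/sigma^2
teff_mean = np.sum(w * teff) / np.sum(w)
teff_err = 1.0 / np.sqrt(np.sum(w))       # uncertainty of the weighted mean

print(f"T_eff = {teff_mean:.0f} +/- {teff_err:.0f} K")   # ~5547 +/- 22 K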
From these chosen stellar parameters, we then generate a grid of synthetic spectra with ^12C/^13C ratios of 30, 50, 60, 80, 90, 100, 110, 130, and 160. These model spectra cover wavelengths 3600 - 4400 Å, where there are numerous ^12CH and ^13CH lines suitable for this isotopologue abundance analysis.
§.§ Line Selection
We utilize TurboSpectrum's built-in equivalent-width calculation feature to generate a list of equivalent widths of the ^12CH and ^13CH lines from the base model. From that list, we remove all ^13CH lines with EW < 1.5 Å and are left with 95 candidate absorption lines. We narrow down this line list via a visual inspection of our base model spectrum. Lines from the final set are later accepted or rejected from the final ^12C/^13C calculation depending on overall model fit and continuum renormalization. Please refer to Table <ref> for our final ^13CH line list. We also include information for atomic and molecular lines located within the renormalization window of our ^13CH lines of interest.
§.§ Continuum Re-normalization
After applying a Gaussian filter to smooth the model spectra to a spectral resolution of ∼45,000 for the HIRES spectrum and ∼140,000 for the ESPRESSO spectrum, and performing an RV correction to the observed spectrum, we begin the continuum selection and re-normalization following the methods described in <cit.>. The ^13CH features in the optical are significantly weaker than the ^13CO features identified in the M band (4.5-5.0 μm), and so this isotopic abundance analysis demands a more precise continuum-selection and re-normalization process than in <cit.>. Otherwise, even a slight misalignment between the observed spectrum and the model grid (in either wavelength or continuum normalization) makes determining the best-fit model nearly impossible.
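For reference, the resolution-matching step mentioned above can be sketched as follows. This is a simplified illustration that assumes the model is sampled on a uniform log-wavelength grid and that its intrinsic broadening is negligible; the function and variable names are ours, not taken from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_to_resolution(flux, loglam_step, r_target):
    # flux: model flux sampled on a uniform grid in ln(lambda)
    # loglam_step: grid step in ln(lambda); r_target: resolving power (FWHM)
    fwhm_loglam = 1.0 / r_target                    # FWHM in ln(lambda) units
    sigma_pix = fwhm_loglam / (2.355 * loglam_step)
    return gaussian_filter1d(flux, sigma_pix)

# e.g., degrade a model to R ~ 45,000 (HIRES) or R ~ 140,000 (ESPRESSO):
# model_hires = smooth_to_resolution(model_flux, 2.0e-6, 45000)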
For each of the selected ^13CH lines, we visually inspect a 4-5 Å window around the line center. We then carefully select the continuum/pseudocontinuum regions around the line of interest, making sure to exclude all other absorption features. From the selected continuum points, we perform an iterative sigma-clipping (2σ, then 1.5σ) to keep only those continuum/pseudocontinuum points with the best fit between the base model and the observed spectrum. For the remaining continuum/pseudocontinuum points, we perform a polynomial fit to the residuals R = O/M, where O is the observed flux and M is the interpolated model flux at each shifted, observed wavelength. Finally, we divide the observed spectrum by the polynomial fit to re-normalize and align the continuum/pseudocontinuum levels of the observed spectrum to the models. See Figure <ref> for an example renormalization using an isolated ^13CH line.
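A simplified sketch of this renormalization step is given below. It assumes the observed spectrum has already been RV-shifted and the model interpolated onto the observed wavelength grid; the clipping thresholds, polynomial degree, and names are illustrative rather than taken from our code.

import numpy as np

def renormalize(wave, obs, model, cont_mask, deg=2):
    # cont_mask: boolean mask of hand-selected continuum/pseudocontinuum pixels
    resid = obs / model                         # residuals R = O / M
    mask = cont_mask.copy()
    for clip in (2.0, 1.5):                     # iterative sigma-clipping
        r = resid[mask]
        keep = np.abs(r - np.median(r)) < clip * np.std(r)
        idx = np.where(mask)[0][keep]
        mask = np.zeros_like(mask)
        mask[idx] = True
    coeffs = np.polyfit(wave[mask], resid[mask], deg)   # fit to surviving points
    return obs / np.polyval(coeffs, wave)               # re-normalized observed flux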
Unfortunately, even after a careful line-by-line renormalization, there are still regions of the observed spectrum that our models cannot reproduce successfully. This may be due to spectral noise and bad pixels in the observed spectrum, or to shortcomings in the model assumptions, the opacity calculations, and the atomic and molecular line lists used to synthesize the spectra. Other factors such as NLTE effects are typically negligible in solar-type stars <cit.>. Magnetic field effects are also negligible when measuring CNO abundances <cit.> but may result in an underestimate of iron abundances <cit.>. Regardless, this means that we must remove from our analysis those lines where the poor fit interferes with our ability to reliably determine the stellar ^12C/^13C ratio. This leaves only three candidate lines for the HIRES spectrum and five lines for the ESPRESSO spectrum where both the continuum/pseudocontinuum and the nearby atomic absorption lines of the model spectra align well with those of the observed spectrum.
§.§ Statistical Significance
We remind the reader that our HIRES spectrum has a resolution R ∼ 45,000 and a median signal-to-noise ratio of 82 over the wavelength range of 3600-4400 Å. Although this spectrum has fairly high S/N, the ^13CH lines in this study are exceedingly weak. Therefore, we calculate the BIC (BIC = χ^2 + n_dof ln(N_data), where n_dof = 1 is the number of free parameters and N_data is the number of data points used in the fit) for the selected ^13CH lines to test the statistical significance of our detection. We determine ΔBIC_160-BF (the BIC between the observed spectrum and the model with ^12C/^13C = 160 minus the BIC between the observed spectrum and the best-fit ^12C/^13C model) for the ^13CH lines in Table <ref>. This ΔBIC_160-BF tells us whether the observed spectrum significantly favors low ^13C enrichment (the ^12C/^13C = 160 model) or higher ^13C enrichment (the best-fit ^12C/^13C model).
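The comparison can be written compactly as in the following sketch; the numerical values are illustrative (of the same order as those in Table <ref>) rather than tabulated results.

import numpy as np

def bic(chi2, n_free, n_data):
    # BIC = chi^2 + n_free * ln(N_data), with n_free = 1 (the 12C/13C ratio)
    return chi2 + n_free * np.log(n_data)

chi2_best, chi2_160, n_points = 5.9, 45.5, 8     # illustrative values for one line
delta_bic = bic(chi2_160, 1, n_points) - bic(chi2_best, 1, n_points)
# With a common n_points this reduces to chi2_160 - chi2_best; a large positive
# value means the line strongly favors the best-fit, more 13C-enriched model.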
Table <ref> demonstrates a clear difference in the BIC, for each of our selected ^13CH lines, in both the HIRES and ESPRESSO analyses. This means that both observed spectra favor models with solar to super-solar ^13C enrichment.
§.§ Deriving Individual Line ^12C/^13C Ratios
We measure the ^12C/^13C ratio in WASP-77 A following a process similar to that described in <cit.>. We calculate χ^2 between the re-normalized observed spectrum and each model spectrum, for each line, over a wavelength range (typically 0.1-0.3 Å) previously determined during the continuum selection process. For each set of χ^2 values, we use a cubic spline fit to interpolate the minimum χ^2, which corresponds to the best-fit ^12C/^13C abundance. We infer 1σ confidence intervals using the region where Δχ^2 ≤ 1 <cit.>. This gives us an estimate of the stellar ^12C/^13C ratio for each selected ^13CH line. Figures <ref> - <ref> show the renormalized spectra, a zoom-in on the ^13CH line, the χ^2 minimization, and the derived ^12C/^13C ratio for our selected lines in the HIRES and ESPRESSO spectra. Table <ref> contains more details on the χ^2 minimization, including the χ^2 fit window, the number of points fit, and the χ^2 at the minimum associated with the best-fit ^12C/^13C ratio for each line. We obtain three individual ^12C/^13C estimates from the HIRES analysis and five from the ESPRESSO analysis. These values range from ^12C/^13C = 47 - 75. Please refer to Table <ref> for the list of ^12C/^13C ratios and their statistical uncertainties.
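A minimal sketch of this per-line procedure is shown below, assuming the grid χ^2 values have already been computed against the renormalized observed line; the χ^2 curve here is synthetic and purely illustrative.

import numpy as np
from scipy.interpolate import CubicSpline

ratios = np.array([30., 50., 60., 80., 90., 100., 110., 130., 160.])   # model grid
chi2 = 6.0 + 0.004 * (ratios - 65.0) ** 2     # illustrative chi^2 values per model

spline = CubicSpline(ratios, chi2)
fine = np.linspace(ratios[0], ratios[-1], 2000)
chi2_fine = spline(fine)

best = fine[np.argmin(chi2_fine)]                      # best-fit 12C/13C
inside = fine[chi2_fine <= chi2_fine.min() + 1.0]      # Delta chi^2 <= 1 region
lo, hi = inside.min(), inside.max()
print(f"12C/13C = {best:.0f} (+{hi - best:.0f} / -{best - lo:.0f})")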
§.§ Systematic Uncertainties
Prior to determining the final ^12C/^13C ratio for WASP-77A, we must consider the systematic uncertainties involved, particularly those associated with our choice of fundamental stellar parameters and elemental abundances. To test how the selection of fundamental stellar parameters impacts the derived ^12C/^13C ratios, we generate several more model grids, this time varying the effective temperature, surface gravity, bulk stellar metallicity, and individual C, O, Ti, and Fe abundances. We find that changing the individual elemental abundances (of species in close proximity to selected ^13CH lines: [Fe/H], [Ti/H], [C/H], [O/H]) over the range spanned by the literature values (+0.1, -0.2 dex) hardly changes the model fit and the derived ^12C/^13C ratios at all. However, the effective temperature, surface gravity, and bulk stellar metallicity do have a noticeable effect on the derived ^12C/^13C ratios.
To determine the systematic uncertainty in the derived ^12C/^13C ratio due to changes in effective temperature, we vary the stellar effective temperature over a span comparable to the spread of literature measurements, in steps of 25 K around our chosen value of 5545 K, while holding log g and metallicity constant. We find that this plausible range of effective temperature values produces ^12C/^13C ratios ranging from 90-110 for the ^13CH line at 4231.412 Å. However, the changes in ^12C/^13C due to the step change in effective temperature (σ_T = 3) are much smaller than the statistical uncertainties on our measurement of ^12C/^13C derived from the HIRES spectrum (σ = 17). Thus, we are not limited by the uncertainties on the stellar parameters in this case. For the ESPRESSO spectrum, where the data are much more precise, the changes in ^12C/^13C due to the step change in effective temperature (σ_T = 4) are larger than the statistical uncertainties on our measurement of ^12C/^13C (σ = 1). In this case, we are somewhat limited by the uncertainties on the stellar parameters.
We repeat the process to determine systematic uncertainties of the derived ^12C/^13C ratio due to changes in surface gravity. We now vary the stellar surface gravity in steps of 0.05 around our chosen value of 4.45, while holding effective temperature and metallicity constant. We find systematic uncertainties in the derived ^12C/^13C due to the change in log g to be: σ_G = 2 for the HIRES analysis and σ_G = 3 for the ESPRESSO analysis. Again, these uncertainties limit the precision of the ^12C/^13C ratio derived from the higher-quality ESPRESSO spectrum, but not the HIRES spectrum.
Finally, to determine the systematic uncertainty in the derived ^12C/^13C ratio due to changes in metallicity (Z), we vary Z in steps of 0.1 dex. These steps are larger than the uncertainties on the stellar metallicity reported in the literature (< 0.05 dex), but we chose them to inspect whether non-solar metallicities would better reproduce spectral lines such as Ti, Mn, and CH. We therefore vary the stellar metallicity in steps of 0.1 dex around our chosen value of +0.00 dex, while holding the effective temperature and log g constant. We find that this plausible range of Z values produces a much broader range of ^12C/^13C ratios, from 80 to 140. However, none of the reasonable Z values produce a model spectrum that significantly improves the fit to nearby spectral lines such as Ti, Mn, and CH. The models computed with Z = +0.00 dex fit the ESPRESSO spectrum much better than they fit the HIRES spectrum.
We find that the derived ^12C/^13C changes by σ_Z = 7 for the HIRES analysis and σ_Z = 6 for the ESPRESSO spectrum with each step of ± 0.1 dex in bulk metallicity Z. These σ_Z are overestimates, however, since the actual uncertainties on the stellar metallicity are less than half of our chosen step size. Therefore, we report final systematic uncertainties in the derived ^12C/^13C ratio due to changes in stellar metallicity as σ_Z = 3.5 for the HIRES analysis and σ_Z = 3 for the ESPRESSO analysis.
Finally, we calculate the quadrature sum of σ_T, σ_G, and σ_Z as the final systematic uncertainties in the ^12C/^13C ratio: σ_sys = 5 for the HIRES analysis and σ_sys = 6 for the ESPRESSO analysis.
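For completeness, the final error budget described above can be assembled as in this short sketch (a minimal illustration using the σ values quoted in the text):

import numpy as np

def quad_sum(*sigmas):
    return float(np.sqrt(np.sum(np.square(sigmas))))

sys_hires = quad_sum(3, 2, 3.5)       # sigma_T, sigma_G, sigma_Z  ->  ~5
sys_espresso = quad_sum(4, 3, 3.0)    #                            ->  ~6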
§.§ Deriving the Stellar ^12C/^13C Ratio
Because the target ^13CH spectral lines have low statistical significance and are barely visible by eye when considered individually, we derive the final stellar ^12C/^13C ratio from these individual estimates collectively, using both the statistical uncertainties and the systematic uncertainties discussed above. We determine a final, weighted average of ^12C/^13C = 66 ± 18 for WASP-77 A from our HIRES spectrum analysis, and a consistent, but more precise, ^12C/^13C = 51 ± 6 from our ESPRESSO spectrum analysis. We use the methods detailed in <cit.> Section 2.8 ("Model 2") to calculate a weighted mean using asymmetric uncertainties.
Contrary to frequent assumptions, we do not find a solar ^12C/^13C ratio for WASP-77A, but rather a sub-solar and even near-ISM ^12C/^13C ratio (solar ^12C/^13C = 91.4 ± 1.3, ISM ^12C/^13C ≈ 68) <cit.>. Compared to its host star, WASP-77A b appears slightly enriched in ^13C with ^12C/^13C = 26.4 ± 16.2 <cit.>. We further discuss what these new abundance ratios may mean for WASP-77A b's formation in the following section.
WASP-77 A (Host) vs. WASP-77 Ab (Planet) Abundances
Host Keck/HIRES ARC/ARCES MPG/ESO/FEROS **Adopted
Parameter <cit.> <cit.> <cit.> Parameters
T_eff (K) 5569 ± 77 5525 ± 25 5660 ± 60 5545
log g 4.45 ± 0.09 4.44 ± 0.02 4.49 ± 0.03 4.45
[Fe/H] 0.01 ± 0.03 -0.05 ± 0.02 -0.15 ± 0.06 +0.00
[C/H] -0.02 ± 0.05 0.10 ± 0.09 -0.04 ± 0.04 +0.00
[O/H] 0.06 ± 0.07 0.23 ± 0.02 -0.04 ± 0.04 +0.00
C/O 0.46 ± 0.09 0.44 ± 0.07 0.59 ± 0.08 -
Planet Gemini/IGRINS HST/WFC3 JWST/NIRSpec
Parameter <cit.> <cit.> <cit.>
[M/H] -0.48 ± 0.15 0.43 ± 0.36 -0.91 ± 0.24
[C/H] -0.46 ± 0.17 - -
[O/H] -0.49 ± 0.17 - -
C/O 0.59 ± 0.08 < 0.78 0.36 ± 0.10
^12C/^13C 26.4 ± 16.2 - -
Parameters and system-wide abundance inventory for host star WASP-77A and its hot Jupiter companion WASP-77A b. **This Paper. We derive the stellar ^12C/^13C ratio for WASP-77A using the same HIRES spectrum as <cit.> and a publicly available VLT/ESPRESSO spectrum.
Line List
Species Line Center Equivalent Width Renormalization
(Å) (Å) Window (Å)
^13CH* 3982.247 1.60 3981.321 - 3982.848
Fe I 3981.512 32.735 -
Fe II 3981.612 14.871 -
Ti I 3981.762 111.763 -
Fe I 3981.771 111.74 -
Ti II 3981.991 69.622 -
Ti I 3982.481 65.561 -
Mn I 3982.576 39.202 -
Y II 3982.592 81.178 -
Fe I 3982.624 17.647 -
^13CH* 4048.270 - 4047.913 - 4048.332
CH I 4048.062 30.16 -
^13CH* 4223.934 1.74 4223.645 - 4224.065
Fe I 4223.729 27.096 -
^13CH* 4231.412 1.84 4230.701 - 4231.860
Ni I 4231.032 46.739 -
Fe I 4231.597 12.83 -
CH I 4231.000 67.704 -
CH I 4231.035 69.041 -
CH I 4231.592 41.903 -
CH I 4231.680 43.164 -
^13CH* 4242.948 2.13 4241.542 - 4243.066
Co I 4242.089 15.963 -
Cr II 4242.364 52.167 -
Fe I 4242.595 49.534 -
Fe I 4242.729 60.709 -
CH I 4242.156 44.291 -
CH I 4242.300 46.168 -
CH I 4242.430 69.514 -
CH I 4242.603 72.165 -
^13CH* 4259.480 - 4257.706 - 4259.549
Zr II 4258.041 23.588 -
Fe II 4258.148 43.764 -
Fe I 4258.186 29.79 -
Fe I 4258.315 85.959 -
Fe II 4258.328 16.086 -
Ti I 4258.524 11.828 -
Fe I 4258.611 68.699 -
Fe I 4258.951 52.447 -
Fe I 4259.336 22.261 -
CH I 4258.474 45.463 -
CH I 4258.613 24.176 -
CH I 4258.722 48.086 -
CH I 4258.725 25.54 -
CH I 4259.087 45.468 -
CH I 4259.282 48.09 -
^13CH* 4268.304 2.14 4266.457 - 4268.384
Fe I 4266.958 14.204 -
Fe I 4266.964 79.921 -
Fe I 4267.826 96.387 -
CH I 4266.644 26.397 -
CH I 4266.745 25.4 -
CH I 4266.762 26.443 -
CH I 4266.910 26.815 -
CH I 4267.383 69.777 -
CH I 4267.589 25.404 -
CH I 4267.692 26.819 -
CH I 4267.748 72.441 -
CH I 4267.787 69.781 -
CH I 4268.002 30.661 -
CH I 4268.105 72.445 -
CH I 4268.113 30.14 -
We include species, line center, and equivalent width for atomic and molecular lines within the renormalization window for each of our ^13CH lines of interest (starred). CH line lists are from <cit.>. Atomic lines are from the Vienna Atomic Line Database <cit.>. Equivalent widths are calculated using TurboSpectrum <cit.>. Lines without equivalent widths were identified by eye.
χ^2 Minimization and Individual ^12C/^13C Ratios
Species Line Center χ^2 Fit Window # of Points Fit χ^2 at Minimum ΔBIC_160 - Best ^12C/^13C ^12C/^13C
Å Å H, E H, E H, E H E
^13CH 3982.247 3982.198 - 3982.306 7, - 5.4, - ΔBIC_160-50 = 19.775, - 59.7^+42_-15 -
^13CH 4048.270 4048.219 - 4048.318 5, 15 -, 13.7 -, ΔBIC_160-80 = 21.050 - 74.9^+10_-7.9
^13CH 4223.934 4223.859 - 4224.051 11, 27 -, 32.7 -, ΔBIC_160-60 = 733.981 - 54.8^+1.1_-1.1
^13CH 4231.412 4231.306 - 4231.497 10, 21 14.6, - ΔBIC_160-100 = 6.691, - 69.6^+16_-11 -
^13CH 4242.948 4242.880 - 4243.015 -, 14 -, 39.8 -, ΔBIC_160-50 = 852.014 - 47.2^+1.1_-1.0
^13CH 4259.480 4259.399 - 4259.528 7, 12 -, 9.64 -, ΔBIC_160-50 = 137.960 - 48.2^+2.7_-2.5
^13CH 4268.304 4268.232 - 4268.373 8, 14 5.9, 19.2 ΔBIC_160-50 = 39.618, 704.753 63.3^+20_-12 50.5^+1.1_-1.1
The first two columns show the location of our selected ^13CH lines. Column 3 gives the χ^2 fit window, followed by the number of data points fit (Column 4), the calculated χ^2 minimum (Column 5), and the ΔBIC_160 - Best (Column 6) associated with the best-fit ^12C/^13C ratios for the HIRES (H) and ESPRESSO (E) spectra (Columns 7 and 8, respectively). ΔBIC_160 - Best is calculated as the BIC difference between the least ^13C-enriched model, with ^12C/^13C = 160, and the best-fit model.
§ DISCUSSION
We now see slight deviations from solar abundances in WASP-77A's photosphere. The weighted averages of the three stellar abundance measurements from <cit.> are [C/H] = -0.02 ± 0.03, [O/H] = 0.17 ± 0.02, C/O = 0.49 ± 0.05, and [Fe/H] = -0.05 ± 0.02. WASP-77A potentially has a [C/H] consistent with solar, an elevated [O/H], a sub-solar C/O (solar C/O = 0.59 ± 0.08 <cit.>), and a slightly sub-solar metallicity. Additionally, we now see a potential difference between the solar ^12C/^13C ratio (91.4 ± 1.3) and our measurement of WASP-77A's ^12C/^13C ratio (51 ± 6).
Now, by comparing host star WASP-77A's carbon, oxygen, and metallicity values to those of its hot Jupiter companion, we notice WASP-77A b's sub-stellar [C/H] = -0.46 ± 0.17, significant oxygen depletion with [O/H] = -0.49 ± 0.14, a C/O = 0.50 ± 0.06 consistent with that of its host star, and a sub-stellar metallicity of [M/H] = -0.60 ± 0.13.
§.§ WASP-77A b's Formation Location
Hot Jupiter systems are of particular interest for abundance studies because of their amenability to observation, but also because of their formation histories. There is only sufficient material for in-situ hot Jupiter formation in large protoplanetary disks, where there are also likely to be multiple cold Jupiters in the outer disk <cit.>. This does not appear to be the case for WASP-77A b. Thus, we expect non-stellar abundances in hot-Jupiter atmospheres, characteristic of the planet's formation location and accretion history elsewhere in the disk. The following formation scenario for WASP-77 A b relies on the volatile abundance inventory for the WASP-77 A system (Table <ref>: including both host star and planetary C/H, O/H, C/O, and ^12C/^13C ratios) and its usage in determining formation and migration diagnostics, as discussed in Sections <ref> and <ref>.
A sub-stellar C/O ratio would indicate radial migration during which the planet accumulates O-rich and C-poor material; however, WASP-77A b's C/O ratio is mostly consistent with that of its host star <cit.>. While the similar C/O ratios alone may be indicative of formation close to the star (within the H_2O snowline at 5 AU), it is critical to take into account the individual carbon and oxygen abundances as well. The sub-stellar carbon and oxygen abundances measured in WASP-77A b's atmosphere are a signature of formation in the outer disk, beyond the H_2O and CO_2 snowlines (located at around 5 and 10 AU, respectively), where most carbon and oxygen is trapped in grains, leaving the planet-feeding gas depleted in volatiles <cit.>. Planets forming in this outer region of the disk are expected to have sub-stellar C/H and O/H. In fact, the sub-solar C/H and O/H and solar C/O ratio of WASP-77A b are well reproduced by planet formation and accretion models, which predict the planet's formation beyond the CO_2 evaporation front <cit.>. Overall, WASP-77A b's sub-stellar atmospheric carbon and oxygen abundances and stellar C/O ratio are indicative of this planet's formation beyond the host's H_2O and CO_2 snowlines, located at ∼ 5 AU and 10 AU, respectively <cit.>.
Though elemental abundances alone constrain WASP-77A b's formation location to be beyond both the H_2O and CO_2 snowlines, i.e., at a separation greater than 10 AU, our goal was to make a complementary measurement of the stellar ^12C/^13C ratio to determine whether (1) WASP-77A b's ^13C enrichment was inherited from the host star, placing the formation location between the CO_2 snowline at 10 AU and the CO snowline at 20 AU, or (2) WASP-77A b's ^13C enrichment was due to the accretion of ^13C-rich material found beyond the CO snowline at 20 AU. Our additional finding that WASP-77A has a sub-solar ^12C/^13C ratio of 51 ± 6 shows that the hot Jupiter WASP-77A b may indeed be ^13C-enriched (^12C/^13C = 10.2-42.6), but perhaps not as significantly as when a solar ^12C/^13C was assumed for the host star. Again, protoplanetary disk chemistry models produce a broad range of ^12C/^13C in CO as the mid-plane height and radial distance from the star vary <cit.>. Although this comparison of the host star and exoplanet ^12C/^13C ratios appears consistent with previous studies that place WASP-77A b's formation beyond the stellar H_2O and CO_2 snowlines located at 5 and 10 AU, respectively, it is unclear whether the difference is large enough to suggest formation beyond the stellar CO snowline, where significant ^13C enrichment is expected to occur <cit.>. Notably, WASP-77A b has a ^12C/^13C ratio inconsistent with that of any planet in our own solar system, but this is perhaps not unexpected, as WASP-77A b's formation and accretion history is thought to be very different from that of the solar system planets. The ^13C enrichment found in WASP-77A b's atmosphere relative to its host star, together with the elemental carbon and oxygen abundances in the system, nonetheless disfavors an "in situ" formation scenario. This new piece of evidence corroborates our understanding of hot Jupiter formation from a dynamical standpoint. That is, hot Jupiters likely did not form in situ, but rather migrated to their current positions from the outer disk. Refer to Section <ref> for a brief dynamical explanation of how hot Jupiters form.
We emphasize that the interpretation of abundance ratios measured in giant exoplanet atmospheres relies on the abundance ratios measured in the host star's photosphere and on planet formation and accretion models. For a further analysis of the interplay between a planet's atmospheric carbon and oxygen abundances and different planet formation pathways in the context of the WASP-77 system, we refer the reader to <cit.>.
§.§ Possible Migration Mechanisms
There are several migration mechanisms that could transform a warm/cool Jupiter into a short-period hot Jupiter. These include coplanar high-eccentricity migration <cit.>, secular chaos <cit.>, and Lidov-Kozai cycling <cit.>. A recent study based on data from the California Legacy Survey <cit.> argues that coplanar high-eccentricity migration is the most likely pathway for hot-Jupiter formation <cit.>. In this scenario, two cold gas giant planets, through the exchange of angular momentum and tidal interactions with the host star, culminate in a hot Jupiter. <cit.> finds that giant planet multiplicity is ubiquitous, with an average of 1.3^+1.0_-0.6 companions per hot Jupiter and 1.0 ± 0.3 companions per warm or cold Jupiter. This provides plenty of opportunities for hot Jupiter formation via coplanar high-eccentricity migration. Most notably, this study finds that hot Jupiter-class planets tend to have an outer companion with at least three times their mass. In the WASP-77 system, we do not see an additional Jupiter-class companion with mass M > 3 M_J that may have driven this type of migration, but we do see a possible suspect in the secondary star, WASP-77B. This star is not well characterized, but studies find it is of spectral type K5 with an effective temperature T_eff = 4810 ± 100 K, which places its mass anywhere from 0.5 to 0.8 solar masses based on mass-effective-temperature relationships for main-sequence stars <cit.>. This is well over three times the mass of WASP-77A b and may be the cause of the planet's migration to its current location ∼ 0.02 AU from its host star.
§.§ WASP-77B Contamination
Although we have mostly discussed the primary star WASP-77A and its companion planet WASP-77A b, there is a third member of the system: the K dwarf star WASP-77B at a separation of approximately 3 arcseconds <cit.>. It could be argued that the peculiar abundances observed in WASP-77A b may be a result of chemical contamination from the secondary star WASP-77B. However, there is no clear avenue for determining elemental abundances for this star. There are only ∼ 4 low signal-to-noise (average S/N = 30) ESO/HARPS spectra of WASP-77B in the literature, no effective temperature, surface gravity, or metallicity constraints, and therefore no elemental abundances for this star in the literature. Nonetheless, such contamination seems unlikely, as the seeing for our Keck/HIRES spectrum was 1.3″ and that for the VLT/ESPRESSO spectrum was 0.45″.
§ CONCLUSIONS
§.§ Summary
Using the James Webb Space Telescope, ground-based 8m-class telescopes, and ultra-high-resolution spectroscopy, we are able to characterize the composition of exoplanet atmospheres like never before. However, it is necessary to properly contextualize these atmospheric abundance constraints by comparing them to the host star's abundances rather than to solar abundances. In the case of the hot Jupiter WASP-77A b and its host star WASP-77A, the metallicity, carbon, and oxygen abundances provide insight into the planet's origins beyond the H_2O and CO_2 snowlines <cit.>. In this paper, we derive the ^12C/^13C ratio for WASP-77A to complement a similar measurement in WASP-77A b made by <cit.>. The sub-solar and near-ISM ^12C/^13C ratio we find for WASP-77A (51 ± 6) provides additional evidence for WASP-77A b's formation beyond the H_2O and CO_2 snowlines.
§.§ Prospects for Measuring Nitrogen and Oxygen Isotopologue Ratios in Cool Dwarf Stars
With the advent of some astounding exoplanet abundance constraints <cit.>, we must continue building the catalogue of elemental and isotopic host star abundances as in <cit.> with an emphasis on carbon, nitrogen, and oxygen. Stellar [N/H] and ^14N/^15N abundances are best measured using optical CN <cit.> and possibly NH lines. While [C/H] and ^12C/^13C may be measured using optical CH lines as we do here, they are preferentially measured for cooler K and M dwarfs in the near-infrared using CO isotopologue lines <cit.>. In the near-infrared, we may also derive [O/H], ^16O/^18O, and possibly ^17O/^18O using CO, OH and H_2O isotopologue lines <cit.>.
§.§ CNO Abundances in Brown Dwarfs
Besides exoplanets, brown dwarf abundances also play an important role in understanding planet formation. Y dwarfs specifically are cool enough that we may see absorption from molecules that dissociate in the atmospheres of short-period hot Jupiters and wide-orbit young Jupiters. Recent studies are able to measure ^12C/^13C <cit.>, ^14N/^15N <cit.>, and ^16O/^18O <cit.> in the atmospheres of cool brown dwarfs like 2M0355, WISE J1828, and 2M0415. CNO isotopologue ratios may provide chemical distinctions between brown dwarfs and super-Jupiters, or between top-down and bottom-up formation models, as we start seeing ISM-level or greater isotopologue ratios in brown dwarfs but sub-stellar and sub-ISM isotopologue ratios in the super-Jupiters.
§.§ CNO Abundances in the Context of Other Planetary Systems
Jupiter-class exoplanets and their host stars provide the most practical systems with which to test how carbon, oxygen, and their respective isotopic abundance ratios change with radial distance from the host star. Specifically, the planetary systems most amenable to complementary abundance analyses are those in the WASP (Wide Angle Search for Planets) catalog <cit.> and the young, wide-separation, directly imaged Jupiter-class planet systems. These planetary systems are routinely targeted by JWST and ground-based observatories for transmission and emission spectroscopy and provide the best avenue for the direct comparison of stellar and planetary carbon and oxygen abundances, both elemental and isotopic. These complementary abundance surveys do, however, come with several challenges, exemplified by the WASP-121 and TYC 8998-760-1 systems. WASP-121 and its ultra-hot Jupiter companion WASP-121 b <cit.> are well studied both in transmission and in secondary eclipse observations <cit.>. Though these observations may retrieve carbon and oxygen abundance constraints for the companion planet, complementary host star measurements are made much more difficult (and isotopic abundance measurements impossible) by the star's fast rotation. The recently discovered TYC 8998-760-1 is a young K2IV star hosting two wide-separation Jupiter-class planets <cit.>. As ESO/SPHERE direct imaging and spectroscopy prioritize observations of the planets <cit.>, the chemical composition of the host star remains obscure. Furthermore, the strong magnetic fields prevalent in these young stars make it difficult to model their spectra and determine fundamental effective temperature, surface gravity, and metallicity values (let alone elemental abundances) without significant processing of the stellar spectrum <cit.>. More work must be done to identify planetary systems where we may determine carbon, oxygen, and their respective isotopic abundances in both the host star and the companion planet.
The authors of this work would like to thank the California Planet Search team (https://exoplanets.caltech.edu/cps/) for providing the HIRES spectrum (originally analyzed in <cit.>) that was used in this analysis. We also thank the referee for an excellent discussion on the statistical significance of our measurements and the systematic uncertainties associated with our choice of model parameters, which both greatly improved this paper. Finally, the authors would also like to acknowledge the support of National Science Foundation grant AST-2108686 and NASA ICAR grant A21-0406-S003.
W. M. Keck Observatory
[Adibekyan et al.(2012)]Adibekyan_2012 Adibekyan, V. Z., Sousa, S. G., Santos, N. C., et al. 2012, A&A, 545, A32, doi: 10.1051/0004-6361/201219401
[Ali-Dib et al.(2017)]Ali-Dib_2017 Ali-Dib, M. 2017, , 467, 2845, doi: 10.1093/mnras/stx260
[Andrews et al.(2013)]Andrews_2013 Andrews, S. M., Rosenfeld, K. A., Kraus, A. L., and Wilner, D. J. 2013, , 771, 129, doi: 10.1088/0004-637X/771/2/129
[Asplund et al.(2021)]Asplund_2021 Asplund, M., Amarsi, A. M., and Grevesse, N. 2021, A&A, 653, A141, doi: 10.1051/0004-6361/202140445
[Atreya et al.(2016)]Atreya_2016 Atreya, S. K., Crida, A., Guillot, T., et al. 2016, arXiv e-prints, arXiv:1606.04510, doi: 10.48550/arXiv.1606.04510
[August et al.(2023)]August_2023 August, P. C., Bean, J. L., Zhang, M., et al. 2023, . https://arxiv.org/abs/2305.07753
[Avni et al.(1976)]Avni_1976 Avni, Y. 1976, , 210, 642, doi: 10.1086/154870
[Ayres et al.(2013)]Ayres_2013 Ayres, T. R., Lyons, J. R., Ludwig, H.-G., Caffau, E., and Wedemeyer-Böhm, S. 2013, , 765, 46, doi: 10.1088/0004-637x/765/1/46
[Barber et al.(2006)]Barber_2006 Barber, R. J., Tennyson, J., Harris, G. J., and Tolchenov, R. N. 2006, , 368, 1087, doi: 10.1111/j.1365-2966.2006.10184.x
[Barlow et al.(2004)]Barlow_2004 Barlow, R. 2004, Asymmetric Errors. https://arxiv.org/abs/physics/0401042
[Barrado et al.(2023)]Barrado_2023 Barrado, D., Mollière, P., Patapis, P., et al. 2023, Nature, 624, 263, doi: 10.1038/s41586-023-06813-y
[Batygin et al.(2016)]Batygin_2016 Batygin, K., Bodenheimer, P. H., and Laughlin, G. P. 2016, , 829, 114, doi: 10.3847/0004-637X/829/2/114
[Bedell et al.(2018)]Bedell_2018 Bedell, M., Bean, J. L., Meléndez, J., et al. 2018, , 865, 68, doi: 10.3847/1538-4357/aad908
[Bergin et al.(2024)]Bergin_2024 Bergin, E. A., Bosman, A., Teague, R., et al. 2024, arXiv, arXiv:2403.09739, doi: 10.48550/arXiv.2403.09739
[Bitsch and Mah(2023)]Bitsch_Mah_2023 Bitsch, B., and Mah, J. 2023, A&A, 679, A11, doi: 10.1051/0004-6361/202347419
[Bitsch et al.(2022)]Bitsch_2022 Bitsch, B., Schneider, A. D., and Kreidberg, L. 2022, A&A, 665, A138, doi: 10.1051/0004-6361/202243345
[Bohn et al.(2020)]Bohn_2020 Bohn, A. J., Kenworthy, M. A., Ginski, C., et al. 2020, , 898, L16, doi: 10.3847/2041-8213/aba27e
[Boley et al.(2021)]Boley_2021 Boley, K. M., Wang, J., Zinn, J. C., et al. 2021, AJ, 162, 85, doi: 10.3847/1538-3881/ac0e2d
[Boley et al.(2024)]Boley_2024 Boley, K. M., Christiansen, J. L., Zink, J., et al. 2024, arXiv e-prints, arXiv:2407.13821, doi: 10.48550/arXiv.2407.13821
[Bosman et al.(2021)]Bosman_2021 Bosman, A. D., Alarcón, F., Bergin, E. A., et al. 2021, , 257, 7, doi: 10.3847/1538-4365/ac1435
[Botelho et al.(2020)]Botelho_2020 Botelho, R. B., Milone, A. d. C., Meléndez, J., et al. 2020, , 499, 2196, doi: 10.1093/mnras/staa2917
[Brewer and Fischer(2016)]Brewer_Fischer_2016 Brewer, J. M., and Fischer, D. A. 2016, , 831, 20, doi: 10.3847/0004-637X/831/1/20
[Brewer et al.(2016)]Brewer_2016 Brewer, J. M., Fischer, D. A., Valenti, J. A., and Piskunov, N. 2016, , 225, 32, doi: 10.3847/0067-0049/225/2/32
[Brooke et al.(2014)]Brooke_2014 Brooke, J. S. A., Ram, R. S., Western, C. M., et al. 2014, , 210, 23, doi: 10.1088/0067-0049/210/2/23
[Castelli and Kurucz(2003)]Castelli_Kurucz_2003 Castelli, F., and Kurucz, R. L. 2003, Modelling of Stellar Atmospheres, ed. N. Piskunov, W. W. Weiss, and D. F. Gray, Vol. 210, A20, doi: 10.48550/arXiv.astro-ph/0405087
[Coria et al.(2023)]Coria_2023 Coria, D. R., Crossfield, I. J. M., Lothringer, J., et al. 2023, , 954, 121, doi: 10.3847/1538-4357/acea5f
[Costes et al.(2024)]Costes_2024 Costes, J. C., Xuan, J. W., Vigan, A., et al. 2024, https://arxiv.org/abs/2404.11523
[Crossfield(2023)]Crossfield_2023 Crossfield, I. J. M. 2023, , 952, L18, doi: 10.3847/2041-8213/ace35f
[Crossfield et al.(2019)]Crossfield_2019 Crossfield, I. J. M., Lothringer, J. D., Flores, B., et al. 2019, , 871, L3, doi: 10.3847/2041-8213/aaf9b6
[de Regt et al.(2024)]de_Regt_2024 de Regt, S., Gandhi, S., Snellen, I. A. G., et al. 2024, arXiv e-prints, arXiv:2405.10841, doi: 10.48550/arXiv.2405.10841
[Delgado Mena et al.(2021)]Delgado_Mena_CO_2021 Delgado Mena, E., Adibekyan, V., Santos, N. C., et al. 2021, A&A, 655, A99, doi: 10.1051/0004-6361/202141588
[Delrez et al.(2016)]Delrez_2016 Delrez, L., Santerne, A., Almenara, J. M., et al. 2016, , 458, 4025, doi: 10.1093/mnras/stw522
[Dulick et al.(2003)]Dulick_2003 Dulick, M., Bauschlicher, C. W., J., Burrows, A., et al. 2003, , 594, 651, doi: 10.1086/376791
[Evans et al.(2016)]Evans_2016 Evans, D. F., Southworth, J., Maxted, P. F. L., et al. 2016, A&A, 589, A58, doi: 10.1051/0004-6361/201527970
[Evans et al.(2018)]Evans_2018 Evans, T. M., Sing, D. K., Goyal, J. M., et al. 2018, AJ, 156, 283, doi: 10.3847/1538-3881/aaebff
[Fabbian et al.(2012)]Fabbian_2012 Fabbian, D., Moreno-Insertis, F., Khomenko, E., and Nordlund, A. 2012, A&A, 548, A35, doi: 10.1051/0004-6361/201219335
[Finnerty et al.(2024)]Finnerty_2024 Finnerty, L., Xuan, J. W., Xin, Y., et al. 2024, AJ, 167, 43, doi: 10.3847/1538-3881/ad1180
[Fortney(2012)]Fortney_2012 Fortney, J. J. 2012, , 747, L27, doi: 10.1088/2041-8205/747/2/L27
[Gandhi et al.(2023)]Gandhi_2023 Gandhi, S., de Regt, S., Snellen, I., et al. 2023, , 957, L36, doi: 10.3847/2041-8213/ad07e2
[Goldman(1982)]Goldman_1982 Goldman, A. 1982, Appl. Opt., 21, 2100, doi: 10.1364/AO.21.002100
[Goorvitch(1994)]Goorvitch_1994 Goorvitch, D. 1994, , 95, 535, doi: 10.1086/192110
[Gordon et al.(2017)]Gordon_2017 Gordon, I. E., Rothman, L. S., Hill, C., et al. 2017, Journal of Quantitative Spectroscopy and Radiative Transfer, 203, 3, doi: https://doi.org/10.1016/j.jqsrt.2017.06.038
[Grevesse et al.(2007)]Grevesse_2007 Grevesse, N., Asplund, M., and Sauval, A. J. 2007, SSRv, 130, 105, doi: 10.1007/s11214-007-9173-7
[Gustafsson et al.(2008)]Gustafsson_2008 Gustafsson, B., Edvardsson, B., Eriksson, K., et al. 2008, A&A, 486, 951, doi: 10.1051/0004-6361:200809724
[Hands and Helled.(2022)]Hands_Helled_2022 Hands, T. O., and Helled, R. 2022, , 509, 894, doi: 10.1093/mnras/stab2967
[Hawkins et al.(2020)]Hawkins_2020 Hawkins, K., Lucey, M., Ting, Y.-S., et al. 2020, , 492, 1164, doi: 10.1093/mnras/stz3132
[Hejazi et al.(2023)]Hejazi_2023 Hejazi, N., Crossfield, I. J. M., Nordlander, T., et al. 2023, , 949, 79, doi: 10.3847/1538-4357/accb97
[Hejazi et al.(2024)]Hejazi_2024 Hejazi, N., Crossfield, I. J. M., Souto, D., et al. 2024, arXiv e-prints, arXiv:2407.07869, doi: 10.48550/arXiv.2407.07869
[Hood et al.(2024)]Hood_2024 Hood, C. E., Mukherjee, S., Fortney, J. J., et al. 2024, arXiv e-prints, arXiv:2402.05345, doi: 10.48550/arXiv.2402.05345
[Ilee et al.(2017)]Ilee_2017 Ilee, J. D., Forgan, D. H., Evans, M. G., et al. 2017, , 472, 189, doi: 10.1093/mnras/stx1966
[Johnson(2010)]Johnson_2010 Johnson, E. M. 2010, PhD thesis, University of Oregon
[Kempton et al.(2018)]Kempton_2018 Kempton, E. M.-R., Bean, J. L., Louie, D. R., et al. 2018, ASP, 130, 114401, doi: 10.1088/1538-3873/aadf6f
[Kobayashi et al.(2020)]Kobayashi_2020 Kobayashi, C., Karakas, A. I., and Lugaro, M. 2020, arXiv, doi: 10.48550/ARXIV.2008.04660
[Kolecki et al.(2022)]Kolecki_2022 Kolecki, J. R., and Wang, J. 2022, The Astronomical Journal, 164, 87, doi: 10.3847/1538-3881/ac7de3
[Kovacs et al.(2019)]Kovacs_2019 Kovacs, G., and Kovacs, T. 2019, A&A, 625, A80, doi: 10.1051/0004-6361/201834325
[Kurucz et al.(1995)]Kurucz_1995 Kurucz, R. L. 1995, ASP, Vol. 78, Astrophysical Applications of Powerful New Databases, ed. S. J. Adelman and W. L. Wiese, 205
[Lew et al.(2024)]Lew_2024 Lew, B. W. P., Roellig, T., Batalha, N. E., et al. 2024, arXiv e-prints, arXiv:2402.05900, doi: 10.48550/arXiv.2402.05900
[Lincowski et al.(2019)]Lincowski_2019 Lincowski, A. P., Lustig-Yaeger, J., and Meadows, V. S. 2019, AJ, 158, 26, doi: 10.3847/1538-3881/ab2385
[Lincowski et al.(2018)]Lincowski_2018 Lincowski, A. P., Meadows, V. S., Crisp, D., et al. 2018, , 867, 76, doi: 10.3847/1538-4357/aae36a
[Line et al.(2021)]Line_2021 Line, M. R., Brogi, M., Bean, J. L., et al. 2021, Nature, 598, 580, doi: 10.1038/s41586-021-03912-6
[Lopez-Valdivia et al.(2021)]Lopez_Valdivia_2021 Lopez-Valdivia, R., Sokal, K. R., Mace, G. N., et al. 2021, , 921, 53, doi: 10.3847/1538-4357/ac1a7b
[Lothringer et al.(2021)]Lothringer_2021 Lothringer, J. D., Rustamkulov, Z., Sing, D. K., et al. 2021, , 914, 12, doi: 10.3847/1538-4357/abf8a9
[Mace et al.(2018)]Mace_2018 Mace, G., Sokal, K., Lee, J.-J., et al. 2018, SPIE, Vol. 10702, Ground-based and Airborne Instrumentation for Astronomy VII, ed. C. J. Evans, L. Simard, and H. Takami, 107020Q, doi: 10.1117/12.2312345
[Madhusudhan(2012)]Madhusudhan_2012 Madhusudhan, N. 2012, The Astrophysical Journal, 758, 36, doi: 10.1088/0004-637x/758/1/36
[Madhusudhan(2019)]Madhusudhan_2019 Madhusudhan, N. 2019, A&A, 57, 617, doi: 10.1146/annurev-astro-081817-051846
[Madhusudhan et al.(2014)]Madhusudhan_2014 Madhusudhan, N., Amin, M. A., and Kennedy, G. M. 2014, , 794, L12, doi: 10.1088/2041-8205/794/1/L12
[Madhusudhan et al.(2017)]Madhusudhan_2017 Madhusudhan, N., Bitsch, B., Johansen, A., and Eriksson, L. 2017, , 469, 4102, doi: 10.1093/mnras/stx1139
[Mansfield et al.(2022)]Mansfield_2022 Mansfield, M., Wiser, L., Stevenson, K. B., et al. 2022, AJ, 163, 261, doi: 10.3847/1538-3881/ac658f
[Masseron et al.(2014)]Masseron_2014 Masseron, T., Plez, B., Van Eck, S., et al. 2014, A&A, 571, A47, doi: 10.1051/0004-6361/201423956
[Maxted et al.(2013)]Maxted_2013 Maxted, P. F. L., Anderson, D. R., Collier Cameron, A., et al. 2013, PASP, 125, 48, doi: 10.1086/669231
[Mikal-Evans et al.(2019)]Mikal-Evans_2019 Mikal-Evans, T., Sing, D. K., Goyal, J. M., et al. 2019, , 488, 2222, doi: 10.1093/mnras/stz1753
[Mikal-Evans et al.(2018)]Mikal-Evans_2018 Mikal-Evans, T., Sing, D. K., Goyal, J. M., et al. 2018, , 156, 6, 283, doi: 10.3847/1538-3881/aaebff
[Molliere et al.(2022)]Molliere_2022 Molliere, P., Molyarova, T., Bitsch, B., et al. 2022, , 934, 74, doi: 10.3847/1538-4357/ac6a56
[Molliere et al.(2019)]Molliere_2019 Molliere, P., and Snellen, I. A. G. 2019, A&A, 622, A139, doi: 10.1051/0004-6361/201834169
[Mordasini et al.(2016)]Mordasini_2016 Mordasini, C., van Boekel, R., Mollière, P., Henning, T., and Benneke, B. 2016, , 832, 41, doi: 10.3847/0004-637X/832/1/41
[Mortier et al.(2013)]Mortier_2013 Mortier, A., Santos, N. C., Sousa, S., et al. 2013, A&A, 551, A112, doi: 10.1051/0004-6361/201220707
[Morton et al.(2015)]Morton_2015 Morton, T. D. 2015, isochrones: Stellar model grid package, Astrophysics Source Code Library, record ascl:1503.010
[Nissen et al.(2018)]Nissen_Gustafsson_2018 Nissen, P. E., and Gustafsson, B. 2018, A&A Rv, 26, 6, doi: 10.1007/s00159-018-0111-3
[Nomura et al.(2023)]Nomura_2023 Nomura, H., Furuya, K., Cordiner, M. A., et al. 2023, ASP, Vol. 534, Protostars and Planets VII, ed. S. Inutsuka, Y. Aikawa, T. Muto, K. Tomida, and M. Tamura, 1075
[Oberg et al.(2011)]Oberg_2011 Oberg, K. I., Murray-Clay, R., and Bergin, E. A. 2011, AJ, 743, L16, doi: 10.1088/2041-8205/743/1/l16
[Pacetti et al.(2022)]Pacetti_2022 Pacetti, E., Turrini, D., Schisano, E., et al. 2022, , 937, 36, doi: 10.3847/1538-4357/ac8b11
[Park et al.(2014)]Park_2014 Park, C., Jaffe, D. T., Yuk, I.-S., et al. 2014, SPIE, Vol. 9147, Ground-based and Airborne Instrumentation for Astronomy V, ed. S. K. Ramsay, I. S. McLean, and H. Takami, 91471D, doi: 10.1117/12.2056431
[Petigura et al.(2018)]Petigura_2018 Petigura, E. A., Marcy, G. W., Winn, J. N., et al. 2018, AJ, 155, 89, doi: 10.3847/1538-3881/aaa54c
[Petrovich et al.(2015a)]Petrovich_2015a Petrovich, C. 2015a, , 805, 75, doi: 10.1088/0004-637X/805/1/75
[Petrovich et al.(2015b)]Petrovich_2015b Petrovich, C. 2015b, , 799, 27, doi: 10.1088/0004-637X/799/1/27
[Plez et al.(2012)]Plez_2012 Plez, B. 2012, Turbospectrum: Code for spectral synthesis, Astrophysics Source Code Library, record ascl:1205.004
[Polanski et al.(2022)]Polanski_2022 Polanski, A. S., Crossfield, I. J. M., Howard, A. W., Isaacson, H., and Rice, M. 2022, https://arxiv.org/abs/2207.13662
[Pollacco et al.(2006)]Pollacco_2006 Pollacco, D. L., Skillen, I., Collier Cameron, A., et al. 2006, PASP, 118, 1407, doi: 10.1086/508556
[Prantzos et al.(2018)]Prantzos_2018 Prantzos, N., Abia, C., Limongi, M., Chieffi, A., and Cristallo, S. 2018, , 476, 3432, doi: 10.1093/mnras/sty316
[Reggiani et al.(2024)]Reggiani_2024 Reggiani, H., Galarza, J. Y., Schlaufman, K. C., et al. 2024, AJ, 167, 45, doi: 10.3847/1538-3881/ad0f93
[Reggiani et al.(2022)]Reggiani_2022 Reggiani, H., Schlaufman, K. C., Healy, B. F., Lothringer, J. D., and Sing, D. K. 2022, AJ, 163, 159, doi: 10.3847/1538-3881/ac4d9f
[Romano(2022)]Romano_2022 Romano, D. 2022, The Astronomy and Astrophysics Review, 30, doi: 10.1007/s00159-022-00144-z
[Romano et al.(2020)]Romano_2020 Romano, D., Franchini, M., Grisoni, V., et al. 2020, A&A, 639, A37, doi: 10.1051/0004-6361/202037972
[Rosenthal et al.(2021)]Rosenthal_2021 Rosenthal, L. J., Fulton, B. J., Hirsch, L. A., et al. 2021, , 255, 8, doi: 10.3847/1538-4365/abe23c
[Rothman et al.(2021)]Rothman_2021 Rothman, L. S. 2021, Nature Reviews Physics, 3, 302, doi: 10.1038/s42254-021-00309-2
[Ryabchikova et al.(2015)]Ryabchikova_2015 Ryabchikova, T., Piskunov, N., Kurucz, R. L., et al. 2015, PhyS, 90, 054005, doi: 10.1088/0031-8949/90/5/054005
[Ryabchikova et al.(2022)]Ryabchikova_2022 Ryabchikova, T., Piskunov, N., and Pakhomov, Y. 2022, Atoms, 10, doi: 10.3390/atoms10040103
[Schneider and Bitsch(2021a)]Schneider_Bitsch_2021a Schneider, A. D., and Bitsch, B. 2021a, A&A, 654, A71, doi: 10.1051/0004-6361/202039640
[Schneider and Bitsch(2021b)]Schneider_Bitsch_2021b Schneider, A. D., and Bitsch, B.. 2021b, A&A, 654, A72, doi: 10.1051/0004-6361/202141096
[Seligman et al.(2022)]Seligman_2022 Seligman, D. Z., Rogers, L. A., Cabot, S. H. C., et al. 2022, PSJ, 3, 150, doi: 10.3847/PSJ/ac75b5
[Shchukina et al.(2016)]Shchukina_2016 Shchukina, N., Sukhorukov, A., and Trujillo Bueno, J. 2016, A&A, 586, A145, doi: 10.1051/0004-6361/201526452
[Sing et al.(2019)]Sing_2019 Sing, D. K., Lavvas, P., Ballester, G. E., et al. 2019, AJ, 158, 91, doi: 10.3847/1538-3881/ab2986
[Smith et al.(2015)]Smith_2015 Smith, R. L., Pontoppidan, K. M., Young, E. D., and Morris, M. R. 2015, , 813, 120, doi: 10.1088/0004-637x/813/2/120
[Sneden et al.(2012)]Sneden_2012 Sneden, C., Bean, J., Ivans, I., Lucatello, S., and Sobeck, J. 2012, MOOG: LTE line analysis and spectrum synthesis, Astrophysics Source Code Library, record ascl:1202.009
[Sneden et al.(2014)]Sneden_2014 Sneden, C., Lucatello, S., Ram, R. S., Brooke, J. S. A., and Bernath, P. 2014, , 214, 26, doi: 10.1088/0067-0049/214/2/26
[Sneden(1973)]Sneden_1973 Sneden, C. A. 1973, PhD thesis, University of Texas, Austin
[Souto et al.(2017)]Souto_2017 Souto, D., Cunha, K., García-Hernández, D. A., et al. 2017, , 835, 239, doi: 10.3847/1538-4357/835/2/239
[Souto et al.(2018)]Souto_2018 Souto, D., Unterborn, C. T., Smith, V. V., et al. 2018, , 860, L15, doi: 10.3847/2041-8213/aac896
[Souto et al.(2022)]Souto_2022 Souto, D., Cunha, K., Smith, V. V., et al. 2022, , 927, 123, doi: 10.3847/1538-4357/ac4891
[Ting et al.(2018)]Ting_2018 Ting, Y.-S., Conroy, C., Rix, H.-W., and Asplund, M. 2018, , 860, 159, doi: 10.3847/1538-4357/aac6c9
[Turrini et al.(2021)]Turrini_2021 Turrini, D., Schisano, E., Fonte, S., et al. 2021, , 909, 40, doi: 10.3847/1538-4357/abd6e5
[Unterborn et al.(2017)]Unterborn_2017 Unterborn, C. T., Hull, S. D., Stixrude, L. P., et al. 2017, arXiv, https://arxiv.org/abs/1706.10282
[Unterborn et al.(2014)]Unterborn_2014 Unterborn, C. T., Kabbes, J. E., Pigott, J. S., Reaman, D. M., and Panero, W. R. 2014, , 793, 124, doi: 10.1088/0004-637x/793/2/124
[Valenti and Fischer(2005)]Valenti_Fischer_2005 Valenti, J. A., and Fischer, D. A. 2005, in Protostars and Planets V Posters, 8592
[Vogt et al.(1994)]Vogt_1994 Vogt, S. S., Allen, S. L., Bigelow, B. C., et al. 1994, SPIE, Vol. 2198, Instrumentation in Astronomy VIII, ed. D. L. Crawford and E. R. Craine, 362, doi: 10.1117/12.176725
[Wang and Fischer(2015)]Wang_Fischer_2015 Wang, J., and Fischer, D. A. 2015, AJ, 149, 14, doi: 10.1088/0004-6256/149/1/14
[Woods(2009)]Woods_2009 Woods, P. M. 2009, arXiv e-prints, arXiv:0901.4513, doi: 10.48550/arXiv.0901.4513
[Woods and Willacy(2009)]Woods_Willacy_2009 Woods, P. M., and Willacy, K. 2009, , 693, 1360, doi: 10.1088/0004-637X/693/2/1360
[Wu and Lithwick(2011)]Wu_Lithwick_2011 Wu, Y., and Lithwick, Y. 2011, , 735, 109, doi: 10.1088/0004-637X/735/2/109
[Wu and Murray(2003)]Wu_Murray_2003 Wu, Y., and Murray, N. 2003, , 589, 605, doi: 10.1086/374598
[Xuan et al.(2024a)]Xuan_2024 Xuan, J. W., Wang, J., Finnerty, L., et al. 2024, , 962, 10, doi: 10.3847/1538-4357/ad1243
[Xuan et al.(2024b)]Xuan_2024b Xuan, J. W., Hsu, C.-C., Finnerty, L., et al. 2024, arXiv e-prints, arXiv:2405.13128, doi: 10.48550/arXiv.2405.13128
[Yoshida et al.(2024)]Yoshida_2024 Yoshida, T. C., Nomura, H., Furuya, K., et al. 2024, arXiv e-prints, arXiv:2403.00626, doi: 10.48550/arXiv.2403.00626
[Zhang et al.(2017)]Zhang_2017 Zhang, K., Bergin, E. A., Blake, G. A., Cleeves, L. I., and Schwarz, K. R. 2017, Nature Astronomy, 1, 0130, doi: 10.1038/s41550-017-0130
[Zhang et al.(2021a)]Zhang_2021a Zhang, Y., Snellen, I. A. G., and Molliere, P. 2021a, Astronomy and Astrophysics, 656, A76, doi: 10.1051/0004-6361/202141502
[Zhang et al.(2021b)]Zhang_2021b Zhang, Y., Snellen, I. A. G., Bohn, A. J., et al. 2021b, Nature, 595, 370, doi: 10.1038/s41586-021-03616-x
[Zink and Howard(2023)]Zink_2023 Zink, J. K., and Howard, A. W. 2023, , 956, L29, doi: 10.3847/2041-8213/acfdab
|
http://arxiv.org/abs/2409.03284v1 | 20240905064914 | iText2KG: Incremental Knowledge Graphs Construction Using Large Language Models | [
"Yassir Lairgi",
"Ludovic Moncla",
"Rémy Cazabet",
"Khalid Benabdeslem",
"Pierre Cléau"
] | cs.AI | [
"cs.AI",
"cs.CL",
"cs.IR"
] |
1 INSA Lyon, CNRS, Université Claude Bernard Lyon 1, LIRIS, UMR5205, 69621 Villeurbanne, France {ludovic.moncla, remy.cazabet, khalid.benabdeslem}@liris.cnrs.fr
2 GAUC, Lyon, France {yassir.lairgi, pierre.cleau}@auvalie.com
iText2KG: Incremental Knowledge Graphs Construction Using Large Language Models
Yassir LAIRGI^1,2 (0000-0002-7284-5489), Ludovic MONCLA^1 (0000-0002-1590-9546), Rémy CAZABET^1 (0000-0002-9429-3865), Khalid BENABDESLEM^1, Pierre CLÉAU^2
September 9, 2024
=============================================================================================================================================
§ ABSTRACT
Most available data is unstructured, making it challenging to access valuable information. Automatically building Knowledge Graphs (KGs) is crucial for structuring data and making it accessible, allowing users to search for information effectively. KGs also facilitate insights, inference, and reasoning.
Traditional NLP methods, such as named entity recognition and relation extraction, are key in information retrieval but face limitations, including the use of predefined entity types and the need for supervised learning.
Current research leverages large language models' capabilities, such as zero- or few-shot learning. However, unresolved and semantically duplicated entities and relations still pose challenges, leading to inconsistent graphs and requiring extensive post-processing. Additionally, most approaches are topic-dependent.
In this paper, we propose iText2KG [The code and the dataset are available at <https://github.com/AuvaLab/itext2kg>], a method for incremental, topic-independent KG construction without post-processing. This plug-and-play, zero-shot method is applicable across a wide range of KG construction scenarios and comprises four modules: Document Distiller, Incremental Entity Extractor, Incremental Relation Extractor, and Graph Integrator and Visualization. Our method demonstrates superior performance compared to baseline methods across three scenarios: converting scientific papers to graphs, websites to graphs, and CVs to graphs.
§ INTRODUCTION
In the contemporary era, most data is unstructured, leading to substantial information loss if not effectively harnessed <cit.>. This unstructured data lacks a predefined format, posing significant challenges for traditional data processing methodologies. Consequently, organizations must employ advanced text understanding and information extraction techniques to analyze and extract meaningful insights from this data effectively.
Text understanding and information extraction are key tasks in Natural Language Processing (NLP) for automatically processing data from unstructured text documents.
The rise of Transformer architectures and pre-trained large language models (LLMs) opens new perspectives for extracting and structuring information from vast amounts of natural language texts <cit.>.
One main aspect concerns Knowledge Graph (KG) construction. KGs provide structured representations of knowledge by capturing relationships between entities, and they hold considerable advantages for analyzing text data collections and inferring knowledge from structured heterogeneous data. For instance, KGs can merge diverse data from multiple sources, offering a cohesive information perspective. They can also give an additional level of explainability to the analysis of text corpora.
Named Entity Recognition, Relation Extraction, and Entity Resolution are NLP techniques usually utilized to transform unstructured text into structured data, capturing entities, their connections, and associated attributes <cit.>. However, these methods encounter several limitations <cit.>. They are frequently restricted to predefined entities and relationships or depend on specific ontologies and mostly rely on supervised learning methods, necessitating extensive human annotation.
To address these challenges, we aim to leverage LLMs in constructing KGs. Recent advancements in LLMs have shown improved performance across a wide range of NLP tasks, including knowledge graph completion, ontology refinement, and question answering, offering promising prospects for KG construction <cit.>.
LLMs also show great ability for few-shot learning, enabling plug-and-play solutions, and eliminating the necessity for extensive training or fine-tuning. They can be used to extract knowledge across diverse domains due to their training in a wide range of information sources <cit.>.
Consequently, recent research has started utilizing advancements in LLMs, especially their capabilities in few-shot learning in KGs construction tasks. However, unresolved and semantically duplicated entities and relations still pose significant challenges, leading to inconsistent graphs that require extensive post-processing. These inconsistencies can manifest as redundancies, ambiguities, and a real difficulty for graph extension. Additionally, many current approaches are topic-dependent, meaning their effectiveness heavily relies on the specific use case they are designed to handle. This dependency limits the generalizability of these methods across different domains, necessitating customized solutions for each new topic area.
In this paper, we propose iText2KG, a zero-shot method to construct consistent KGs from raw documents incrementally, using an LLM. It comprises four modules: 1) Document Distiller reformulates the raw documents, by taking a schema or a blueprint, into predefined and semantic blocks using LLMs. The schema operates like a predefined JSON structure, directing the language model to extract specific textual information associated with particular keys from each document, 2) iEntities Extractor takes the semantic blocks and not only identifies unique semantic entities within the semantic blocks but also resolves any ambiguities, ensuring that each entity is clearly defined and distinguished from others, 3) iRelation Extractor processes the resolved entities along with the semantic blocks to detect the semantically unique relationships. Further details are in the next sections. The final module employs Neo4j [<https://neo4j.com/>] to represent these relationships and entities in a graph format visually.
§ RELATED WORKS
LLM-based solutions for building KGs can be categorized according to three paradigms: ontology-guided, fine-tuning, and zero- or few-shot learning.
The AttacKG+ method, a fully automatic LLM-based framework for constructing attack KGs and capturing the progressive stages of cyber attacks, was introduced by <cit.>.
The framework consists of four modules: rewriter, parser, identifier, and summarizer. The rewriter filters out redundant information and organizes report content into sections to preserve key knowledge, pre-cleans data, and sequences events chronologically. Guided by an ontology, the parser extracts threat actions using a triplet model (subject, action, object). The identifier matches these behavior graphs and rewritten sections to the appropriate format. Finally, the summarizer provides an overview of the situation and state at the end of each tactical stage. A theme-specific KG (ThemeKG) was proposed <cit.>, constructed from a theme-specific corpus using an unsupervised framework (TKGCon) to address two main issues: limited information granularity and deficiency in timeliness. This approach generates KGs with accurate entities and relations by leveraging common sense knowledge from Wikipedia and LLMs for ontology guidance. Their model surpasses GPT-4 in performance due to its consistently precise identification of entities and relations
Text2KGBench, a benchmark designed to evaluate the capabilities of language models to generate KGs from natural language text guided by an ontology, was presented by <cit.>. They define seven evaluation metrics to measure fact extraction performance, ontology conformance, and hallucinations. A semi-automatic method for constructing KGs using open-source LLMs was introduced in recent research <cit.>. Their pipeline includes formulating competency questions (CQs) and developing an ontology derived from them. To assess the accuracy of the generated answers, they devised a judge LLM, which evaluates the content against ground truth. One major challenge with these proposed methods is their difficulty in generalizing their applicability to diverse KG construction scenarios due to their ontology dependency. The Wikipedia concept graph is also not exhaustive, particularly for country-specific concepts. For instance, it may not adequately cover terms like "French Research Collaboration Tax Credit".
An LLM was employed for building a KG from unstructured open-source threat intelligence <cit.>. This approach involves generating a dataset utilizing the zero-shot capability of GPT-3.5. Subsequently, this dataset is utilized for fine-tuning a smaller language model. One major challenge of this method is adapting it to different KG construction scenarios. In particular, few-shot methods are more resource-efficient than fine-tuned solutions <cit.>.
An iterative LLM prompting-based pipeline for automatically generating knowledge graphs, which bypasses the need for predefined sets or external ontologies, was proposed by <cit.>. This pipeline employs a sequence of well-formed LLM prompts for each stage, enabling the identification of relevant entities, extracting their descriptions and types, and identifying meaningful relationships. The authors proposed an approach to entity/relation resolution using semantic aggregation and LLM prompting. It starts with semantic aggregation, calculating similarity scores for entities and relations based on label similarity, entity type similarity, and description similarity using methods like Levenshtein distance and cosine similarity with the Universal Sentence Encoder model. The entities and relations are aggregated if their scores exceed predefined thresholds. Even though the proposed approach presents several advantages, it has certain limitations: (1) The entity/relation resolution phase aggregates nodes and relations having the same meaning, and then the LLM suggests a representative for each cluster based on the cluster elements. This could hinder the precision of the graph, especially if "bike" and "motorcycle" need to be kept separate but the model nonetheless merges them into "vehicle." (2) The latter phase involves post-processing, which could be computationally intensive. (3) The post-processing phase assumes that entities and relations have already been extracted. Hence, if entities are not resolved before relation extraction, redundant relations from redundant entities could arise, worsening the quality of the relation extraction.
A comprehensive quantitative and qualitative evaluation of LLMs for KG construction and reasoning was provided <cit.>, using eight diverse datasets across four representative tasks: entity and relation extraction, event extraction, link prediction, and question-answering. Key findings reveal that while GPT-4 performs well in KG construction tasks, it excels even more in reasoning tasks, sometimes surpassing fine-tuned models. The paper also proposes AutoKG, a multi-agent-based approach that utilizes LLMs and external sources for KG construction and reasoning.
§ INCREMENTAL TEXT2KG
This work aims to develop a plug-and-play solution for constructing KGs from documents with resolved entities and relations as output. Adopting a 'zero-shot' approach is essential to ensure the solution's applicability across various KG construction scenarios. This approach means that the prompts used to generate the KG do not require prior examples or predefined ontologies.
§.§ Problem Formulation
A graph can be defined as 𝒢 = (ℰ, ℛ) where ℰ is the set of nodes and ℛ denotes the set of edges <cit.>. Considering the difficulty in merging similar concepts, we defined two constraints for the solution:
* An entity e_i∈ℰ (the set of entities) and a relation r_k∈ℛ (the set of relations) should each describe a semantically unique concept.
* The sets of entities and relations should contain semantically unique elements. This means each entity and relation within the knowledge graph must be distinct and unique, with no duplication or semantic overlaps.
These constraints can be mathematically formulated as follows.
∀ e_i, e_j ∈ℰ, i ≠ j ⇒ e_i ≠ e_j
∀ r_k, r_l ∈ℛ, l ≠ k ⇒ r_k ≠ r_l
§.§ Proposed method
We propose the iText2KG approach, composed of four modules (see Figure <ref>): Document Distiller, Incremental Entities Extractor, Incremental Relations Extractor, and Neo4j Graph Integrator.
Each module fulfills a distinct role in constructing the KG. Notably, the entity extraction and relation extraction tasks are separated, following results described in <cit.> showing that this separation positively impacts performance. Further details of modules 1 to 3 are given below, with the fourth module serving to visualize the graph.
§.§.§ Module 1 - Document Distiller:
This module uses LLMs to rewrite input documents into semantic blocks, considering a predefined schema or blueprint. It is important to note that the schema is not an ontology but a blueprint that biases the LLM towards specific classes while maintaining flexibility in others. Practically, the schema functions like a predefined JSON, instructing the LLM to extract particular values (textual information) for specific keys from each document. Some examples of blueprints are available in the Github repository. For each document, we will obtain a JSON semi-filled with the desired information if it exists in the document. Then, we aggregate all these semi-filled JSONs to form the semantic blocks of the documents. We have used Langchain's JSON Parser[<https://python.langchain.com/v0.1/docs/modules/model_io/output_parsers/types/json/>]
to define the schema along with the documents as context. The main goals of this module are: (a) To improve the signal-to-noise ratio by reducing noise that may pollute the graph with redundant information. (b) To guide the graph construction process using the schema, especially for concept keys. For example, for a scientific article, we could extract the “title” and the “authors” and add relations like “HAS TITLE” and “HAS AUTHORS” in addition to the semantic information. To ensure the applicability of our solution across various use cases, the schema is an input that depends on user preferences and the particularity of the use case. The idea of reformulating raw documents to enhance the graph construction process has been demonstrated in prior work <cit.>. The aforementioned papers introduced rewriter modules that depend on their specific use cases, whereas our module is adaptable to many use cases.
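To make the role of the blueprint concrete, the following sketch shows what a schema-driven distillation step could look like in Python. The keys and the distill helper are illustrative placeholders, not the actual blueprints shipped in the iText2KG repository, and the LLM call is simulated by hard-coded outputs.

```python
# Hypothetical blueprint for the "scientific article to KG" scenario.
# The keys are illustrative only; the real blueprints are in the GitHub repository.
ARTICLE_BLUEPRINT = {
    "title": "The title of the paper",
    "authors": "The list of authors",
    "abstract": "A short summary of the paper",
    "contributions": "The main contributions claimed by the paper",
}

def distill(llm_output: dict, blueprint: dict) -> dict:
    """Keep only the blueprint keys that the LLM actually filled in,
    producing one semi-filled JSON per document (or chunk)."""
    return {k: v for k, v in llm_output.items() if k in blueprint and v}

# Simulated LLM outputs for two chunks of the same article.
llm_outputs = [
    {"title": "iText2KG", "authors": ["Y. Lairgi"], "venue": "key outside the blueprint"},
    {"abstract": "Incremental KG construction with LLMs.", "contributions": None},
]

# Aggregating the semi-filled JSONs forms the semantic blocks of the document.
semantic_blocks = [distill(out, ARTICLE_BLUEPRINT) for out in llm_outputs]
print(semantic_blocks)
# [{'title': 'iText2KG', 'authors': ['Y. Lairgi']}, {'abstract': 'Incremental KG construction with LLMs.'}]
```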
§.§.§ Module 2 – Incremental Entities Extractor:
The Incremental Entities Matcher (iEntities Matcher) iterates over all the Semantic Blocks and extracts the Global Document Entities. The main algorithm of iEntities Matcher is presented in Figure <ref>. Initially, entities are extracted from the first semantic block (document) d_0 using an LLM, forming the global entity set ℰ under the assumption that these entities are pairwise distinct for this first iteration only.
Considering the constraint <ref>, the LLM is prompted to extract entities representing one unique concept to avoid semantically mixed entities (prompts are presented in the GitHub repository).
For subsequent documents d in D, the algorithm extracts local entities ℰ_d. It then attempts to match these local entities with the global entities in ℰ. If a local entity e_i is found in ℰ, it is added to the matched set ℰ_d,matched. If not, the algorithm searches for a similar entity in ℰ using a cosine similarity measure with a predefined threshold. If no match is found, the local entity is added to ℰ_d,matched; otherwise, the best matching global entity e'_i (based on maximum similarity) is added. The global entity set ℰ is then updated by unifying it with ℰ_d,matched. This process is repeated for each document in D, resulting in a comprehensive global entity set ℰ.
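The loop described above can be summarized by the following minimal sketch. The extract_entities and embed callables stand in for the LLM extraction prompt and the embedding model, and the 0.9 threshold in the toy example is arbitrary; none of these names are taken from the iText2KG code base.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def i_entities_matcher(documents, extract_entities, embed, threshold=0.7):
    """Incrementally build the global entity set E over a list of documents.
    extract_entities(doc) -> list of entity labels (an LLM call in practice).
    embed(label)          -> embedding vector of the label."""
    global_entities = list(extract_entities(documents[0]))  # assumed pairwise distinct
    for doc in documents[1:]:
        matched = []
        for e in extract_entities(doc):
            if e in global_entities:                 # exact match with a global entity
                matched.append(e)
                continue
            # otherwise, look for the most similar global entity
            best_sim, best_g = max((cosine(embed(e), embed(g)), g) for g in global_entities)
            matched.append(best_g if best_sim >= threshold else e)
        # union with the global set, preserving insertion order
        global_entities = list(dict.fromkeys(global_entities + matched))
    return global_entities

# Toy run with a hand-made embedding table.
vecs = {"France": [1.0, 0.0], "french republic": [0.98, 0.05], "Paris": [0.0, 1.0]}
docs = [["France", "Paris"], ["french republic"]]
print(i_entities_matcher(docs, extract_entities=lambda d: d,
                         embed=lambda e: np.array(vecs[e]), threshold=0.9))
# ['France', 'Paris'] : "french republic" is resolved to the existing "France" entity
```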
§.§.§ Module 3 – Incremental Relations Extractor:
The Global Document Entities ℰ are provided as context along with each Semantic Block to the Incremental Relations Matcher (iRelations Matcher) to extract the Global Document Relations. The same approach used for the iEntities Matcher applies here. We have observed different behaviors in relation extraction depending on whether global or local entities are used as context with the Semantic Block for the LLM. When global entities are provided as context, the LLM extracts both the relations directly stated and implied by the Semantic Block, especially for entities not explicitly present in the Semantic Block. This enriches the graph with potential information but increases the likelihood of irrelevant relations. Conversely, when locally matched entities are provided as context, the LLM only extracts the relations directly stated by the context. This approach reduces the richness of the graph but also lowers the probability of irrelevant relations. The two versions of the iRelations Matcher are presented in Figure <ref>. This result will be further discussed in Section <ref>.
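The skeleton below illustrates the two context strategies discussed above. It is a schematic sketch only: extract_relations stands for the LLM prompt, entities_in_block for the local matching step, and the duplicate test would, in the real pipeline, rely on the same embedding-based similarity as for entities.

```python
def i_relations_matcher(semantic_blocks, global_entities, extract_relations,
                        entities_in_block, use_global_context=True):
    """Incrementally build the set of global relations (head, label, tail).
    extract_relations(block, context_entities) -> list of triples (LLM call in practice).
    entities_in_block(block, global_entities)  -> global entities present in the block."""
    global_relations = []
    for block in semantic_blocks:
        # Global context enriches the graph (implied relations) at the cost of
        # possible irrelevant triples; local context is stricter but poorer.
        context = (global_entities if use_global_context
                   else entities_in_block(block, global_entities))
        for triple in extract_relations(block, context):
            if triple not in global_relations:   # embedding-based matching in practice
                global_relations.append(triple)
    return global_relations
```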
§.§.§ Module 4 – Graph Integrator:
The Global Document Entities and the Global Document Relations are fed into Neo4j to construct the knowledge graph.
§ EXPERIMENTS
We chose GPT-4 in all our experiments due to its performance in KG construction and reasoning capabilities, as demonstrated by <cit.>. Notably, GPT-4 achieves near fine-tuned state-of-the-art performance, even in zero-shot scenarios. To validate our method, it is essential first to evaluate Module 1 to ensure the concordance of the extracted information with the schema and the semantics of the input documents. Moreover, evaluating modules 2 and 3 regarding the extracted triplets and the quality of entity/relation resolution is also important. To ensure the applicability of our method across different KG construction scenarios, we have adopted three use cases: website to KG, scientific article to KG, and Curriculum Vitae to KG.
We have adapted the metrics proposed by <cit.> for Module 1 to our use cases. Hence, we propose the following metrics:
* Schema consistency: Evaluate whether the content of the rewritten text matches the input schema (the blueprint). For each key presented in the schema, we define C_s(k) as the number of elements correctly matched to the schema related to the key k. I_s(k) as the number of elements that were added but did not belong to the schema. The consistency score for a key in the schema is:
SC(k) = (C_s(k) - I_s(k)) / T_s(k)
where T_s(k) is the total number of elements in the schema corresponding to the key k, and SC(k) = 0 whenever C_s(k) < I_s(k).
Hence, the schema consistency score is:
SC = ∑_k ∈ K SC(k) / card(K)
where K is the set of keys of the schema (a small worked implementation of this score is given after this list).
* Information consistency: Evaluate whether the rewritten text's content matches the original document's semantics, categorized as follows: very different (<30%), medium (30-60%), largely consistent (60-90%), and fully consistent (>90%).
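For reference, the schema consistency score defined above translates directly into a few lines of Python; the per-key counts (C_s, I_s, T_s) are assumed to come from a manual or automated comparison of the rewritten text against the blueprint.

```python
def schema_consistency(per_key_counts):
    """per_key_counts maps each schema key k to a tuple (C_s, I_s, T_s):
    correctly matched elements, spurious added elements, total schema elements."""
    scores = []
    for c_s, i_s, t_s in per_key_counts.values():
        scores.append(0.0 if c_s < i_s else (c_s - i_s) / t_s)
    return sum(scores) / len(per_key_counts)

# Example: the "title" key is perfectly matched, "authors" has one spurious element.
print(schema_consistency({"title": (1, 0, 1), "authors": (3, 1, 4)}))  # 0.75
```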
For the second and third modules, it is important to ensure that the extracted entities and relations are resolved and that the extracted triplets are relevant to the input documents. Therefore, we propose the following metrics:
* Triplet Extraction Precision: Evaluate the consistency of the triplets with the corresponding text regardless of the entity/relation resolution process. It is important to note that a relevant triplet is implied and not necessarily directly stated by the text. We define the precision score as the number of extracted relevant triplets divided by the total number of extracted triplets.
* Entity/Relation Resolution False Discovery Rate: Evaluate the proportion of unresolved (false positive) entities or relations among the total extracted entities or relations. Specifically, we calculate the ratio of unresolved entities or relations to the total number of extracted entities or relations. This metric provides a clear indication of the reliability of the entity and relation extraction process by highlighting the proportion of errors (unresolved entities/relations) within the total extractions.
§.§ Datasets and Baseline Methods
To evaluate iText2KG, we have generated 5 CVs using GPT-4 and selected 5 company websites and 5 scientific articles. It is important to note that we have extracted the textual information from the websites, which will serve as input to our model.
To evaluate the consistency of the triplets extracted by the iEntities and iRelations Matchers, we used the annotated dataset from <cit.>. We observed that this dataset is not exhaustive for triplet extraction, leading us to conduct manual checks for triplets not present in the dataset. This manual check combined with the aforementioned dataset composes the ground truth. To assess the False Discovery Rate of the entity/relation resolution process, we performed the KG construction process using different baseline methods.
We have compared our method against baseline methods including Graph Construction using OpenAI Function Method[<https://github.com/tomasonjo/blogs/blob/master/llm/openaifunction_constructing_graph.ipynb>]
, Langchain[<https://python.langchain.com/v0.1/docs/use_cases/graph/constructing/>]
, and LlamaIndex[<https://docs.llamaindex.ai/en/stable/examples/property_graph/property_graph_basic/>].
§.§ First Module Evaluation Results
§.§.§ Schema Consistency
Table <ref> demonstrates that the Document Distiller achieves high schema consistency across various document types. Scientific articles and CVs exhibit the highest schema consistency scores, indicating the module's capability to handle structured information, particularly for documents where the data is primarily organized using titles. While still achieving a strong score of 0.94, websites present a slightly lower consistency, which may be attributed to web content's varied and less structured nature. These results highlight the robustness and adaptability of the Document Distiller in processing and extracting structured information from diverse document types.
§.§.§ Information Consistency
Figure <ref> illustrates the information consistency across different types of documents: CVs, scientific articles, and websites. For CVs, the majority of the information (74.5%) is fully consistent, with 25.5% being largely consistent and no medium consistency. This indicates that the rewritten text closely matches the semantics of the original content for CVs. This is because CVs are primarily written in clear and concise phrases, making it easier for the LLM to capture the semantics. In the case of scientific articles, 57.1% of the information is fully consistent, and 42.9% is largely consistent, showing a high degree of accuracy in preserving the original semantics, though slightly less than CVs. This is predictable, especially since scientific articles are written in scientific English with more complex phrases. Websites have 56.0% of information fully consistent, 24.0% largely consistent, and 20.0% medium consistency. This may be due to the unstructured nature of web content, which poses a greater challenge for accurate semantic rewriting.
§.§ Second and Third Modules Evaluation Results
§.§.§ Triplet Extraction
Table <ref> shows different behaviors in relation extraction depending on whether global or local entities are used as context with the Semantic Block for the LLM. The precision of relevant triplets when global entities are fed as context is 10% lower than that of relevant triplets when local entities are fed as context. When global entities are used as context, the LLM extracts relations explicitly mentioned and implied within the Semantic Block. This results in a richer graph with more potential information and a higher chance of irrelevant relations. On the other hand, using locally matched entities as context leads the LLM to extract only the directly stated relations, resulting in a less enriched graph but with a lower likelihood of irrelevant relations.
This presents a trade-off that depends on the use case. We leave it to the user to decide whether to accept a 10% decrease in precision in exchange for an enriched graph or to gain 10% precision with a less enriched graph.
§.§.§ Entity/Relation Resolution
To the best of our knowledge, LlamaIndex constructs unconnected sub-graphs with edge-level and node-level textual information for retrieval-augmented generation (RAG); hence, we did not evaluate LlamaIndex against our method. From Table <ref> and Table <ref>, we conclude that our method delivers superior results for the entity and relation resolution process across three different KG construction scenarios: scientific articles to KG, CVs to KG, and websites to KG. Additionally, the results indicate that when the number of input documents is small and they are structured with clear, non-complex phrases, the LLM performs well in entity and relation resolution, as demonstrated by the CVs to KG process.
Moreover, the False Discovery Rates of unresolved entities and relations for websites to KG are higher than in the other KG construction scenarios. This is due to the larger number of documents (chunks) and the unstructured nature of website textual information. Consequently, without an effective resolution process, the LLM struggles to map similar entities or relations. Therefore, as long as the number of documents (chunks) is large and the text is unstructured with complex language, the entity/relation resolution process becomes crucial for building consistent KGs.
§.§.§ Threshold Estimation
To estimate the threshold for merging entities and relationships based on cosine similarity, a dataset of 1,500 similar entity pairs and 500 relationships, inspired by various domains (e.g., news, scientific articles, HR practices), was generated using GPT-4 and is available in the GitHub repository. Entities and relationships were vectorized using the pre-trained model [<https://platform.openai.com/docs/guides/embeddings/embedding-models>]. The mean and standard deviation of cosine similarity for these datasets were then calculated (Table <ref>). An upper threshold (e.g., 0.7) was chosen to ensure high precision, while a lower threshold reduced resolution specificity.
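The following sketch reproduces the logic of this estimation with a toy embedding table; the real computation uses a sentence-embedding model on the generated pairs, and the final merging rule shown here (flooring the threshold at 0.7, slightly below the mean similarity) is only one possible choice, not necessarily the exact rule used for the reported results.

```python
import numpy as np

def similarity_stats(pairs, embed):
    """Mean and standard deviation of the cosine similarity over pairs of
    labels known to be semantically similar; embed(label) returns a vector."""
    sims = []
    for a, b in pairs:
        u, v = np.asarray(embed(a), float), np.asarray(embed(b), float)
        sims.append(float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))))
    return float(np.mean(sims)), float(np.std(sims))

# Toy embedding table standing in for a real embedding model.
vecs = {"car": [1.0, 0.1], "automobile": [0.95, 0.15], "bike": [0.2, 1.0], "bicycle": [0.25, 0.97]}
mu, sigma = similarity_stats([("car", "automobile"), ("bike", "bicycle")], lambda w: vecs[w])

# One possible merging rule (an assumption): threshold close to the mean
# similarity of known-similar pairs, floored at a conservative value.
threshold = max(0.7, mu - sigma)
print(round(mu, 3), round(sigma, 3), round(threshold, 3))
```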
To illustrate the results of KG construction, Figure <ref> presents a comparison between the baseline methods and iText2KG across three distinct scenarios. The observations are as follows:
* The baseline methods reveal the presence of isolated nodes without relations in all three KG construction scenarios. This phenomenon may be attributed to the simultaneous execution of entity extraction and relations extraction, which can induce hallucinatory effects in language models, leading to a "forgetting" effect. This observation supports the findings of <cit.>, which suggest that separating the processes of entity and relation extraction can enhance performance.
* From the 'Website to KG' scenario, an increase in the volume of input documents is associated with the emergence of noisy nodes within the graph. This underscores the critical need for Module 1 to effectively refine and distill the input data.
* The method demonstrates improved entity and relation resolution across the three KG construction scenarios. According to the data in Table <ref> and Table <ref>, when input documents are fewer and composed of straightforward, non-complex phrases, the language model shows high efficiency in entity and relation resolution, as evidenced in the 'CVs to KG' process. Conversely, the challenges increase with more complex and voluminous data sets, as shown in the 'Website to KG' scenario.
Moreover, it is important to highlight the effect of the chunking size of the input document and the threshold on KG construction. Input documents to the Document Distiller can be independent documents or chunks. If the chunk size is smaller, the semantic blocks will capture more specific details from the documents, and vice versa.
§ CONCLUSION
In this paper, we introduced iText2KG, an approach for incremental KG construction leveraging the zero-shot capabilities of LLMs. Our methodology addressed limitations inherent in traditional KG construction processes, which typically depend on predefined ontologies and extensive supervised training.
A key advantage of the iText2KG approach is its flexibility, which stems from the use of a user-defined blueprint that outlines the key components to extract during KG construction. This allows the method to adapt to a wide range of scenarios, as there is no universal blueprint for all use cases; instead, the design varies depending on the specific application. Moreover, the method achieves document-type independence by using this flexible, user-defined blueprint to guide the extraction process, allowing it to handle both structured and unstructured texts.
Empirical evaluations across diverse contexts, such as scientific documents, web content, and CVs, demonstrated the superior performance of the approach compared to established baseline methods. The method achieves enhanced schema consistency and high precision in entity and relation extraction, effectively mitigating issues related to semantic duplication and unresolved entities, which are prevalent in traditional methodologies.
Future research will focus on enhancing metrics such as cosine similarity for advanced entity and relation matching, eliminating the necessity to define a threshold as a hyperparameter, and integrating the entity type as a parameter of the matching process.
Photonic beam-combiner for visible interferometry with SCExAO/FIRST: laboratory characterization and design optimization
Manon Lallement, Elsa Huby, Sylvestre Lacour, Guillermo Martin, Kevin Barjot, Guy Perrin, Daniel Rouan, Vincent Lapeyrere, Sebastien Vievard, Olivier Guyon, Julien Lozi, Vincent Deo, Takayuki Kotani, Cecil Pham, Cedric Cassagnettes, Adrien Billat, Nick Cvetojevic, Franck Marchis
September 5, 2024
=============================================================
§ ABSTRACT
Integrated optics are used to achieve astronomical interferometry inside robust and compact materials, improving the instrument's stability and sensitivity. In order to perform differential phase measurements at the Hα line (656.3 nm) with the 600-800 nm spectro-interferometer FIRST, a photonic integrated circuit (PIC) is being developed in collaboration with TEEM Photonics. This PIC performs the interferometric combination of the beams coming from sub-apertures selected in the telescope pupil, thus implementing the pupil remapping technique to restore the diffraction limit of the telescope. In this work, we report on the latest developments carried out within the FIRST project to produce a high performance visible PIC. The PICs are manufactured by TEEM Photonics, using their technology based on K_+:Na_+ ion exchange in glass. The first part of the study consists in the experimental characterization of the fundamental properties of the waveguides, in order to build an accurate model, which is the basis for the design of more complex functions. In the second part, theoretical designs and their optimization for three types of combiner architectures are presented: symmetric directional coupler, asymmetric directional couplers and ABCD cells including achromatic phase shifters.
*Manon Lallement, [email protected]
§ INTRODUCTION
Interferometric techniques in astronomy consist in recovering the source spatial intensity distribution with an angular resolution as fine as λ/2B, with B the baseline length. Implemented on a monolithic telescope, sparse aperture masking (SAM) is a technique providing a spatial resolution down to λ/2D with λ/D the telescope diffraction limit. This technique consists in applying a non-redundant mask with holes at the pupil plane defining several sub-apertures. At the telescope focal plane, the image no longer corresponds to the point spread function of the telescope's pupil but to the superimposition of fringe patterns created by each pair of interfering sub-aperture beams. The non-redundant configuration of the mask means that each baseline, i.e each pair of sub-apertures separated by a baseline vector B⃗, produces a unique fringe pattern. The information carried by each baseline can be retrieved independently. This is well illustrated in the Fourier domain, in which each baseline information, i.e phase and contrast of the associated fringes, is carried by a single peak isolated from the others. All this ensures that there is no blurring effect between the fringes in the presence of residual atmospheric turbulence or phase perturbations due to the optical bench, and that one can recover information at the diffraction limit of the telescope, and even below. The SAM technique limitations are (1) the reduced collecting area and thus the sensitivity limit and (2) the speckle noise which remains at the scale of one sub-aperture, limiting the contrast of the fringes.
To address these limitations, the pupil remapping technique<cit.> theoretically gives access to the whole pupil: instead of using a sparse aperture mask, a micro-lens array samples the whole pupil in several sub-apertures and injects their light in single-mode fibers spatially filtering the wavefront, thus removing the speckle noise. Sub-aperture pairs are recombined non redundantly or pairwise so that the information carried by each baseline can be retrieved independently. In addition, the interferograms can be spectrally dispersed. The spatial intensity distribution of the source is thus recovered with an angular resolution of λ/2D with an increased accuracy compared with SAM and with a better (u,v) plane and spectral coverage.
FIRST, which stands for Fibered Imager FoR a Single Telescope, was built at the Observatoire de Paris to validate the concept of pupil remapping coupled with spectroscopy. From 2010 to 2013, the instrument was used on the 3m-Shane Telescope at the Lick Observatory<cit.>. In 2013, FIRST was installed at the 8.2 m Subaru Telescope <cit.> on the Subaru Coronagraphic Extreme Adaptive Optics platform (SCExAO)<cit.>, enhancing the ultimate angular resolution of the instrument, with λ/D = 16.5 mas at 656.3 nm.
SCExAO delivers a Strehl ratio of 50% to 60% in the visible at 750 nm. FIRST is thus leveraging the wavefront stability provided by SCExAO, making long exposure up to 1 second possible, without loosing the fringe contrast.
Two versions of the FIRST instrument are currently on the SCExAO's bench: FIRST version 1 (FIRSTv1) and FIRST version 2 (FIRSTv2) depending on the interferometric combination method. Table <ref> shows their respective features. Close binary stars were detected and spectrally characterized with FIRSTv1 using closure phase measurements <cit.>. Currently, our efforts are focusing on pushing the detection limits of the instrument, with the ultimate goal of detecting exoplanetary systems. For development purposes, a replica of FIRSTv2 has been built in the laboratory of Observatoire de Paris<cit.>, where the PIC prototypes are characterized prior to further validation on the sky.
Young gas giant exoplanets are particularly interesting for FIRSTv2. Studies based on the populations of exoplanets detected by radial velocities<cit.> show that the distribution of gas giants is expected to peak for systems with a separation of 1-3 au, which corresponds to an angular separation of 7-28 mas at 140 pc, the distance to the Taurus Molecular Cloud. This region cannot be probed by classical imagers on 8m-telescopes, but the angular resolution offered by the interferometric technique places it within reach of FIRST.
Moreover, in the visible, a dynamic range of 10^6 to 10^9 is required to differentiate between the light from an exoplanet and its host star<cit.>. This is currently out of reach for interferometric techniques like FIRST. However, at the protoplanetary stage, gas giants are less than 4 Myr old and are still accreting matter from their surrounding disk, inducing a strong emission in the hydrogen line at 656.3 nm (Hα)<cit.>. As a consequence, the contrast between the planet and the star at this particular line is lowered down to 10^2 - 10^3, making them easier to detect in the visible. Three protoplanet detections have been confirmed using Hα imaging. This is the case for protoplanets PDS70b and PDS70c detected using the MUSE integral field spectrograph<cit.> as well as AB Aurigae b, detected at various wavelengths by several instruments and in particular, at the Hα line with VAMPIRES installed on SCExAO<cit.>.
With a spectro-interferometer, Hα differential phase measurement is the equivalent of Hα imaging. It consists in comparing the complex visibility phase at the Hα line (where the protoplanet is brighter) to the complex visibility phase in the continuum (where it is too faint to be detected). This technique has recently been implemented using the high precision phase measurements with the GRAVITY instrument to detect the broad line region around a quasar<cit.>. The FIRST instrument can perform this measurement at high angular resolution in the visible, targeting the Hα line. For that purpose, its spectral resolution, sensitivity and dynamic range are currently being enhanced as specified in Table <ref>.
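To make the measurement principle explicit, the sketch below shows one way of forming a differential phase signal from the spectrally dispersed complex visibilities of a single baseline. The array shapes, the linear continuum model and the numerical values are illustrative assumptions, not the actual FIRST data reduction.

```python
import numpy as np

def differential_phase(vis, wl_nm, line=656.3, half_width=0.2):
    """Phase of the complex visibilities referenced to the continuum.
    vis:   complex visibility per spectral channel (one baseline).
    wl_nm: wavelength of each channel in nm."""
    wl_nm = np.asarray(wl_nm)
    phase = np.unwrap(np.angle(vis))
    in_line = np.abs(wl_nm - line) < half_width
    # Continuum model: a straight line in wavelength fitted outside Halpha,
    # absorbing any achromatic piston and residual slope.
    slope, offset = np.polyfit(wl_nm[~in_line], phase[~in_line], deg=1)
    continuum = slope * wl_nm + offset
    return phase - continuum, in_line

# Toy example: flat continuum phase plus a 0.1 rad signal in the Halpha channel(s).
wl = np.linspace(600.0, 800.0, 700)
signal = 0.1 * (np.abs(wl - 656.3) < 0.2)
vis = np.exp(1j * (0.3 + signal))
dphi, in_line = differential_phase(vis, wl)
print(round(dphi[in_line].mean(), 3))   # ~0.1 rad
```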
In the FIRST instrument, the telescope's pupil is divided into sub-apertures thanks to a micro-lens array which couples the light from the sub-apertures into Polarization Maintaining (PM) Single-Mode Fibers (SMF).
Optical delay lines are used to compensate for the fiber length difference and reach the null Optical Path Difference (OPD). In FIRSTv1, SMFs are used to remap the input pupil sub-apertures into a non-redundant configuration in the output pupil<cit.>. Thanks to this multiplexing, each pair of sub-apertures produces a fringe pattern with a unique spatial frequency. The interferometric combination of the beams is achieved in free-space following a Young slit-like experiment. FIRSTv2 is currently under development on a testbed at the Observatoire de Paris. In this upgraded version of the instrument, SMF inject the sub-apertures light into a Photonic Integrated Circuit (PIC) where the beams are recombined pairwise.
The output signal is dispersed with a spectrograph with R ∼ 400 in FIRSTv1 and R ∼ 3600 in FIRSTv2.
Fig. <ref> presents FIRSTv2 optical system and the sample of data acquired in the laboratory: 5 sub-apertures are recombined accounting for 10 baselines. In the configuration presented in Fig. <ref>, each baseline is encoded in four horizontal spectra: the interferometric combinations are performed in the PIC by 2x2 directional couplers, i.e there are two outputs per baseline. The light from both PIC outputs is vertically split by a Wollaston prism to avoid fringe blurring induced by the birefringence in the setup, and horizontally dispersed on the camera thanks to a Volume Holographic Grating (VPH). It covers the 600-800 nm spectral band with a resolution of R∼ 3600 at 670 nm. To further sample the fringes, a segmented mirror is used to temporally modulate the phase of the input beams, by applying piston commands to the segments corresponding to each sub-aperture.
The PIC device is the heart of the upgraded version of the instrument FIRSTv2, that will enhance the stability, accuracy and sensitivity compared to FIRSTv1. The design and characterization of a high performance visible PIC is a critical step in these developments, and we report in this paper on the latest developments carried out in collaboration with TEEM Photonics to optimize the building blocks required to produce a complete device. Specifications for the PIC component suited for FIRSTv2 are listed in Section <ref>, and the manufacturing process developed by TEEM Photonics based on K_+:Na_+ ion exchange technology (ioNext) is described. In Section <ref>, the characterization of TEEM Photonics' standard straight waveguides in terms of single-mode spectral range and mode field profile are presented. These measurements are critical to define a model of the waveguide 3D diffused index profile thanks to the BeamPROP modeling software, which is the working basis for the design of more complex functions. In Section <ref>, we present the theoretical models and design optimization of three types of combiners investigated so far: 1) directional couplers are the simplest design and were characterized in laboratory, showing a weak chromatic behavior, 2) asymmetric directional couplers can be tuned to make the coupler even less chromatic, 3) ABCD cells are more complex and comprise splitters, phase shifters and combiners, but are more convenient as they provide additional measurements of the interference state, thus avoiding the need for temporal modulation of the phase. The latest PIC prototypes and test chips including single combiner functions for characterization and validation are also presented. In Section <ref>, we conclude and present the next steps.
§ FIRSTV2 VISIBLE PHOTONICS INTEGRATED CIRCUIT FOR HIGH PERFORMANCE BEAM COMBINATION
The FIRSTv2's high throughput visible beam combiner is being developed and characterized<cit.> to enhance the stability and accuracy of the measurements of interferometric observables. Compared to bulk optics beam combination, photonic integrated circuit combination is more robust, less sensitive to thermal variations, mechanical constraints and alignment errors. This is critical to meet FIRSTv2 dynamic range specification presented in Table. <ref> as phase measurement accuracy directly limits the achievable dynamic range. In FIRSTv1, the free-space interferometric combination leads to spatially sampled fringes<cit.>. In FIRSTv2, the fringes are temporally modulated by applying piston commands to a segmented mirror located between the pupil and the micro-lens array. The signal of interest is thus condensed into a few pixels, instead of a few hundreds, reducing the read-out noise and enhancing the phase measurement SNR. Our efforts currently focus on performing the combination of 5 input beams, but for future FIRST upgrades, PIC devices can relatively easily be scaled to a higher number of sub-aperture pairs by densifying the design or by duplicating the PIC devices.
§.§ Architecture of the FIRSTv2 photonic beam combiner
FIRSTv2 interferometric combination scheme consists in combining the light of 5 sub-apertures pair by pair, as presented in Fig. <ref>. This design is called a 5-telescope combiner, or 5T-combiner, following the same naming convention as PICs developed for long baseline interferometers combining the light of different telescopes.
Each of the 5 inputs is split in four thanks to two cascaded Y junctions and is recombined with each of the other four inputs thanks to combiners. As presented in Fig. <ref>, combiners can be Y junctions (10 outputs in total, one per baseline), directional couplers (20 outputs) or ABCD cells (40 outputs). Fig. <ref> presents an overview of these combiners. In order to properly sample the fringes around the null OPD when using directional couplers and Y junctions, a phase modulation of the input beams is required. This phase modulation is achieved by adding piston to the corresponding segments of the segmented mirror following a 20-step sequence running at 20 Hz.
The ABCD cell provides four measurements of the interference state between the input beams, with four distinct phase differences, allowing the fringe reconstruction with a single image. The use of ABCD cells thus increase the observing time efficiency, however they are more complex to design, as they require splitters, directional couplers and achromatic phase shifters.
§.§ Specifications for the FIRSTv2 5T-combiner
As explained in Sec. <ref>, the FIRSTv2 science case is the detection of accreting protoplanets by differential phase measurement at the Hα line. It consists in comparing the complex visibility phase at the Hα line (at 656.3 nm, where the protoplanet is brighter) to the complex visibility phase in the continuum (from 630 to 650 nm and from 660 to 780 nm, where it is not detected). Therefore, we defined PIC specifications depending on the wavelength, as shown in Table. <ref>, with stronger constraints around the Hα line, where the signal of interest is expected. Indeed, the photon transmission in this spectral channel is critical, while the continuum signal is evaluated on ∼ 700 channels when using the whole spectral range. The specifications concern several aspects:
* overall transmission: FIRSTv2 PIC overall transmission must be greater than 50% in the 630 to 780 nm spectral band and greater than 75% around the Hα line.
* insertion or coupling loss: it specifies the amount of energy lost because of the mismatch between the fundamental modes of the PM-630HP fiber, used for the injection, and the PIC waveguide.
* Polarization Extinction Ratio (PER): it characterizes the capability of the PIC building blocks to keep light linearly polarized and propagating along a principal axis. In that case, the PER should be close to 100%. After injecting linearly polarized light along the principal axis, the Polarization Extinction Ratio (PER) corresponds to the ratio between the output power of the light linearly polarized along the principal axis and the output power of the light orthogonally polarized.
* cross-talk: it defines the amount of unwanted light that gets transferred from one waveguide to another. This specification mainly applies to waveguides crossing each other and does not apply to splitters nor combiners.
Specifications of the FIRSTv2 5T-combiner building blocks, i.e splitters and combiners, are presented in Table <ref>. The internal losses are specified in order to keep the overall transmission above the specified level. Transfer rate specifications for splitters and combiners are meant to evenly distribute the light in order to maximize fringe contrast over the spectral band. Each of the 5 inputs is split in four thanks to two cascaded Y junctions. The Y junction transfer rate should be 50± 5%, meaning that each of the Y junction outputs contains 50± 5% of the input flux. These 5 inputs are recombined with one another thanks to either the Y junction, directional coupler or ABCD cell combiners. For an ABCD cell, the 25±5% specification means that each of the four ABCD cell outputs contains close to 25% of the input flux.
§.§ TEEM Photonics K_+:Na_+ ion exchange technology (ioNext)
The FIRSTv2 PIC is developed in collaboration with TEEM Photonics[<https://www.teemphotonics.com/>], a company based in Grenoble, France. TEEM Photonics ioNext technology consists in K_+:Na_+ ion exchange in a glass substrate. A lithographic mask is used to control the regions where the ion exchange is performed, thus creating gradient-index waveguides featuring a precisely controlled mode field profile and effective index<cit.>. At a final stage, a glue layer and a counter-blade are deposited on top of the glass substrate. They allow for: 1) a larger front surface where V-grooves are bonded; 2) protection of the waveguide, which would otherwise be directly on the surface and thus sensitive to scratches and dust; and 3) symmetrization of the waveguide mode in order to better match the modes of the injecting and collecting optical fibers.
The SM-630 fiber insertion loss is specified by TEEM Photonics to be 0.13 dB over the single-mode spectral band for a 2 μm wide waveguide. Straight waveguide propagation loss is about 0.25 dB/cm at 780 nm.
§.§ The first 5T-beam combiners prototypes
Prior to the present work, two 5T PIC prototypes fabricated with TEEM Photonics ioNext technology were characterized on the FIRSTv2 testbed at the Observatoire de Paris <cit.>. These PICs perform the interferometric recombination of 5 sub-apertures beams with two different types of combiners: one is based on Y junction combiners and is called 5TY, while the other one is based on symmetric directional couplers or X couplers, further detailed in Sec. <ref>, and is called 5TX.
As illustrated in Fig. <ref>, a Y junction consists of two single-mode waveguides which merge into one output single-mode waveguide. If the fields propagating in the input waveguides are in phase (resp. in phase opposition), they are coupled into the fundamental mode (resp. into radiative modes) of the output waveguide, meaning that half of the interferometric signal is lost in radiative modes in a Y junction. For that reason, Y junctions were not further investigated and are not part of the present study.
The 5T prototypes were characterized in terms of throughput and cross-talk, as defined in Sec. <ref>. Light was injected in one of the five input waveguides and the leakage in the other waveguides was measured. Cross-talk has a mean value of about 1% in both PIC prototypes but can reach, for some inputs, 10% for the 5TX PIC and 20% for the 5TY PIC. The throughput was estimated at about 15% (5TY) to 30% (5TX)<cit.>. This low throughput is mainly due to non-optimized combiners and bend curvature radii. Further developments are thus needed to reach the specifications presented in Sec. <ref>, motivating the work presented in the following sections.
The methodology adopted to develop the PICs is indeed an iterative process. A numerical model of the waveguides manufactured with the ioNext technology is required to feed the BeamPROP modeling software. Combiners are designed and optimized based on the numerical simulations of their performance. Once manufactured, these combiners are characterized in the laboratory to refine the numerical model and tune the design parameters for the next manufacturing run.
§ FUNDAMENTAL PARAMETER ESTIMATION FOR WAVEGUIDE MODELING
In order to model the waveguides built with the ioNext technology and design the FIRST PIC's combiners, an estimation of the fundamental parameters of standard, i.e straight, waveguides is needed. In particular, the fabrication process, i.e K_+:Na_+ ion exchange in glass, creates a diffused index profile in the glass substrate which needs to be recovered. This section presents the laboratory measurements of the single-mode spectral range and mode field diameter of ioNext standard straight waveguides, which are used to define the 3D diffused index profile of the waveguide in the BeamPROP software.
§.§ Single-mode spectral range laboratory measurement
Figure <ref> presents the single-mode spectral range measurement performed by TEEM Photonics engineers. The throughput P=10·log(P_out/P_in) is measured for two straight waveguide test samples G1 and G2. The white light source is fibered and the measurement is performed with a fibered spectrometer. The input power P_in is thus estimated by directly connecting the source fiber to the spectrometer fiber. The output power P_out is measured by inserting the photonic integrated circuit straight waveguide between these two fibers.
For wavelengths greater than 820 nm, the input light is not guided by the waveguide. When the wavelength decreases and as soon as the input light gets guided through the fundamental mode (m=0) of the waveguide, P_out increases explaining the peak at the cutoff wavelength around 820 nm. In the single-mode range, the coupling efficiency between the fiber and the waveguide fundamental modes decreases with the wavelength.
An increasing amount of the input light is coupled into the waveguide second mode (m=1) and immediately lost because of the low confinement of this second mode. As soon as the wavelength gets small enough, the energy coupled in the second mode becomes confined and is transmitted through the waveguide: a second peak in the output power appears at this second cutoff wavelength of about 530 nm. The measured single-mode spectral range extends between these two cut-off wavelengths, approximately between 530 to 820 nm for 2 μm wide waveguides.
It can be noted that the throughput value should be lower than 0 dB because this passive straight waveguide should only insert losses. This could be explained by the measurement noise or by losses induced by the fiber-to-fiber alignment, performed inside a standard connector, while the fiber-to-waveguide alignment is precisely tweaked, potentially leading to an underestimation of the input power measurement. Also, no information is given regarding the input polarization and as the waveguides are known to be birefringent, a different single-mode spectral range is to be expected between p- and s-polarized light.
§.§ Mode field diameter laboratory measurement
TEEM Photonics waveguide mode field diameter (MFD) measurements were performed for p- and s-polarized light injection. In this particular experiment, the p-polarized (resp. s-polarized) electric field lies in a plane parallel (resp. orthogonal) to the plane of the PIC. The Thorlabs PM-630-HP reference fiber and TEEM Photonics waveguide outputs are imaged with a x40 objective on a Thorlabs CCD. The MFD are calibrated for p- and s-polarized light injection thanks to a reference polarization maintaining fiber with a known 4.5 ± 0.5 μm MFD. The MFD measurements for a horizontal and vertical cross-section of TEEM Photonics waveguide mode are presented in Fig. <ref> and Table <ref>.
MFD measurements reveal the mode asymmetry induced by the fabrication process: the ion exchange is performed at the glass substrate surface meaning that the waveguides are not buried into the glass, inducing a form birefringence in spite of the glue layer and the counter-blade being used for symmetrization. Currently, the precision of the mode field diameter measurement is not sufficient to define a polarization-dependent model of the ioNext technology that could be taken into account in the BeamPROP simulation to refine the design. As a consequence, a non-polarized mode field diameter is defined by taking the mean values of polarized mode field dimensions. Effective indexes and throughput in both polarizations will be further investigated thanks to the characterization of new PICs, see Sec. <ref>.
§.§ Modeling of TEEM Photonics waveguide diffused index profile with the BeamPROP software
Based on the single-mode spectral range and non-polarized mode field diameter measurements, a 3D diffused index profile of the TEEM Photonics waveguide is derived with the BeamPROP modeling software[<https://www.synopsys.com/photonic-solutions/rsoft-photonic-device-tools/passive-device-beamprop.html>]. The 3D diffused index profile parameters are presented in Fig. <ref>.
The glue layer is about 20 μm thick in the y>0 region and is considered as semi-infinite. Its refractive index is n_c=1.49 at 635 nm. The width of the lithographic mask used for the diffusion process is w. The diffusion length in the horizontal (resp. vertical) direction is h_x (resp. h_y). The maximum refractive index difference between the glass substrate and the waveguide core produced by the diffusion process is Δ n.
The waveguide diffused index profile cross-section n(x,y) in the glass substrate (y<0) is defined by:
n(x,y<0)=n_0+[Δ n· g(x)· f(y)]^γ,
with n_0=1.52 at 635 nm the glass substrate (y<0) refractive index, and where the functions g(x) and f(y) describe the region where the diffusion takes place:
g(x)=1/2{erf[(w/2-x)/h_x]+erf[(w/2+x)/h_x]},
f(y)=exp(-y^2/h_y^2).
For the sake of simplicity in this preliminary model: 1) the diffusion process is assumed to be equivalent in both x and y directions, i.e. h_x=h_y=h ; 2) the relation between the ion concentration and the index n(x,y) is supposed to be linear, i.e γ=1 in Eq. <ref>. In practice, the diffusion length h and maximum refractive index difference Δ n parameters are optimized thanks to laboratory measurements as reported in Sec. <ref> and Sec. <ref>.
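For completeness, the sketch below is a direct numerical transcription of the index profile model above. The parameter values (w, h, Δn, γ) are illustrative placeholders, not the fitted ioNext parameters; the error function comes from SciPy.

```python
import numpy as np
from scipy.special import erf

def index_profile(x_um, y_um, w=2.0, h=1.5, dn=0.01, n0=1.52, nc=1.49, gamma=1.0):
    """Refractive index map n(x, y) of the diffused waveguide (coordinates in microns).
    w, h, dn and gamma are illustrative values; the actual parameters are fitted
    from the single-mode range and mode field diameter measurements."""
    x, y = np.meshgrid(np.asarray(x_um), np.asarray(y_um))
    g = 0.5 * (erf((w / 2 - x) / h) + erf((w / 2 + x) / h))
    f = np.exp(-(y / h) ** 2)
    n_substrate = n0 + (dn * g * f) ** gamma
    return np.where(y < 0, n_substrate, nc)   # glue layer of index nc for y >= 0

x = np.linspace(-5.0, 5.0, 201)
y = np.linspace(-5.0, 2.0, 141)
n = index_profile(x, y)
print(round(n.max(), 4), n.min())   # peak index just below the surface, glue elsewhere
```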
§ DESIGN AND OPTIMIZATION OF THREE TYPES OF COMBINERS
Different types of combiners are considered in this section: symmetric and asymmetric directional couplers, and ABCD cells (composed of Y junction, directional couplers and achromatic phase shifters). Laboratory characterization of symmetric directional couplers and design of asymmetric directional couplers and ABCD cells are presented. Directional couplers, also called X directional couplers according to their shape, recombine the light from two input waveguides and lead to two outputs, corresponding to the coupling of the two waveguides with a phase difference of 0 and π. ABCD cells involve a more complex design, including splitters, phase shifters and directional couplers. They provide four outputs which correspond to the coupling of the two input waveguides with a phase difference of 0, π/2, π and 3π/2, see Fig.<ref>.
§.§ Symmetric directional couplers
§.§.§ Theoretical model
The directional coupler standard geometry is presented in Fig. <ref>: two waveguides come very close to one another, inducing a coupling between the two by evanescent waves in the so-called interaction zone. For simulation and characterization purposes, the waveguide called "Through" (resp. "Cross") is the waveguide in which the light is injected (resp. not injected). The interaction zone is defined by its length L and the gap distance g between the waveguides. The directional coupler is called symmetric if both waveguides in the interaction zone have the same geometrical properties.
The normalized output powers P_through and P_cross of a lossless directional coupler<cit.> are given by:
P_through = 1-P_cross = 1-(κ^2/Δ^2)· sin^2(Δ· L)
with κ the mode coupling coefficient and Δ defined as:
Δ=√(((β_through-β_cross)/2)^2+κ^2)
with β_through and β_cross the propagation constants of the directional coupler waveguides. In Equation <ref>, F=κ^2/Δ^2 corresponds to the maximum power that can be transferred from one waveguide to the other. For a symmetric directional coupler, as both waveguides in the interaction zone have the same geometrical properties, propagation constants are equal (β_through = β_cross). Therefore F is equal to 1, meaning that all the power in one waveguide can be transferred into the other one, if the interaction zone length is properly chosen. The shortest length of the interaction zone for which the power is entirely transferred from one waveguide to the other is called L_0:100. Equation <ref> shows that the transferred power P_cross is periodic with a period proportional to 2L_0:100, with L_0:100=π/(2Δ).
The directional coupler transfer rate A:B (with A+B=1) is defined as the proportion of the output flux in each output :
P_through/(P_through+P_cross) : P_cross/(P_through+P_cross)
The specification for the PIC directional coupler is to transmit 50 ± 10% of the light in each output over the 600 to 800 nm spectral band, i.e to have an achromatic transfer rate of 50:50. In order to get a 50:50 transfer rate, the shortest interaction zone length L_50:50 is L_0:100/2. Other solutions L_50:50,k for the interaction length are derived from Eq. <ref>:
L_50:50,k=(1/Δ)·(π/4+kπ/2) = L_0:100·(1/2+k) with k∈ℕ
As the interaction zone length L_0:100 depends on the wavelength, the transfer rate of a symmetric coupler is necessarily chromatic<cit.>. However, a directional coupler can be designed with asymmetric waveguides, leading to β_through≠β_cross. The asymmetry can be tuned in order to compensate for the transfer rate chromaticity, as explained in Sect. <ref>.
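The relations above can be checked numerically with the short sketch below; the coupling coefficient values are arbitrary placeholders, and the wavelength dependence of κ is only mimicked to illustrate why a fixed-length symmetric coupler drifts away from 50:50.

```python
import numpy as np

def coupler_outputs(L, kappa, dbeta=0.0):
    """Normalized outputs of a lossless directional coupler of interaction length L.
    kappa: mode coupling coefficient, dbeta: beta_through - beta_cross (same units as 1/L)."""
    delta = np.sqrt((dbeta / 2.0) ** 2 + kappa ** 2)
    p_cross = (kappa / delta) ** 2 * np.sin(delta * L) ** 2
    return 1.0 - p_cross, p_cross

kappa = 0.02                       # rad/um, illustrative value only
L_full = np.pi / (2 * kappa)       # L_0:100 -> full transfer
L_half = L_full / 2                # shortest 50:50 interaction length
print(coupler_outputs(L_full, kappa))   # ~ (0, 1)
print(coupler_outputs(L_half, kappa))   # ~ (0.5, 0.5)

# If kappa varies with wavelength, a fixed L_half no longer gives exactly 50:50,
# which is the chromaticity discussed in the text.
for kappa_wl in (0.018, 0.020, 0.022):
    print(round(coupler_outputs(L_half, kappa_wl)[1], 3))   # 0.422, 0.5, 0.578
```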
§.§.§ Laboratory characterization
Three symmetric directional couplers were manufactured by TEEM Photonics using the ioNext technology presented in Sec. <ref>.
The symmetric directional couplers all have a gap g = 2 μm and three different interaction zone lengths: L=50 μm, 100 μm or 150 μm. Measurements are performed using a P2-830A fiber connected to a white SLED source to inject non-polarized light into the PIC's symmetric directional couplers. A x4 objective is used so that the light from the PIC output feeds an OceanOptics spectrometer. A linear polarizer, located between the objective and the spectrometer, selects the p- or s-polarized light. In this particular case, p-polarized (resp. s-polarized) electric field lies in a plane parallel (resp. orthogonal) to the plane of the PIC.
Transfer rate measurements, as defined by Eq. <ref>, are presented in Fig. <ref>, for both p- and s-polarized light. The objective is twofold: 1) find the interaction zone length L_50:50 to get a 50:50 transfer rate and 2) investigate the wavelength dependence of the transfer rate over the 600 to 750 nm spectral band.
The results presented in Fig. <ref> confirm that the coupling coefficient depends on polarization, i.e waveguides are birefringent. Birefringence is not an issue for our application as long as there is no cross-talk between p- and s-polarized light (i.e no polarization cross-talk) that would reduce the fringe contrast. In the present experiment, the input light is not polarized, such that polarization cross-talk cannot be assessed, but it will be the focus of a future study. Despite the polarization dependency, our experimental measurements show that a compromise can be found for the interaction zone length L_50:50 which is situated between 20 and 40 μm for both polarizations.
Furthermore, it is noticeable that these symmetric directional couplers are not highly chromatic, in particular for the p-polarization. Indeed, the relative standard deviations are smaller than 5.5% over the 600-700 nm range, as shown in Table <ref>.
§.§ Asymmetric directional couplers
§.§.§ Asymmetric geometries
This section presents two asymmetric directional couplers geometries defined as uniformly asymmetric (UA) and non-symmetric (NS)<cit.> as shown in Fig. <ref>.
A uniformly asymmetric directional coupler is composed of one waveguide wider than the other by an amount dw in the interaction zone. For the non-symmetric directional coupler, one waveguide is wider than the other in the interaction zone and is additionally tapered along the interaction zone, meaning that its width varies continuously from a starting value w to a final value w+dw.
§.§.§ Optimization based on BeamPROP simulations
The parameter space (dw,L) is probed to define the optimal set of parameters for each geometry. The optimal solutions found for uniformly asymmetric and non-symmetric directional couplers are detailed in Fig. <ref>. For both types of asymmetric directional couplers, the spectral transfer rates are computed and plotted in Fig. <ref>. For each coupler geometry, the transfer rate is evaluated when injecting in waveguide 1 (“Through” is “waveguide 1” in this case) or waveguide 2 (“Through” is “waveguide 2” in this case). Our simulation results show that the behavior of the coupler weakly depends on the input waveguide.
A study has also been carried out for tolerancing purposes, by computing the spectral transfer rate for various differential widths and interaction zone lengths around the nominal parameters doublet (dw_n,L_n).
The simulation results are presented in Fig. <ref>, showing that the differential width accuracy is critical, while a change of a few tenths of microns of the interaction zone length has no significant impact.
§.§ ABCD cell combiners
Y junctions and directional couplers are the simplest combiners that can be designed. However, they provide only one or two measurement points per interfering pair, which is not sufficient to determine the fringe phase and amplitude. In this case, an additional temporal phase modulation is required to reach a minimum of four fringe samples. In the FIRST instrument, a segmented mirror located before the PIC is used to vary the relative piston between the beams, pair by pair. For a large number of baselines, the data acquisition procedure thus becomes long and complex, and sensitive to phase perturbations occurring from one frame to the other.
A way to increase the data acquisition efficiency is to use another type of combiner called an ABCD cell as presented in the following section.
§.§.§ Theoretical model
The ABCD beam combination is an interferometric scheme<cit.> giving a four point interferometric fringe sampling at its output. Each of the four outputs is the combination of the two inputs with a different phase as represented in Fig. <ref>.
ABCD cell input 1 and 2, i.e interfering beams, are split in two and are recombined by pairs thanks to two directional couplers. The phases of the four outputs are specified as follows:
A: φ_12^A
B: φ_12^B=φ_12^A+π
C: φ_12^C=φ_12^A+π/2
D: φ_12^D=φ_12^A+π/2+π=φ_12^A+3π/2
In order to build an achromatic ABCD cell, a π/2 achromatic passive phase shifter is required. A phase shifter is composed of two parallel waveguides: waveguide 1 and 2. Each waveguide, of the same total length, is divided into N segments of optimized widths and lengths. The width variations induce variations of the mode effective index, and thus a difference in the optical path length between the two arms. The phase difference for a phase shifter of length L composed of a single segment (N=1) is given as follows:
Δφ=φ_1-φ_2=2π/λ(n_eff,1-n_eff,2)L
with n_eff,1=A_1+B_1λ+C_1λ^2 and n_eff,2=A_2+B_2λ+C_2λ^2 the effective indexes of waveguides 1 and 2.
This N=1 segment phase shifter is achromatic if:
∀λ, ∂φ/∂λ=-2π/λ(Δ n/λ-∂Δ n/∂λ)L=0
with Δ n=n_eff,1-n_eff,2 the effective index difference between waveguide 1 and 2. Following P. Labeye's PhD thesis<cit.>, these equations can be generalized for a design composed of N ≥ 1 segments shifting the phase by φ_0=π/2:
φ=2π/λ∑_i=1^NΔ n_i L_i=φ_0
∂φ/∂λ=-2π/λ∑_i=1^N(A_i/λ-C_iλ)L_i=0
with Δ n_i=n_eff,1,i-n_eff,2,i=(A_1,i-A_2,i)+(B_1,i-B_2,i)λ+(C_1,i-C_2,i)λ^2=A_i+B_iλ+C_iλ^2 the effective index difference between waveguide 1 and 2 in segment i. Equation <ref> is true for all wavelengths only if:
∑_i=1^NC_iL_i=0 and ∑_i=1^NA_iL_i=0.
Consequently, one can derive that:
∑_i=1^N(B_iL_i)=φ_0/2π
is also a condition for Eq. <ref> to be valid at all wavelengths. Thus, it is possible to find sets of (A_i, B_i, C_i) parameters that will produce an achromatic phase shift over a given bandwidth.
§.§.§ Design and simulation of an achromatic π/2 passive phase shifter
For a given waveguide width, the coefficients A, B and C are estimated based on simulations performed with the BeamPROP software. Effective indexes are computed for widths between 1.8 and 3 μm with a step of 0.1 μm, as shown in Fig. <ref>. A second order polynomial fit is used to estimate the A, B and C coefficients for each waveguide width value. Two- and three-segment solutions for achromatic φ_0=π/2 phase shifters are investigated. The lithographic fabrication error on waveguide width ranges from 0.2 to 0.5 μm and is considered homogeneous throughout the wafer, the glass substrate on which several PICs are manufactured. Therefore, the width error ϵ_w is similar for the two waveguides of the phase shifter. Fig. <ref> shows that n_eff,w1+ϵ_w-n_eff,w2+ϵ_w = n_eff,w1-n_eff,w2, meaning that the phase shift does not change with the fabrication error if this error is homogeneous throughout the wafer.
The system of equations to be solved is given by Eq. <ref> and Eq. <ref>, which can be rewritten with a matrix formalism:
[ L ]=[ M ]^-1·[ ϕ]
with:
L=[ L_1; L_2; ], M=[ A_1 A_2; B_1 B_2; ] and [ ϕ]=[ 0; φ_0/2π; ] for the two-segment solution, and
L=[ L_1; L_2; L_3; ], M=[ A_1 A_2 A_3; B_1 B_2 B_3; C_1 C_2 C_3; ] and [ ϕ]=[ 0; φ_0/2π; 0; ] for the three-segment solution.
For the N=2 and N=3 segment phase shifters, four and six waveguide width values have to be optimized, respectively. This is done by scanning the parameter grids and computing the segment lengths for all possible width combinations.
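The grid search described above can be sketched as follows. The (A_i, B_i, C_i) coefficients of the effective-index difference would normally come from the polynomial fits of the BeamPROP simulations; the values below are placeholders, and the two-segment system only enforces the conditions ∑A_iL_i = 0 and ∑B_iL_i = φ_0/2π, as in Eq. <ref>.

import numpy as np
from itertools import product

# (A_i, B_i, C_i) coefficients of the effective-index difference
# Delta n_i = A_i + B_i*lambda + C_i*lambda^2 for candidate width pairs.
# Placeholder numbers only -- the real values come from the BeamPROP fits.
segments = {
    "pair_1": (1.2e-3, -0.8e-3, 0.2e-3),
    "pair_2": (-0.9e-3, 1.1e-3, -0.3e-3),
    "pair_3": (0.5e-3, 0.4e-3, -0.1e-3),
}
phi0 = np.pi / 2.0                      # target phase shift

def solve_lengths(pair_names, rhs):
    # Solve [M][L] = [phi] for the segment lengths; keep only positive solutions.
    M = np.array([[segments[p][row] for p in pair_names]
                  for row in range(len(pair_names))])
    try:
        L = np.linalg.solve(M, rhs)
    except np.linalg.LinAlgError:
        return None
    return L if np.all(L > 0) else None

# Two-segment grid search: conditions sum(A_i L_i) = 0 and sum(B_i L_i) = phi0/(2 pi).
best = None
for pair in product(segments, repeat=2):
    if pair[0] == pair[1]:
        continue
    L = solve_lengths(pair, np.array([0.0, phi0 / (2.0 * np.pi)]))
    if L is not None and (best is None or L.sum() < best[1].sum()):
        best = (pair, L)
print("shortest two-segment solution:", best)

The same helper extends to the three-segment case by adding the C row to the matrix and the corresponding zero entry to the right-hand side.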
When constraining the waveguide width between 1.8 and 2.3 μm, i.e. the single-mode range for wavelengths from 600 to 820 nm, the optimal parameter sets lead to achromatic φ_0=π/2 phase shifters about a dozen millimeters long. However, they can be made shorter if multi-mode waveguides are used, i.e. widths larger than 2.3 μm. Indeed, with a larger width, the effective index increases and the light is slowed down more efficiently. In this case, optimal parameter sets can be found for shorter total lengths: 1.8 mm for the two-segment design, and 2.7 mm for the three-segment design.
Mode coupling between segments is performed with tapers. Each taper present in one waveguide must be included in the other one as well, as it would otherwise induce an additional phase difference<cit.>. Simulations are run for the three following cases: without tapers, and with 10 μm and 100 μm long tapers. Results are reported in Fig. <ref>, highlighting the need for tapers and showing that an achromatic phase shift is achieved over the 600-800 nm bandwidth within an accuracy of 2 deg at best. The three-segment solution offers a better phase shift performance at the cost of a longer total length, as described in Table <ref>.
§.§ Latest prototypes and test functions
A test wafer has been manufactured in order to characterize and validate the models and the designs presented in this paper, in particular regarding the asymmetric directional couplers and the ABCD cells. The laboratory characterization of the different components that have been included in this wafer will be the subject of a future communication.
The complete wafer layout is presented in Fig. <ref>. It comprises different test PICs, labelled from 1 to 8 in the figure. There are test PICs intended for characterizing the individual building blocks, as well as two complete 5T-combiners:
* PIC 1 is composed of straight and curved waveguides to assess some fundamental properties of the waveguides. The widths of the straight waveguides range from 1.8 to 3 μm with a step of 0.2 μm in order to characterize the spectral range where the single-mode behavior is achieved. Various S bends with curvature radii ranging from 10 to 40 mm will help determine the minimum acceptable curvature radius based on loss measurements. This PIC also contains straight waveguides with 10 crossings at various angles of 5, 10, 20, 30, and 45 degrees to evaluate excess losses due to waveguide crossing. PIC 8 contains additional straight waveguides with 10 or 20 crossings at angles ranging from 3 to 15 degrees.
* PIC 2 contains asymmetric directional couplers as presented in Sec. <ref>, with various sets of parameters for the interaction zone geometry, i.e. the differential width dw between the coupling waveguides and the interaction zone length L. Characterization will mainly consist of spectral and polarized transfer rate measurements. Because some fundamental values of the TEEM Photonics technology are currently not well known, especially concerning birefringence, this PIC also contains Mach-Zehnder interferometers for spectral effective index measurements. The effective indexes in both polarizations will allow us to refine the BeamPROP model and include both polarizations in simulations.
* PIC 4 contains 8 ABCD cells with various phase shifters.
* PIC 6 contains Mach-Zehnder interferometers with phase shifters presented in Sec. <ref>.
* PIC 7 contains Y junctions with different junction zone geometries.
* PIC 3 is a complete 5T interferometric combination PIC based on ABCD cells, while PIC 5 is a 5T PIC based on uniformly asymmetric directional couplers. Their design has been presented in Fig. <ref>. They are meant to be installed on the FIRST instrument at the Subaru telescope, provided that their performance in terms of throughput and polarization is high enough.
§ CONCLUSION
Photonics integrated circuits constitute promising devices to perform beam manipulation requiring functions such as beam splitters, phase shifters or combiners. They thus offer stable and robust solutions for interferometric instruments, for long baseline interferometers, as well as pupil remapper instruments. While the technology has been well developed at the telecom wavelength range in the infrared, high performance visible PICs are still difficult to achieve. Within the SCExAO/FIRST project, a visible spectro-interferometer performing pupil remapping at the Subaru telescope, we are working with TEEM Photonics to produce a 5T beam combiner with high enough performance in terms of throughput and chromaticity, that would be suited for on-sky observations.
The development process is iterative and started with a first step to characterize the fundamental properties of the waveguides manufactured with the ioNext technology operated by TEEM Photonics. Based on the effective refractive index profile model of TEEM Photonics' standard waveguide validated by laboratory measurements, we have designed and optimized symmetric and asymmetric directional couplers, as well as achromatic phase shifters intended for ABCD cells, working in the 600-800 nm wavelength range. A wafer containing various test PIC designs has recently been manufactured. The next step will be to characterize these PICs in the laboratory, to further evaluate losses due to propagation, bends or crossings, and to identify polarization dependent behaviors. These results will be the subject of a future paper.
§ ACKNOWLEDGEMENTS
A part of this work was previously published as a SPIE proceeding<cit.> for the SPIE Astronomical Telescopes and Instrumentation 2022 event in Montréal, Québec, Canada. This project is supported by the French National Research Agency (ANR-21-CE31-0005) and the doctoral school Astronomy & Astrophysics of Ile de France (ED 127). Authors also acknowledge the funding by ASHRA (Action Spécifique Haute Resolution Angulaire) from INSU-CNRS and thank TEEM Photonics for their support and trust.
Manon Lallement (She/Her) is a PhD student in instrumentation for astronomy at the Observatoire de Paris. Her PhD is supervised by Elsa Huby and Sylvestre Lacour and is funded by the doctoral school Astronomy & Astrophysics of Ile de France (ED 127). She received her BS and MS degrees in photonics, theoretical and applied optics from the French Institut d'Optique Graduate School in 2019 and 2021, respectively.
Her current research interests include visible photonics for astronomical interferometry, 3D printed micro-lenses array for injection into single-mode fibers and interferometric data analysis. She is a member of SPIE.
https://spie.org/profile/Manon.Lallement-4374172
Biographies of the other authors are not available.
|
http://arxiv.org/abs/2409.03688v1 | 20240905164325 | Infrared spectroscopy study of kagome material CsTi$_3$Bi$_5$ | [
"Liye Cao",
"Xiangqi Liu",
"Jiayi Cheng",
"Bixia Gao",
"Xiaoting Zhang",
"Yanfeng Guo",
"Fengjie Ma",
"Rongyan Chen"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.mtrl-sci"
] |
Center for Advanced Quantum Studies and School of Physics and Astronomy, Beijing Normal University, Beijing 100875, China
Key Laboratory of Multiscale Spin Physics (Ministry of Education), Beijing Normal University, Beijing 100875, China
School of Physical Science and Technology, ShanghaiTech University, Shanghai 201210, China
Center for Advanced Quantum Studies and School of Physics and Astronomy, Beijing Normal University, Beijing 100875, China
Key Laboratory of Multiscale Spin Physics (Ministry of Education), Beijing Normal University, Beijing 100875, China
Center for Advanced Quantum Studies and School of Physics and Astronomy, Beijing Normal University, Beijing 100875, China
Key Laboratory of Multiscale Spin Physics (Ministry of Education), Beijing Normal University, Beijing 100875, China
Center for Advanced Quantum Studies and School of Physics and Astronomy, Beijing Normal University, Beijing 100875, China
Key Laboratory of Multiscale Spin Physics (Ministry of Education), Beijing Normal University, Beijing 100875, China
School of Physical Science and Technology, ShanghaiTech University, Shanghai 201210, China
ShanghaiTech Laboratory for Topological Physics, Shanghai 201210, China
[email protected]
Center for Advanced Quantum Studies and School of Physics and Astronomy, Beijing Normal University, Beijing 100875, China
Key Laboratory of Multiscale Spin Physics (Ministry of Education), Beijing Normal University, Beijing 100875, China
[email protected]
Center for Advanced Quantum Studies and School of Physics and Astronomy, Beijing Normal University, Beijing 100875, China
Key Laboratory of Multiscale Spin Physics (Ministry of Education), Beijing Normal University, Beijing 100875, China
§ ABSTRACT
The kagome material CsTi_3Bi_5, which is isostructural to the extensively studied charge density wave (CDW) compound CsV_3Sb_5, exhibits intriguing electronic features within its two-dimensional kagome lattices of titanium atoms. Here, we perform optical spectroscopic measurements together with the first-principles calculations on single-crystalline CsTi_3Bi_5 to investigate its electronic properties comprehensively. It is found that the overall optical spectra are very similar to those of CsV_3Sb_5, but the existence of CDW instability is ruled out in CsTi_3Bi_5. Via careful comparison to the optical responses of CsV_3Sb_5, we attribute this difference to a significant reduction in the itinerant carrier density of CsTi_3Bi_5, which is associated with the absence of van Hove singularity near the Fermi level at M point. This result supports the scenario that the CDW in CsV_3Sb_5 is driven by the nesting of van Hove singularity. Additionally, we unveil some exotic low-lying absorption features, which provide clear evidence of flat bands in CsTi_3Bi_5. Our findings contribute to a deeper understanding of exotic phenomena in CsTi_3Bi_5 and provide valuable insights into the role of van Hove singularity in CsV_3Sb_5.
Infrared spectroscopy study of kagome material CsTi_3Bi_5
Rongyan Chen
Accepted XXX. Received YYY; in original form ZZZ
=========================================================
§ INTRODUCTION
Recently, the kagome material AV_3Sb_5 (A = K, Rb, Cs) has attracted extensive attention in condensed matter physics. There are various exotic physical phenomena observed in the AV_3Sb_5 system, such as unconventional superconductivity (SC) <cit.>, charge-density-wave (CDW) order <cit.>, pair density wave <cit.>, electronic nematicity <cit.>, non-trivial topological surface states <cit.>, anomalous Hall effect in the absence of magnetic order <cit.>, and so on. Such rich quantum phenomena are rooted in the unique electronic structure of AV_3Sb_5, which contains the general features of kagome lattice materials, such as flat bands <cit.>, Dirac cones with symmetry-protected Dirac points <cit.>, and saddle points <cit.>. This system has provided an intriguing research platform for studying the interplay between SC, CDW order, and nontrivial topological electronic states.
Despite intensive investigations focused on AV_3Sb_5, there are still a number of intractable problems yet to be solved, particularly regarding the underlying mechanism of the CDW order in CsV_3Sb_5. In this compound, some studies reveal the formation of a 2 × 2 × 2 superlattice accompanying the CDW transition <cit.>, while others report the presence of a 2 × 2 × 4 superlattice <cit.>. Moreover, an anomalous Hall effect linked to time-reversal symmetry-breaking <cit.> and electronic nematicity characterized by rotation symmetry-breaking <cit.> are observed in the CDW state, hinting at a possible influence of the CDW order on symmetry breaking and nontrivial topology <cit.>. Fermi surface nesting is believed to play a crucial role in the development of the CDW, as the van Hove singularities (vHSs) are located very close to the Fermi level (E_F) and could be perfectly connected by the CDW wave vector q <cit.>. However, the expected divergence of the electronic susceptibility corresponding to this Fermi surface nesting is not observed in theoretical calculation results <cit.>. At the same time, the effect of electron-phonon coupling is suggested to significantly contribute to the lattice distortion and thereby the CDW instability <cit.>. Consequently, whether the origin of the CDW stems from electron instabilities facilitated by vHSs <cit.> or from lattice structure instability <cit.> is not clear yet.
Investigating materials featuring lattice structures analogous to CsV_3Sb_5 is beneficial to revealing the mysteries in this compound. Specifically, CsTi_3Bi_5 is a newly discovered kagome material that crystallizes in a layered structure with a space group P6/mmm, which is isostructural to CsV_3Sb_5 <cit.>. As revealed by experiments and density functional theory (DFT) calculations <cit.>, the electronic structure of CsTi_3Bi_5 is highly similar to that of CsV_3Sb_5, except that the vHSs of CsTi_3Bi_5 are situated at about 0.15 eV and 0.75 eV above E_F <cit.>, instead of being close to E_F as in CsV_3Sb_5 <cit.>. Meanwhile, neither magnetic susceptibility, resistivity, nor heat capacity measurements exhibit any discernible anomalies associated with CDW phase transitions in CsTi_3Bi_5 <cit.>, in contrast to the results of CsV_3Sb_5 <cit.>. This observation is further confirmed by subsequent studies, such as scanning tunneling microscopy (STM) and low-temperature x-ray diffraction measurements <cit.>. It is proposed that the absence of CDW order is related to the vHSs being pushed far above the E_F in CsTi_3Bi_5 <cit.>.
Furthermore, a superconducting transition at ∼ 4 K is reported in CsTi_3Bi_5 <cit.>, although there are some controversies concerning its credibility <cit.>. Electronic nematicity associated with orbital ordering is unveiled by some experiments compared with DFT calculations <cit.>, which is analogous to CsV_3Sb_5 <cit.>. Further investigations are required to thoroughly understand the underlying physics of these exotic phenomena. Optical spectroscopy is a powerful technique for probing the charge dynamics and detecting possible gap-opening phase transitions. Here, we perform infrared spectroscopy measurements on single-crystalline CsTi_3Bi_5 from 300 K to 10 K, which reveal a very similar optical response to that of CsV_3Sb_5. The most prominent difference is that the broader Drude component of CsTi_3Bi_5 is greatly suppressed compared to CsV_3Sb_5, which is intimately related to the development of the CDW in the latter. Additionally, we also observe some peculiar low-lying absorption characteristics at around 3100 cm^-1, which have no counterpart in CsV_3Sb_5. Combined with our DFT-based first-principles calculations, these characteristics are attributed to interband transitions associated with flat bands about 0.35 eV below the E_F.
§ METHODS
Single crystals of CsTi_3Bi_5 were grown using the flux method. The precursor CsBi was prepared by reacting Cs (purity 99.75%) and Bi granules (purity 99.99%) at 300 ^∘C. Starting materials of CsBi, Bi and Ti powder (purity 99.99%) were loaded inside a Canfield crucible with a molar ratio of 3: 9: 1, which were then sealed in a quartz tube. The assembly was heated up to 1100 ^∘C in a furnace within 12 h and kept at this temperature for 10 h. It was subsequently slowly cooled down to 800 ^∘C at a temperature decreasing rate of 50 ^∘C/h and kept for 5 h, and then further slowly cooled down to 400 ^∘C at 2 ^∘C/h. Finally, the assembly was taken out from the furnace and decanted with a centrifuge to separate the single crystals from the flux.
Infrared spectroscopy studies are performed on a Fourier transform infrared spectrometer (Bruker Vertex 80v) in the frequency range from 60 to 23 000 cm^-1. A single crystal with a size of 5.0 × 6.0 × 0.8 mm^3 is used. The surface of this sample is newly cleaved in order to remove the oxide layer. An in situ gold and aluminium overcoating technique is used to get the reflectivity R(ω). The real part of the optical conductivity σ_1(ω) is obtained by the Kramers-Kronig transformation of R(ω). The Hagen-Rubens relation is used for the low-frequency extrapolation of R(ω). We employ the x-ray atomic scattering functions in the high-energy side extrapolation <cit.>.
In our DFT calculations, the plane-wave basis based method and Quantum-ESPRESSO software package were used <cit.>. We adopted the generalized gradient approximation (GGA) of Perdew-Burke-Ernzerhof formula for the exchange-correlation potentials <cit.>. Spin-orbit coupling is included in the simulations by using fully relativistic optimized norm-conserving Vanderbilt pseudopotentials from the Pseudo-Dojo library with a kinetic energy cutoff of 85 Ry <cit.>. A mesh of 12×12×9 k-points grid was used for sampling the Brillouin zone, and a denser uniform k-points grid of 20×20×20 was adopted for the non-self-consistent calculations of optical properties. During the simulations, all structural geometries were fully optimized to achieve the minimum energy.
§ RESULTS AND DISCUSSION
The reflectivity spectra R(ω) of CsTi_3Bi_5 at different temperatures are shown in the main panel of Fig. <ref>(a), while an expanded range up to 23 000 cm^-1 is plotted in its inset. Across the entire measured temperature range, R(ω) increases as frequency decreases and approaches unity at zero frequency, and the low-frequency reflectivity increases upon cooling, indicating a metallic nature of CsTi_3Bi_5, which is consistent with previous reports <cit.>.
Furthermore, the overall profile of the R(ω) spectra is very similar to that of CsV_3Sb_5 at low temperatures in the CDW state, which exhibits a pronounced dip structure at about 1500 cm^-1 along with a hump at around 6500 cm^-1 <cit.>. Here for CsTi_3Bi_5, the dip and hump features are detected as well, as marked by black arrows in Fig. <ref>(a). However, the dip structure is much weaker and broader, and it shifts to about 3000 cm^-1. Moreover, several even weaker mini-dips could be observed on top of the broad dip at 10 K and 100 K, which are absent in CsV_3Sb_5.
It is worth noting that the dip of CsV_3Sb_5 only appears below T_CDW, and is thus intimately related to the CDW phase transition <cit.>. Therefore, it is tempting to explore whether the resembling dip feature observed in CsTi_3Bi_5 is related to any kind of CDW condensate. In addition, the origin of the mini-dips is intriguing as well.
With temperature decreasing, the dip structure of CsTi_3Bi_5 seems to become more pronounced, as expected for a CDW-related absorption. However, it is difficult to tell if there is an onset temperature where the absorption begins to show up. To get a better perspective, we further resort to the real part of the optical conductivity σ_1(ω) of CsTi_3Bi_5, as shown in Fig. <ref>(b). It can be seen that the low-frequency σ_1(ω) consistently exhibits clear Drude components throughout the measured temperatures, i.e. low-energy peaks centered at zero energy, which reaffirms the metallic nature of CsTi_3Bi_5 <cit.>. The width at half maximum of the Drude peak represents the scattering rate of free carriers γ, which reduces monotonically with temperature decreasing, as can be seen in Fig. <ref>(b). In correlation with the dip and hump structures presented in R(ω), several Lorentz-type peaks show up in the optical conductivity of CsTi_3Bi_5. The most interesting one is located at around 3100 cm^-1 with a series of mini-peaks superposed on top, which corresponds to the dip structure denoted in Fig. <ref>(a). For comparison, there is also a peak in the optical conductivity of CsV_3Sb_5 associated with the dip feature of its reflectivity, which is identified to stem from the opening of a CDW gap. This similarity again puts forward the possibility of a CDW phase in CsTi_3Bi_5.
In infrared spectroscopy experiments, CDW orders manifest themselves in two complementary ways. The first one is the emergence of a Lorentz-type peak below T_CDW, whose central position shifts to higher energy and whose spectral weight (SW) piles up upon cooling. The second one is the transfer of SW from the itinerant Drude component to the emerging Lorentz peak, due to the loss of density of states accompanying the opening of the CDW gap. To prove the existence of a CDW transition, it is essential to provide evidence of these two manifestations. Here, it can be clearly seen from Fig. <ref>(b) that the 3100 cm^-1 peak persists all the way up to room temperature, which suggests either no CDW transition or a CDW transition with T_CDW higher than room temperature. As for the SW redistribution, we have plotted the SW as a function of frequency in Fig. <ref>. The SW generally increases with temperature increasing in a wide energy range, and the curves finally merge together roughly above 10 000 cm^-1. To further illustrate its relative change with temperature, we plot the SW(T) renormalized by SW(300 K) in the inset of Fig. <ref>. If there were a CDW-induced redistribution, the SW below T_CDW would be smaller than that above T_CDW at frequencies lower than the gap energy, which usually manifests as a dip structure in the renormalized SW <cit.>. However, no such features are observed in the inset of Fig. <ref>, which means the absorption characteristics around 3100 cm^-1 are not related to a CDW gap.
In order to further reveal the underlying mechanism of this exotic absorption, it is important to track down the quantitative variation of the charge dynamics. Therefore, we decompose the optical conductivity according to the Drude-Lorentz model,
σ_1(ω) = ∑_iω_Pi^2/4πγ_Di/ω^2+γ_Di^2 + ∑_jS_j^2/4πγ_jω^2/(ω_j^2-ω^2)^2+ω^2 γ_j^2.
It is noteworthy that this equation is written in Gaussian units for the sake of convenience. The first term represents the Drude components, which describe the charge dynamics of intraband carriers. The second term represents the Lorentz components, which are used to describe excitations across energy gaps or interband transitions. Here, ω_Pi=√(4π n e^2/m^*) and γ_Di = 1/τ_Di are the plasma frequency and the scattering rate of free carriers of the ith Drude component, where n is the carrier density and m^* is the effective mass. ω_j, γ_j = 1/τ_j, and S_j are the resonance frequency, width, and resonance strength of the jth Lorentz component, respectively. To minimize fitting errors, we use the complex fitting method to fit the real parts of the optical conductivity using a nonlinear least-squares technique. A subset of the fitting parameters, with error bars, at all measured temperatures is listed in Table <ref>. Representative examples of the fitting results of σ_1(ω) at 10 K and 300 K are plotted in Fig. <ref>. Two Drude components with distinct scattering rates are required for the fitting, which indicates that the conduction electrons come from different energy bands or Fermi surfaces. Meanwhile, seven Lorentz terms are employed, although the latter five are almost temperature-independent and thus not listed. It is worth remarking that the mini-peaks around 3000 cm^-1 are not taken into account in our fitting procedure. The peak positions of these mini-peaks are identified to be at 2362, 2972, 3630, and 4627 cm^-1 at 10 K, as marked by orange arrows in Fig. <ref>(b).
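For reference, a minimal Python sketch of this Drude-Lorentz decomposition is given below, fitting the real part of the optical conductivity with two Drude terms and Lorentz oscillators via nonlinear least squares (scipy). Frequencies are in cm^-1, the prefactors follow the Gaussian-unit expression of Eq. <ref>, and all parameter values are illustrative rather than the fitted values reported in Table <ref>.

import numpy as np
from scipy.optimize import least_squares

def drude(w, wp, gamma):
    # Real part of a Drude term (Gaussian units, frequencies in cm^-1).
    return (wp**2 / (4.0 * np.pi)) * gamma / (w**2 + gamma**2)

def lorentz(w, S, w0, gamma):
    # Real part of a Lorentz oscillator.
    return (S**2 / (4.0 * np.pi)) * gamma * w**2 / ((w0**2 - w**2)**2 + (w * gamma)**2)

def sigma1_model(w, params):
    # params = [wp1, g1, wp2, g2, S1, w01, gL1, ...]: two Drude + N Lorentz terms.
    wp1, g1, wp2, g2 = params[:4]
    out = drude(w, wp1, g1) + drude(w, wp2, g2)
    for S, w0, gL in np.reshape(params[4:], (-1, 3)):
        out += lorentz(w, S, w0, gL)
    return out

def residuals(params, w, sigma1_exp):
    return sigma1_model(w, params) - sigma1_exp

# Illustrative usage with synthetic "data" (not the measured spectra).
w = np.linspace(60, 23000, 2000)
true = [9000, 200, 18000, 1500, 20000, 3100, 2500]
sigma1_exp = sigma1_model(w, true) * (1 + 0.01 * np.random.randn(w.size))
fit = least_squares(residuals, x0=[8000, 300, 15000, 2000, 15000, 3000, 3000],
                    args=(w, sigma1_exp))
print("fitted parameters:", fit.x)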
It is instructive to compare the charge dynamics of free carriers in CsTi_3Bi_5 with that in CsV_3Sb_5, where two Drude components with significantly different scattering rates are used as well <cit.>. We found that the scattering rates of CsTi_3Bi_5 generally decrease with temperature decreasing, whereas the plasma frequencies are roughly temperature independent. These characteristics are the same as those of CsV_3Sb_5 in the normal phase above T_CDW. Furthermore, the plasma frequency of the narrow Drude term (ω_P1) is roughly comparable between the two compounds, whereas ω_P2 of CsTi_3Bi_5 is much lower than that of CsV_3Sb_5 <cit.>, indicating a much smaller carrier density n or a much larger effective mass m^*. It is worth noting that for CsV_3Sb_5 the narrow Drude term is ascribed to the light-electron-like and multiple Dirac bands at Γ and K points, whereas the broad one is attributed to the heavy hole bands near M point. For comparison, we have calculated the electronic structure of CsTi_3Bi_5 by DFT. The band structure with spin-orbit coupling along the high-symmetry k-paths is presented in Fig. <ref>(a). Similar to CsV_3Sb_5, there are also very light electron-like bands crossing the E_F around Γ point, and a Dirac-type conduction band near K point. These bands usually contribute to Drude components with relatively small scattering rates <cit.>, and are thus highly likely to be responsible for the narrow Drude term (Drude1). In contrast, the band structure of CsTi_3Bi_5 at M point is quite different from that of CsV_3Sb_5. The vHSs are pushed up to 0.15 eV and 0.75 eV above E_F instead of lying near E_F, as indicated by the pink arrows in Fig. <ref>(a). As a result, the density of states near E_F is expected to be much lower than that of CsV_3Sb_5. Assuming that Drude2 of CsTi_3Bi_5 is also related to the bands near M point, this agrees perfectly with the fact that the ω_P2 of CsTi_3Bi_5 is much lower than that of CsV_3Sb_5.
To discuss the possible contribution of effective mass m^* to the suppression of ω_P2, it is useful to estimate the electron correlation strength, which could be reflected by the ratio of experimental kinetic energy K_exp to theoretical kinetic energy K_band <cit.>. In simple metals with itinerant electrons of band mass, the agreement between the experimental and theoretical results would lead to K_exp/K_band ≃ 1. However, in correlated materials with quasiparticles
of enhanced effective mass, the strong Coulomb interaction would drastically suppress the experimental kinetic energy, and thus generate a much smaller K_exp/K_band.
The kinetic energy can be calculated by
K = 2ħ^2c_0/π e^2∫_0^ω_cσ_1(ω)dω,
where c_0 is the c-axis lattice parameter, and ω_c is the cutoff frequency covering the entire Drude component, at which σ_1(ω) reaches a minimum value. The theoretical σ_1(ω) calculated by the DFT method using the random phase approximation is depicted by the red solid line in Fig. <ref>(b). Here, ω_c = 1700 cm^-1 is used to estimate both K_exp and K_band, yielding a value of K_exp/K_band = 0.74. This value is smaller than that of CsV_3Sb_5, which is about 0.81 <cit.>. Therefore, a stronger correlation is expected in CsTi_3Bi_5, which is consistent with the results of STM experiments <cit.>. Nevertheless, this slight increase of correlation strength can hardly explain the substantially suppressed ω_P2 of CsTi_3Bi_5, which is therefore predominantly ascribed to the significant reduction of carrier density, or the shift of vHSs at M point.
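A schematic Python implementation of this kinetic-energy estimate is shown below; the cutoff, the unit conversions, and the input arrays are assumptions made for illustration, and since the same prefactor enters both K_exp and K_band, the ratio is insensitive to the overall constants.

import numpy as np
from scipy.integrate import trapezoid
from scipy.constants import hbar, e, c

def kinetic_energy(omega_cm, sigma1, c0_m, omega_c_cm=1700.0):
    # K = (2 hbar^2 c_0 / (pi e^2)) * integral_0^omega_c sigma_1(omega) d omega
    # omega_cm : frequencies in cm^-1, sigma1 : conductivity in Ohm^-1 cm^-1,
    # c0_m     : c-axis lattice parameter in meters.
    # The SI conversions below are schematic assumptions.
    mask = omega_cm <= omega_c_cm
    omega_si = 2.0 * np.pi * c * 100.0 * omega_cm[mask]   # cm^-1 -> rad/s
    sigma_si = 100.0 * sigma1[mask]                       # Ohm^-1 cm^-1 -> S/m
    return 2.0 * hbar**2 * c0_m / (np.pi * e**2) * trapezoid(sigma_si, omega_si)

# The correlation strength is then the ratio computed over the same cutoff:
# ratio = kinetic_energy(w, sigma1_exp, c0) / kinetic_energy(w, sigma1_dft, c0)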
It seems that the optical responses of CsTi_3Bi_5 and CsV_3Sb_5 are highly similar to each other, but this only holds in the normal state. Upon entering the CDW state, the broad Drude component of CsV_3Sb_5 is significantly reduced, and the lost SW is transferred towards higher frequencies due to the formation of a CDW-related gap. In contrast, the SW of all the Drude and Lorentz terms of CsTi_3Bi_5 is stable against temperature variation, incompatible with a CDW phase transition. In particular, the suspicious peak located at around 3100 cm^-1 (Lorentz1) is actually robust up to 300 K, and its SW reduces only very slightly with warming, as listed in Table <ref>. It only looks obscure at high temperatures in the σ_1(ω) spectra because it overlaps with the broadening Drude peak. Furthermore, the scattering rate of the broad Drude component in CsV_3Sb_5 shows an abrupt reduction associated with the CDW transition at T_CDW <cit.>, which is absent in CsTi_3Bi_5. Therefore, we believe it is safe to conclude that there is no CDW-related physics in CsTi_3Bi_5. Nonetheless, a detailed comparison between CsTi_3Bi_5 and CsV_3Sb_5 sheds light on the origin of the CDW in CsV_3Sb_5. We found that the most prominent difference, from the spectroscopic point of view, lies in the SW of the broad Drude components, which are linked to the conduction bands near M point. In other words, the itinerant carriers of CsTi_3Bi_5 are substantially reduced compared to those of CsV_3Sb_5, due to the absence of vHSs near the E_F. Therefore, our results strongly suggest that the CDW phase transition in CsV_3Sb_5 originates from the vHSs near the E_F.
Finally, we would like to discuss the possible origin of the Lorentz1 peak centered at around 3100 cm^-1 (∼ 0.38 eV). Note that this peak does not have a counterpart in CsV_3Sb_5, and there is a series of mini-peaks superposed on it. In CsV_3Sb_5, the first Lorentz peak in the normal state is located at around 0.7 eV, which is argued to involve the electronic states near the saddle points at M point. Since the saddle points of CsTi_3Bi_5 are significantly shifted upwards, they are unlikely to be responsible for the Lorentz1 peak at 0.38 eV. Instead, an obvious flat band is identified across the entire Brillouin zone in CsTi_3Bi_5 from our DFT results, which is in good agreement with ARPES measurements and previous calculations <cit.>. In particular, the flat band is located at around 0.35 eV below the E_F, which leads to a pronounced peak in the density of states at this energy, as indicated by a black arrow in the inset of Fig. <ref>(b). Moreover, it closely matches the peak position of Lorentz1 (∼ 0.38 eV). Therefore, we propose that Lorentz1 corresponds to interband transitions from the flat band to the unoccupied conduction band, as indicated by blue arrows in Fig. <ref>(a). Additionally, slight dispersion of the flat band could lead to additional tiny peaks on top of the main peak, as is also reported in some kagome materials, like Fe_3Sn_2 <cit.>. Hence, the mini-peaks on top of Lorentz1 provide clear evidence of the existence of flat bands in CsTi_3Bi_5, which have not been observed in CsV_3Sb_5 yet.
§ CONCLUSION
In conclusion, we have investigated the electronic properties of the kagome material CsTi_3Bi_5 through optical spectroscopic measurements and band structure calculations. Our findings reveal a metallic response in CsTi_3Bi_5 across the temperature range from 300 K to 10 K, consistent with its transport behaviors. Despite the striking similarity between the optical responses of CsTi_3Bi_5 and CsV_3Sb_5, there are no signatures of a charge-density-wave gap in CsTi_3Bi_5. Moreover, the spectral weight of the broad Drude component of CsTi_3Bi_5 is significantly suppressed compared to CsV_3Sb_5, which we attribute to the absence of van Hove singularities near the Fermi level in CsTi_3Bi_5. This result supports the claim that charge-density-wave formation in CsV_3Sb_5 is linked to the van Hove singularities at M point. Furthermore, the comparison between the experimental and theoretical optical conductivity of CsTi_3Bi_5 points to an electronic correlation slightly stronger than that of CsV_3Sb_5. In addition, we have identified low-lying absorption features linked to interband transitions that involve flat bands in CsTi_3Bi_5. Our work provides valuable insights into the underlying physics of the exotic properties of CsTi_3Bi_5 and highlights the importance of vHSs in the development of the charge-density-wave order in CsV_3Sb_5.
ACKNOWLEDGMENTS
R. Y. Chen and F. J. Ma acknowledges the National Key Projects for Research and Development of China (Grant No. 2021YFA1400400), the National Natural Science Foundation of China (Grant Nos. 12074042 and 12074040), the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 11704033), and the Fundamental Research Funds for the Central Universities (Grant No. 2243300003). Y. F. Guo acknowledges the National Key R&D Program of China (Grant No. 2023YFA1406100) and the Double First-Class Initiative Fund of ShanghaiTech University. This work was supported by the Synergetic Extreme Condition User Facility (SECUF). F. J. Ma was also supported by the BNU Tang Scholar.
|
http://arxiv.org/abs/2409.02337v1 | 20240903235233 | Coaching a Robotic Sonographer: Learning Robotic Ultrasound with Sparse Expert's Feedback | [
"Deepak Raina",
"Mythra V. Balakuntala",
"Byung Wook Kim",
"Juan Wachs",
"Richard Voyles"
] | cs.RO | [
"cs.RO",
"cs.AI",
"cs.CV"
] |
Coaching a Robotic Sonographer: Learning Robotic Ultrasound with Sparse Expert's Feedback
Deepak Raina^*, Mythra V. Balakuntala^*, Byung Wook Kim, Juan Wachs Senior Member, IEEE,
Richard Voyles Fellow, IEEE
This work was supported by National Science Foundation (NSF) USA under Grant #2140612 and Purdue University seed grants for West Lafayette-Indianapolis campuses collaboration. (Corresponding author: Deepak Raina)
Deepak Raina is with the Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218 USA (e-mail: [email protected])
Mythra V. Balakuntala is with Nikon Research Corporation of America, Belmont, CA 94002 USA (e-mail: [email protected])
Byung Wook Kim is with the Department of Computer Engineering, Purdue University, West Lafayette, IN 47907 USA (e-mail: [email protected])
Juan Wachs is with the School of Industrial Technology, Purdue University, West Lafayette, IN 47907 USA (e-mail: [email protected]).
Richard Voyles is with the School of Engineering Technology, Purdue University, West Lafayette, IN 47907 USA (e-mail: [email protected])
*DR and MVB primarily conducted this research at Purdue University
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Ultrasound is widely employed for clinical intervention and diagnosis, due to its advantages of offering non-invasive, radiation-free, and real-time imaging. However, the accessibility of this dexterous procedure is limited due to the substantial training and expertise required of operators. The robotic ultrasound (RUS) offers a viable solution to address this limitation; nonetheless, achieving human-level proficiency remains challenging. Learning from demonstrations (LfD) methods have been explored in RUS, which learns the policy prior from a dataset of offline demonstrations to encode the mental model of the expert sonographer. However, active engagement of experts, i.e. Coaching, during the training of RUS has not been explored thus far. Coaching is known for enhancing efficiency and performance in human training. This paper proposes a coaching framework for RUS to amplify its performance. The framework combines DRL (self-supervised practice) with sparse expert's feedback through coaching. The DRL employs an off-policy Soft Actor-Critic (SAC) network, with a reward based on image quality rating. The coaching by experts is modeled as a Partially Observable Markov Decision Process (POMDP), which updates the policy parameters based on the correction by the expert. The validation study on phantoms showed that coaching increases the learning rate by 25% and the number of high-quality image acquisition by 74.5%.
Robotic ultrasound, Deep reinforcement learning, Coaching, Learning from expert's feedback
§ INTRODUCTION
Medical ultrasound imaging is one of the most widely used imaging modalities for diagnostic and interventional procedures. Its widespread adoption can be attributed to its affordability, portability, non-invasiveness, absence of ionizing radiation, and real-time feedback. These merits make it particularly suitable for general use in low-income and developing regions. Unfortunately, the increasing demand for medical ultrasound, coupled with the substantial training required to become a proficient sonographer, and the significant impact of sonographer experience on diagnostic accuracy, contribute to a persistent gap in accessibility to this fundamental and valuable diagnostic tool <cit.>.
Robotic Ultrasound (RUS) holds great promise to overcome these drawbacks. Further, RUS broadens the pool of skilled practitioners, enhances the safety of healthcare workers during pandemics, and improves the accessibility of ultrasound in rural areas where trained human practitioners are scarce <cit.>. A robot, once trained to an expert level, not only performs the procedure but also opens the possibility to train novice human operators. This dual functionality can significantly alleviate the training burden and offer a sustainable solution to the persistent gap in the availability of trained sonographers.
Thus, transferring this skillset from an expert human operator to a robot is a key research question <cit.>. Novice sonographers learn this procedure by observing experts, followed by self-practice under the supervision of an expert, who may interrupt and provide corrections as needed. It brings into the picture the critical element of their training: coaching <cit.>. It is defined as an active engagement of human experts in a trainee's learning process.
Coaching by an expert guides novice humans to learn more rapidly and to achieve high levels of performance <cit.>. Surprisingly, there is a dearth of coaching research in RUS. Although, it has started gaining prominence in robot learning literature <cit.>. For RUS, Deep reinforcement learning (DRL)
has been explored to mimic the training process of humans under reward-based self-supervision <cit.>. However, these methods suffer from longer training times, local minima issues, and limited adaptability to unforeseen scenarios. Later, several works leveraged the knowledge of experts in the form of demonstrations and pre-trained a model offline <cit.>. These approaches are often termed learning from demonstration (LfD). Notably, none of these methods considered real-time engagement of experts during training. The aim of this paper is to develop a coaching framework for RUS to amplify its performance. There is a lack of formalism for coaching in robot learning. In this paper, we treat coaching as zero-shot learning because the RUS (learner) receives unlabeled corrective actions from the expert. These actions will correct the learnt policy and transform it towards an optimal policy.
§.§ Related work
§.§.§ Robotic ultrasound systems
Earlier RUS used visual servoing to automate the ultrasound procedure <cit.>, but these methods lack human-level anatomical knowledge, a crucial factor for sonographers in relating probe motions to the complex anatomy. To supplement this knowledge, later methods used high-end imaging modalities like MRI, CT, or 3D data <cit.> to identify the anatomical landmarks and pre-plan the probe trajectory. However, the sparse availability of these modalities in underserved regions did not solve the underlying issue. Recent works have explored the use of DRL architectures to mimic the self-supervised practice of human trainees <cit.>, yet the clinical applicability of these systems was questioned due to longer training times.
§.§.§ Experts' engagement in learning of RUS
Researchers explored the engagement of ultrasound experts through LfD approaches to reduce training times.
Lonas et al. <cit.> used the Gaussian Mixture Model (GMM) and Gaussian Mixture Regression (GMR) to learn a probe motion model from offline demonstrations. However, the model did not include the probe forces and ultrasound image information, which limited its clinical applicability. Li et al. <cit.> used the dataset curated from experts' demonstrations to sample probe poses during model-free RL.
This approach has two demerits. First, a very large dataset would be required to handle human anatomical variability. Second, sampling from a large dataset would be computationally expensive, hence impractical for RUS. Raina et al. <cit.> proposed to model the Gaussian process (GP) prior and kernel from offline expert demonstrations. Later, this pre-trained GP was used in the Bayesian optimization framework for the robotic acquisition of optimal images.
Jiang et al. <cit.> and Burke et al. <cit.> inferred the reward for optimizing the RUS policy from scanning demonstrations, which assumed that the images shown in the later stage are more important than the earlier images. It is important to note that the above-cited works proposed to learn offline using a dataset collected from expert demonstrations, which often require numerous optimal demonstrations. Additionally, these approaches are goal-driven, while the proposed coaching framework allows updating the policy objectives and parameters through local corrections to the robot's trajectory.
§.§.§ Coaching robots
Early coaching-based methods incorporated diverse human feedback within reinforcement learning methods <cit.>. MacGlashan et al. <cit.> proposed COACH (Convergent Actor-Critic by Humans), which extended RL with online reward/penalty from experts. However, COACH converged only to local minima and relied on sparse binary evaluations using goal examples.
As such, this method demands substantial sampling, greater learning time, and a large feedback volume. Later, preference-based learning techniques were introduced into DRL, which involved presenting action samples (trajectories) to experts for grading and later enhancing behavior based on their preferences <cit.>.
While preferences aid improvement, the challenge remained in estimating trajectories to elicit preference-based feedback, resulting in inefficient use of expert's input. Expert feedback is not only evaluation but also provides useful information on direct trajectory modifications <cit.>. Therefore, the robot should leverage these corrections to optimize actions and update rewards.
Recent strategies in physical human-robot interaction (pHRI) considered corrections as informative rather than disturbance <cit.>. Online learning from correction aimed to refine objectives and generate the trajectory based on comparisons between corrected and original trajectories <cit.>. Similar techniques have been employed in shared autonomy in manipulation <cit.>.
However, the challenge lies in the robot understanding the context of interactive feedback and updating its objectives and policy accordingly. A formalism for coaching that combines policy updates and learning new objectives is lacking in the robotics literature. In order to address these challenges, we extend Partially Observable Markov Decision Process (POMDP) formalizations from pHRI and shared autonomy to coaching the DRL policy. Moreover, we demonstrate its applicability to previously unexplored and highly expert-dependent modality of medical ultrasound imaging.
§ METHODOLOGY
The pipeline of the methodology is outlined in Algorithm 1, which combines self-supervised practice through DRL with coaching through sparse expert feedback. The robot learns a DRL policy to perform ultrasound under self-supervision. During learning, the expert provides online feedback through kinesthetic corrections, which will update the policy objective and parameters towards optimal policy.
§.§ Self-supervised practice through DRL
This section describes the self-supervised practice through a DRL policy, which is learnt using a reward formulation based on ultrasound image quality. The robot is modeled over a finite bounded horizon as a Markov decision process ℳ, with state space 𝒮 and action space 𝒜. For a horizon T, the state transitions according to the dynamics 𝒯:𝒮×𝒜⟶𝒮. A policy π_θ (a|s) represents the probability of taking action a given a state s, with parameters θ. The cumulative expected reward over horizon T is given by eq. (<ref>).
J(π) = 𝔼_π[∑_t=1^Tϕ(s_t,a_t)]
We used an off-policy algorithm, Soft Actor-Critic (SAC) <cit.> to learn the policy. This is due to the better sample efficiency and generation of stable policies by SAC, which is suitable for practical
applications such as manipulation <cit.>. This algorithm maximizes the cumulative reward and entropy to learn the policy.
§.§.§ State space
The state 𝒮 is defined based on the ultrasound image. We have adopted an image quality classification network from our previous work <cit.>, which used ResNet50 as a base network with multi-scale and higher-order processing of the image for conducting the holistic assessment of the image quality. The block diagram of this network is shown in Fig. <ref>.
This classifier first extracts features at multiple scales to encode the inter-patient anatomical variations. Then, it uses second-order pooling (SoP) in the intermediate layers (local) and at the end of the network (global) to exploit the second-order
statistical dependency of features. The local-to-global SoP will capture the higher-order relationships between different spatial locations and provide the seed for correlating local patches.
This network encodes the image into a feature vector of size 2048, which represents the state of the policy.
§.§.§ Action space
The action space 𝒜 for the SAC policy is a combined position, orientation, and forces of the probe. Specifically, the robot controls the position of the probe in the xy-plane; orientation along roll, pitch and yaw; and force along the z-axis (normal to the surface).
§.§.§ Rewards
The reward is based on the ultrasound image quality estimated using the same network <cit.> that represents the state. The extracted image features are passed through a linear classifier layer to generate a feature vector of size 5. Finally, the index of the maximum value of this feature vector gives an integer quality rating between 1 and 5. In addition, a quality rating of 0 is assigned when the measured force value of the probe along the z-axis is below the minimum required for appropriate contact. The reward is then defined as
r_u = η (q==q_max) + q/q_max
where q is the quality of the image, and the expression (q==q_max) is 1 if the quality is maximum and 0 otherwise. The constant η is used to amplify the reward when the robot reaches the maximum image quality (i.e., q=q_max). This image quality-based reward guides the self-supervised learning of the ultrasound policy.
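A minimal Python sketch of this reward is given below; the force threshold f_min and the amplification constant η used here are illustrative values, not those used in the experiments.

def ultrasound_reward(quality, force_z, q_max=5, f_min=2.0, eta=10.0):
    # Image-quality reward for self-supervised practice.
    # quality : integer rating in [1, q_max] from the image-quality classifier.
    # force_z : measured probe contact force along z (N); f_min and eta are
    #           illustrative assumptions.
    q = 0 if force_z < f_min else quality      # rating 0 without proper contact
    bonus = eta if q == q_max else 0.0         # amplified reward at maximum quality
    return bonus + q / q_max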
§.§ Coaching with sparse expert's feedback
The coaching scenario for RUS is shown in Fig. <ref>. It is treated as learning a hidden goal (g^*) by observing the corrective actions (a^c) provided by the expert. We develop a formalism for representing coaching as a partially observable dynamical system. Coaching aims to improve the objectives and parameters through local corrections to the trajectory.
RUS can only observe the coach's corrections a^c and its actions a^r. It acts according to its optimal policy (π_θ), however, the coach expects it to operate with respect to a true objective whose optimal actions are determined by π_θ^*. The coach does not directly provide parameters θ^*, nor does the RUS know π_θ^*.
In DRL policy learning, we assumed that the goal states (g) are known and the reward is computed based on eq. (2). However, a correction from the coach implies that the goals and the policy need to be updated. If the current policy parameters θ = θ^*, then the formulation is an MDP where the robot is already behaving optimally with respect to expectations. However, when θ≠θ^*, the robot cannot directly observe θ^* to update its policy. Further, the robot cannot observe the coach's expected goals g^* as well. Therefore, the uncertainty in the objectives g^* and corresponding policy parameters θ^* turns this into a Partially Observable Markov Decision Process (POMDP).
§.§.§ POMDP model
In this POMDP, g^* forms the hidden part of the state s̅, and the coach's corrective actions a^c are observations about g^* and θ^* under some observation model as 𝒪(a^c | s̅^* = (s, g^*), a^r). The observation of the coach's correction allows the robot to learn the true objective. The coach's feedback is modeled as corrections that optimize the expected return from the state s̅ while taking action a^r+a^c. The action-value function (Q) captures this expected return. Thus, it can be written as:
𝒪(a^c|s̅,a^r) ∝ e^Q_π_θ(s̅,a^r+a^c)
The relationship for the observation model indicates that the coach provides feedback,
which together with the robot’s action, will lead to the desired behavior. And similar to
human coaching, the robot is expected to continuously learn a better objective by observing
the feedback. This formal approach captures the true essence of human coaching. The uncertainty is in the estimate of the desired goals g^* and the corresponding policy parameters θ^*.
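In practice, this observation model can be evaluated by normalizing exp(Q) over a discretized set of candidate corrections, as in the Python sketch below; the Q-function handle, the candidate set, and the rationality coefficient β are assumptions made for illustration.

import numpy as np

def correction_likelihood(q_fn, state, robot_action, candidate_corrections, beta=1.0):
    # O(a_c | s, a_r) proportional to exp(beta * Q(s, a_r + a_c)), normalized over a
    # discretized set of candidate corrections; q_fn and the candidate set are
    # stand-ins for the learned action-value function and a sampled correction grid.
    q_vals = np.array([q_fn(state, robot_action + a_c) for a_c in candidate_corrections])
    logits = beta * (q_vals - q_vals.max())    # subtract the max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# A belief over candidate goals g can then be updated with Bayes' rule,
# b'(g) ~ O(a_c | s, a_r; g) * b(g), by evaluating Q under each candidate goal.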
The environmental state s, which is part of the POMDP state s̅, is assumed to be fully observable, similar to POMDP-lite <cit.>. However, the robot cannot observe the goal part of the state s̅. Instead, the robot can learn an action-value function over belief states b(s̅) as follows:
Q^*(b, a^c+a^r) = 𝔼[r(b, a^c + a^r) + 𝔼_b^'[V*(b')]]
Note that solving a POMDP with continuous actions and states is expensive and usually intractable. Several approaches to estimate and approximate solutions for a POMDP have been explored in the literature, such as hindsight optimization <cit.> or reduction to QMDP <cit.>. We provide an approximation to simultaneously update the policy parameters θ and the action value at the corrected states.
§.§.§ Approximate solution
In order to solve the POMDP, we transform it into a policy update problem. Several approximations are defined to achieve the policy update, which include trajectory correction, reward formulation, and policy parameter computation. These approximations provide an elegant solution to the POMDP model.
Trajectory correction : The Q-value shown in eq. (<ref>) cannot be computed for continuous state and action spaces. Therefore, the reasoning is done in the trajectory space instead of the control action space. The trajectory space is defined based on pose and force profiles resulting from the DRL policy learning. First, the robot estimates a trajectory based on the learned policy π_θ(a|s) derived from the observed goal g. The robot then utilizes an in-built hybrid force-position controller to track this trajectory. Instead of computing the Q-value for each action, we can estimate the total reward resulting from following this trajectory Π^r.
Fig. <ref> shows an example of a scenario where the robot is following the trajectory determined by the DRL policy. The robot trajectory can be represented as a sequence of control inputs resulting from the actions generated by π_θ.
The coach can apply a correction at any instant after some duration of DRL policy learning. This corrective action a^c is provided at a point p^c on the robot trajectory. However, the point correction has to be propagated across a local region of the trajectory. A trajectory optimizer is used to obtain the coach-preferred trajectory. The coach-preferred trajectory (Π^c) is obtained by smoothly deforming the robot trajectory (Π^r) locally using minimum jerk constraints, as follows:
Π^c = Π^r + μΠ^o(a^c)
where μ>0 is a scaling factor for the deformation and Π^o is the offset to the current trajectory based on the action a^c, computed using minimum jerk optimization. The minimum jerk trajectory is obtained as the trajectory Π^o that minimizes the integral of the squared jerk over time. The solution to this optimization is a trajectory represented by a quintic polynomial Π^o_t = ∑_i=0^5k_i t^i. Here, t is the time, and k_i are coefficients of the polynomial determined by the control points and boundary conditions. The first three coefficients can be determined from the initial position, velocity, and acceleration at t = 0. Similarly, the last three can be estimated from the final (target) position, velocity, and acceleration. For offsetting the robot trajectory Π^r, we generate a piecewise minimum jerk trajectory with control points at p^c and at two points on either side of p^c, spaced at a time distance determined by the scale of the correction. Once the trajectory is updated, we can run the robot along the new trajectory and collect the states observed while moving on the expert-preferred trajectory.
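As an illustration of the quintic minimum-jerk segment Π^o_t = ∑_i=0^5 k_i t^i, the sketch below computes the six coefficients from the boundary position, velocity and acceleration at t=0 and t=T and evaluates the polynomial. The closed-form expressions follow from the six boundary conditions; the choice of control points, the spacing around p^c and the scaling factor μ are assumed to be handled by the caller, since those details depend on the scale of the correction.

#include <stdio.h>

/* Coefficients k0..k5 of x(t) = sum_i k_i t^i on [0, T] satisfying
 * x(0)=x0, x'(0)=v0, x''(0)=a0 and x(T)=xT, x'(T)=vT, x''(T)=aT.
 * Minimizing integrated squared jerk under these boundary conditions gives a
 * quintic; the closed form below solves the remaining 3x3 system for k3..k5. */
void min_jerk_coeffs(double x0, double v0, double a0,
                     double xT, double vT, double aT,
                     double T, double k[6]) {
    double d = xT - x0, T2 = T * T, T3 = T2 * T, T4 = T3 * T, T5 = T4 * T;
    k[0] = x0;
    k[1] = v0;
    k[2] = a0 / 2.0;
    k[3] = ( 20.0 * d - (8.0 * vT + 12.0 * v0) * T - (3.0 * a0 - aT) * T2) / (2.0 * T3);
    k[4] = (-30.0 * d + (14.0 * vT + 16.0 * v0) * T + (3.0 * a0 - 2.0 * aT) * T2) / (2.0 * T4);
    k[5] = ( 12.0 * d - 6.0 * (vT + v0) * T + (aT - a0) * T2) / (2.0 * T5);
}

double eval_poly(const double k[6], double t) {
    double x = 0.0, tp = 1.0;
    for (int i = 0; i < 6; i++) { x += k[i] * tp; tp *= t; }
    return x;
}

int main(void) {
    /* rest-to-rest segment of length 1 over 1 second: the classic 10t^3-15t^4+6t^5 */
    double k[6];
    min_jerk_coeffs(0, 0, 0, 1, 0, 0, 1.0, k);
    for (double t = 0.0; t <= 1.0001; t += 0.25)
        printf("x(%.2f) = %.4f\n", t, eval_poly(k, t));
    return 0;
}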
Reward modification:
The coach-preferred trajectory is only a trajectory offset based on the corrective action; the robot has not yet learned how to leverage the knowledge from the preferred trajectory to improve the policy. We propose augmenting the reward objectives with two components, a coach reward r_c(s̅, a) and a trajectory reward r_Π(s̅,a), as follows:
r(s̅,a) = w_u r_u(s̅,a) + w_c r_c(s̅,a) + w_Π r_Π(s̅,a)
where w_* is the weight associated with the corresponding reward r_*. The coach reward r_c associates the states observed by following the coach-preferred trajectory Π^c with a small positive reward. The trajectory reward r_Π penalizes large changes in poses or forces to ensure that the policy generates a smoother path. The reward weights w_* are learned by maximizing the return so that the preferred trajectory is chosen over the old robot trajectory.
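A minimal sketch of the augmented reward is given below. The specific distance threshold, penalty form and weights are illustrative assumptions; only the overall structure r = w_u r_u + w_c r_c + w_Π r_Π comes from the text.

/* Combined reward r = w_u*r_u + w_c*r_c + w_p*r_p (eq. (<ref>)).
 * r_u : original image-quality based reward, supplied by the caller.
 * r_c : small positive bonus when the current pose is close to a state on
 *       the coach-preferred trajectory (assumed 1 cm threshold).
 * r_p : penalty on large pose/force changes, encouraging smooth motion. */
double combined_reward(double r_u,
                       double dist_to_coach_traj,
                       double pose_change, double force_change,
                       double w_u, double w_c, double w_p) {
    double r_c = (dist_to_coach_traj < 0.01) ? 0.1 : 0.0;
    double r_p = -(pose_change * pose_change + 0.01 * force_change * force_change);
    return w_u * r_u + w_c * r_c + w_p * r_p;
}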
Policy update:
In addition to updating the reward or goals, the policy parameters are offset to move towards the coach’s optimal policy π_θ^*. Several iterations of coaching will generate multiple desired states and segments of preferred trajectories. The state transitions, the corresponding rewards from the new reward model, and the actions along these trajectories are stored in a separate replay buffer, named as coach replay buffer. Then, an approximation of the desired optimal policy is computed. The approximate policy π_θ̂(a|s) generates the actions needed to move along the preferred trajectories. The parameters of the policy are represented as θ̂^* because they are an approximation of the true optimal parameters θ^*. The policy can be written as
π_θ̂^*(a|s) →π(a|s, θ̂^*). A Gaussian distribution is used for the approximate policy, i.e., π(a|s, θ̂^*) ∼𝒩(μ(θ̂^*), σ(θ̂^*)). We know the sequence of states s on the expert-preferred trajectories and the corresponding action to take in these states. Therefore, we aim to estimate the policy parameters θ̂^* that fit this state and action data. The parameters are estimated using maximum likelihood estimation. Once the approximate policy is found, the current policy is regularly updated by performing gradient steps of SAC on the coach replay buffer. For the policy update during SAC, we include an additional loss based on KL divergence between the robot’s policy π_θ(a|s) and the approximation π(a|s,θ̂^*).
ℒ_KL = 𝐃_KL(π_θ(a|s) || π(a|s, θ̂^*))
This KL divergence loss ensures actions are taken according to the coach's preferences in the preferred states.
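Because both the current policy and the coach approximation are Gaussian, the maximum likelihood fit and the KL term have closed forms. The sketch below shows both for a single action dimension; it is an illustration of the computation, not the authors' SAC implementation, and the sample corrective actions are hypothetical.

#include <math.h>
#include <stdio.h>

/* Maximum-likelihood fit of a 1-D Gaussian to the actions taken along the
 * coach-preferred trajectory (one action dimension shown for simplicity). */
void fit_gaussian(const double *a, int n, double *mu, double *sigma) {
    double s = 0.0, ss = 0.0;
    for (int i = 0; i < n; i++) s += a[i];
    *mu = s / n;
    for (int i = 0; i < n; i++) ss += (a[i] - *mu) * (a[i] - *mu);
    *sigma = sqrt(ss / n) + 1e-6;      /* small floor to avoid zero variance */
}

/* KL( N(mu1,s1) || N(mu2,s2) ), the closed form used for the extra loss term. */
double kl_gaussian(double mu1, double s1, double mu2, double s2) {
    return log(s2 / s1) + (s1 * s1 + (mu1 - mu2) * (mu1 - mu2)) / (2.0 * s2 * s2) - 0.5;
}

int main(void) {
    double coach_actions[5] = {0.02, 0.025, 0.018, 0.030, 0.022};  /* hypothetical data */
    double mu_hat, sigma_hat;
    fit_gaussian(coach_actions, 5, &mu_hat, &sigma_hat);
    /* robot policy's (mu, sigma) in this state, assumed to come from the actor network */
    double kl = kl_gaussian(0.0, 0.05, mu_hat, sigma_hat);
    printf("fitted mu=%.4f sigma=%.4f  KL=%.4f\n", mu_hat, sigma_hat, kl);
    return 0;
}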
§ RESULTS AND DISCUSSIONS
§.§ Experimental setup
The experimental setup is shown in Fig <ref>. It consists of a 7-DOF Rethink Robotics Sawyer arm, with a Micro Convex MC10-5R10S-3 probe by Telemed Medical Systems, USA, attached to its end-effector using a custom-designed gripper. The robot has a wrist-mounted force-torque sensor, which was used to measure the forces. Two urinary bladder phantoms, P0 and P1, were used for scanning. P1 is a modified variant of P0 with a ballistic gel layer. ROS was used as the middleware to transmit images and commands across the devices.
§.§ Implementation details
To analyze the effect of coaching on learning performance, we conducted experiments by inducing coaching corrections at different iterations of training.
Four policies were trained for the following cases: (i) No Coaching; Coached after every (ii) 20k, (iii) 10k, (iv) 5k timesteps. The policy with no coaching was based on the reward in eq. (2) and the policies with coaching were based on the combined reward in eq. (6). Each policy was trained for a maximum of 200k steps or until convergence, approximately equal to eight hours of wall clock time. The robot was initialized to a random pose for each episode of training. An episode of training ends upon either achieving a high-quality image or reaching the 50-step limit. The value of η in eq. (<ref>) is empirically set to 10 to maximize performance. The action space limits are set as x ∈ (-0.05, 0.05)m, y ∈ (-0.03, 0.03)m, f_z ∈ (5, 30)N, roll ∈ (-0.2, 0.2)rad, pitch ∈ (-0.2, 0.2)rad,
and yaw ∈ (-0.5, 0.5)rad. The roll, pitch and yaw angles were carefully selected to ensure collision-free scanning of the phantoms. The hybrid position-force control mode of Sawyer was used to control the robot.
The coaching corrections are provided by human experts through kinesthetic interactions with the robotic arm. At the instant of interaction, the robot movement is paused, which was hardcoded in the training script. Then, the expert activated the free-drive mode of the robot by pressing the button provided on the wrist and nudged the robot toward the optimal trajectory. Once the expert finished the coaching, which is indicated by a high-quality image (q≥4), the training script was re-initialized from the same time step. Note that the policy weights were updated before re-initialization.
§.§ Performance evaluation
Fig. <ref> compares the normalized average reward over the training timesteps for different policies. One of the primary issues associated with the “no coaching" policy was the tendency to explore sub-optimal states (i.e., low image quality regions). The poses and forces necessary for producing high-quality images constitute a narrow domain within the action space. Consequently, the “no coaching" policy often chooses inappropriate actions (probe poses and force). To correct this behavior, the coach intervened by adjusting either the probe position or orientation relative to the phantom surface, or by modifying the force applied to the phantom. After coaching, the reward from the resulting policy demonstrated improvement at each step of the training, wherever it was introduced. Coaching also improved the learning efficiency, as seen by the faster convergence of coached policies in Fig. <ref>. In the absence of coaching, the policy failed to approach the optimal region even after 100k steps and showed no convergence even at 200k steps. In contrast, policies with coaching introduced at intervals of 20k and 10k timesteps demonstrated convergence at approximately 150k (↓∼25%) and 120k (↓∼40%) steps, respectively. The policy receiving maximum coaching (every 5k steps) converged at 100k steps. These findings underscore the efficacy of coaching in identifying optimal policies at a faster learning rate.
Once the policies were trained, we compared the performance of learned policies during execution. Each policy is executed for 10 trials, and each trial has a maximum of 50 steps. The following metrics are used for comparison: (i) Number of High-Quality Images (HQI) sampled (q≥4), (ii) First instance of HQI sampling, (iii) Errors in probe motion. The metric (iii) measures the error between probe position (p), orientation (o) and force (f_z) at the step with the best reward and their corresponding Ground Truth (GT) values. The evaluated mean ± standard deviation values of these metrics are given in Table <ref>.
For Phantom P0, the results showed an improvement in HQI from 2.1 to 8.2 (↑74.4%) with coaching. Specifically, for the test phantom P1, the “no coaching" policy resulted in a low HQI of 1.5. However, the policy with coaching after every 20k steps resulted in the HQI of 5.9 (↑74.6%). The error in probe motion (p, o and f_z) reduced by 31.6%, 26.6% and 59.0%, with a mean error reduction of ∼40%. The performance improved further when coaching was used more frequently, verifying its effectiveness for training of RUS.
§ CONCLUSION AND FUTURE WORK
This paper presents a coaching framework for RUS to improve the learning efficiency and accuracy of ultrasound image acquisition. Unlike previously proposed LfD methods for RUS, this framework leveraged real-time feedback from experts during the training process. Coaching is modeled as a Partially Observable Markov Decision Process (POMDP), which approximates the trajectory-based corrections from the expert to update the reward, objectives, and parameters of the DRL policy. When tested on an ultrasound scan of a urinary bladder phantom, this methodology improved the convergence rate of the learned policy by 25% and increased the number of high-quality image acquisitions by 74.4%. Future work will explore further improvement in performance through policy priors derived using offline expert demonstrations, as done in our previous works <cit.>.
IEEEtran
|
http://arxiv.org/abs/2409.03152v1 | 20240905005927 | A Brief Overview of the Pawns Programming Language | [
"Lee Naish"
] | cs.PL | [
"cs.PL"
] |
Lee Naish
University of Melbourne, Melbourne 3010, Australia
[email protected],
A brief overview of the Pawns
programming language
Lee Naish
Received: 20 June 2024 / Accepted: 05 July 2024
===================================================
September 9, 2024
§ ABSTRACT
Pawns is a programming language under development which supports pure
functional programming (including algebraic data types, higher order
programming and parametric polymorphism) and imperative programming
(including pointers, destructive update of shared data structures and
global variables), integrated so each can call the other and with purity
checked by the compiler. For pure functional code the programmer need not
understand the representation of the data structures. For imperative code
the representation must be understood and all effects and dependencies
must be documented in the code. For example, if a function may update one
of its arguments, this must be declared in the function type signature
and noted where the function is called. A single update operation may
affect several variables due to sharing of representations (pointer
aliasing). Pawns code requires all affected variables to be annotated
wherever they may be updated and information about sharing to be declared.
Annotations are also required where IO or other global variables are used
and this must be declared in type signatures as well. Sharing analysis,
performed by the compiler, is the key to many aspects of Pawns. It
enables us to check that all effects are made obvious in the source
code, effects can be encapsulated inside a pure interface and effects
can be used safely in the presence of polymorphism.
Keywords: functional programming language, destructive update, mutability,
effects, algebraic data type, sharing analysis
§ INTRODUCTION
This paper briefly describes the main features Pawns, a programming
language that is currently under development. The aim is to convey
a feel for the general ideas; <cit.> does the same but
includes significantly more detail, discussion of language design
issues and citation of related work. We assume the reader is familiar
with Haskell and C. Pawns supports pure functional programming with
strict evaluation, algebraic data types, parametric polymorphism,
and higher order programming. It also supports “impure" code, such as
using state (including IO) and destructive update of all compound data
types via pointers (references or “refs" for short) but all such
code is highlighted by “!" annotations. A call to a function that
relies on state must be prefixed by “!"; the details of the state(s)
are declared in the type signature. Additionally, variables that are
updated must be prefixed with “!". A function call with no “!" is
guaranteed to behave as a pure function, though Pawns allows impurity
to be encapsulated (and checked by the compiler), so the function
may be implemented using impure features. The representations of
different variables can be shared, so updating one variable may also
update other variables and the Pawns compiler checks that all relevant
variables are annotated with “!" at that point in the source code:
Pawns is an acronym for “Pointer assignment without nasty surprises"
and its most important (and complex) innovation is the way update of
shared data structures is supported and how pure and impure code can
be mixed. Impure programming in Pawns can be like programming in C,
with destructive update of fields of structs representing ADT values
and performance equal to or better than portable C. However, there no
unsafe operations (such as dereferencing possibly NULL pointers, casts,
acessing fields of unions, etc) and all interractions/dependencies due
to sharing must be documented in annotations/declarations.
The rest of this paper is structured as follows.
Section <ref> gives a simple example of pure functional
programming.
Section <ref> describes how destructive update is done in
Pawns and gives some information about data representation.
Section <ref> gives a two examples of code using destructive
update in Pawns, mentioning sharing of data structures but deferring the
details of how sharing is handled.
Section <ref> discusses the distinction between data
structures that can be simply viewed as abstract values (typical in pure
code) and those for which sharing must be understood (a necessity when
destructive update is used).
Section <ref> discusses how sharing and destructive update
information is incorporated into Pawns type signatures and the kind of
sharing analysis done by the compiler.
Section <ref> presents how IO and other forms of “state” can
be used in Pawns.
Section <ref> discusses a Pawns feature that allows renaming of
functions so different type signatures can be given, overcoming some of
limitations of polymorphism, particularly for impure Pawns code.
Section <ref> briefly discusses some of the additional complications
surrounding safety in Pawns.
Section <ref> concludes.
§ PURE FUNCTIONAL PROGRAMMING EXAMPLE - BST CREATION
Consider the task of converting a list of integers into a binary search
tree. Pawns supports typical pure functional programming solutions
such as Figure <ref>, presented using Haskell-like
syntax[Pawns currently only supports a temporary syntax, to
avoid decisions on syntax and the need to write a parser]. Note the use
of polymorphic algebraic data types and the polymorphic higher order
function ; Pawns does not currently support type classes
or existential types.
An advantage of this style of programming is that it is not necessary to
understand how values are represented in order to write and reason about
the code. However, insertion builds a new node at
each level of the tree visited so it is much less efficient (a factor of
around twenty in our experiments) than just using destructive update
when a leaf is reached. BST creation is unlikely to be a major time
component of any application but we will use this as a simple example
of how destructive update can be used in Pawns.
§ REPRESENTATION AND DESTRUCTIVE UPDATE OF VALUES
The key thing to note about data representation and update in Pawns is
that arguments of data constructors are stored in main memory and
these are the only things that can be updated. The data constructors
themselves are like pointers (they may be a pointer plus a “tag"
or, for data constructors with no arguments, they may just be a small
integer). The list is represented as a pointer
to two memory cells, containing the integer 42 and a small number that
represents the empty list, respectively - the same as a linked list in
C. Similarly, a BST is represented essentially using pointers to structs
with three fields. For types that have more than one data constructor
with arguments (such as the cord data structure discussed in Section
<ref>) the representation uses tags and is more efficient than
portable C code; see <cit.> for details. Pawns allows the kind
of programming we can do in C with pointers to structs and assignment
to fields of structs. There is also additional flexibility because an
ADT can have any number of data constructors with arguments (which is
like having a pointer to any number of different struct types) and any
number of data constructors with no arguments (like have any number of
different NULL values) and all operations are safe (no dereferencing of
NULL values, no casts, et cetera).
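For readers more comfortable with C, the declarations below approximate the memory layout being described: a cons cell is two cells (head and tail) and a BST node is three cells. The exact tag scheme and allocation strategy used by the Pawns compiler are not given in this paper, so treat this only as a rough model.

#include <stdlib.h>

/* A cons cell: two memory cells holding the head and the tail pointer.
 * The empty list is a small integer constant rather than a pointer;
 * NULL plays that role here for simplicity. */
struct cons { long head; struct cons *tail; };

/* A BST node: three cells (left subtree, element, right subtree).
 * An empty tree is again a small integer constant, modelled as NULL. */
struct bst { struct bst *left; long elem; struct bst *right; };

struct cons *mk_cons(long x, struct cons *xs) {
    struct cons *c = malloc(sizeof *c);
    c->head = x; c->tail = xs;
    return c;
}

struct bst *mk_node(struct bst *l, long x, struct bst *r) {
    struct bst *t = malloc(sizeof *t);
    t->left = l; t->elem = x; t->right = r;
    return t;
}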
Pawns variables are not names for memory locations that can be updated —
it is not possible to assign to an existing variable or get a “pointer
to a variable” as you can in C. However, the representation of the value
of a variable may have mutable components. For example, a variable whose
value is will always be a pointer to
the same two memory cells but the content of these cells can potentially
be updated, changing the overall value of the variable. All update
is done via special pointer (ref) types (similar to in
Haskell and in ML). There is a polymorphic type that is a pointer to a memory cell containing a value of type
. You can think of the memory cell as the argument of the
data constructor for the type, thus it can be updated.
However, Pawns code never uses an explicit data constructor for refs
but instead just uses a dereference operator, “", like C. If
is a Pawns expression of type then
is the value of type that points to. There are no
refs.
The simplest way to create a ref is by using a let binding with
prefixing the let-bound variable. The “let” and “in” keywords of
Haskell are not required in Pawns and “;" is used for sequencing, thus
creates two variables, the first of which equals
42 and the second points to a newly allocated memory cell containing 42
(similar to the Haskell monadic code , or an
ML let expression with ). Destructive update is done
by dereferencing a pointer on the left of the “" (assignment)
operator. All variables[More precicely, all live variables;
those which are never used again can generally be ignored.] that are affected
must be prefixed by “!". Typically there will be a pointer variable on
the left (so is written ) but there
may also be other variables that share its representation; these can be
annotated with “!" at the right of the statement. Figure <ref>
has a simple example.
Without the “" annotation, both and
would be bound to with no intervening occurrence of
in the code, yet they end up with different values. This is typical
of the potentially confusing “surprises” encountered in languages
that support code for destructive update with pointer aliasing and
shared data structures, which is needed for many important algorithms.
Pawns supports such code but insists that the programmer documents sharing
and effects, in a way that can be checked by the compiler.
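The corresponding "surprise" is easy to reproduce in C: once two variables share a cell, an update through one is visible through the other, which is exactly the situation the "!" annotations are designed to make explicit. The sketch below is an approximation of the example, not the Pawns code of the figure.

#include <stdio.h>
#include <stdlib.h>

struct cons { long head; struct cons *tail; };

int main(void) {
    struct cons *np = malloc(sizeof *np);   /* np points at a cons cell holding 42 */
    np->head = 42; np->tail = NULL;
    struct cons *lst = np;                  /* lst shares np's representation      */
    np->head = 21;                          /* update through np ...               */
    printf("%ld\n", lst->head);             /* ... also changes lst: prints 21     */
    return 0;
}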
Just as prefixing a variable with in a let binding creates a
pointer variable, the same can be done with pattern bindings. These
“dereference patterns" are an important innovation of Pawns.
For example, the code for could be rewritten
as in Figure <ref>. Instead of the pattern
matching with a creating variables of type BST and Int,
it creates variables of type and ,
which are pointers to the arguments of the data constructor.
Refs are created but no extra memory cells are allocated and no monads or
changes to the type are required; there is no equivalent in
languages such as Haskell and ML. The subsequent code simply dereferences
the pointers to obtain the same values as before and the code is pure —
refs/pointers themselves do not introduce impurity. However, such pointers
could potentially be used to destructively update the
arguments (which is impure).
§ DESTRUCTIVE UPDATE EXAMPLES
We now give two short examples of using destructive update in Pawns.
The first is an alternative way to construct a BST and the second
is an example where the sharing of data structures is more complex.
Building a BST from a list of integers can be done very efficiently by
first allocating a memory cell containing an empty BST then repeatedly
traversing down the tree and destructively inserting the next integer as
a new leaf — see Figure <ref>. Both
and simply return void because the tree is updated
in situ but because of the destructive update (they are not pure
functions), Pawns insists more information is provided in their type
signatures; we will discuss this in Section <ref>. However,
behaves as a pure function, indistinguishable from
, even though it is defined in terms of impure
functions (and is far more efficient). To construct the BST it is
necessary to consider low level details such as the representation
of the tree and any sharing present but after it is returned from
it can be treated as an abstract BST value and safely
used by pure code. We are not aware of other functional programming
languages that can encapsulate destructive update in this way.
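The in-place construction corresponds closely to the following C sketch, in which a pointer to the current subtree cell is walked down to an empty slot that is then overwritten with a new leaf. The function names and the use of NULL for the empty tree are assumptions made for the sketch, not the Pawns code of the figure.

#include <stdlib.h>

struct bst { struct bst *left; long elem; struct bst *right; };

/* Walk down from the cell *tp until an empty subtree is found, then
 * destructively overwrite that cell with a freshly allocated leaf. */
void bst_insert_du(struct bst **tp, long x) {
    while (*tp != NULL)
        tp = (x <= (*tp)->elem) ? &(*tp)->left : &(*tp)->right;
    struct bst *leaf = malloc(sizeof *leaf);
    leaf->left = NULL; leaf->elem = x; leaf->right = NULL;
    *tp = leaf;
}

/* Build a tree by repeated destructive insertion, starting from a cell
 * that holds the empty tree, mirroring the list-to-BST function above. */
struct bst *array_to_bst_du(const long *xs, int n) {
    struct bst *t = NULL;
    for (int i = 0; i < n; i++)
        bst_insert_du(&t, xs[i]);
    return t;
}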
In the second example we use another form of tree, for representing
cords. Cords are data types which support similar operations to lists,
but concatenation can be done in constant time. A common use involves
building a cord while traversing a data structure then converting the
cord into a list in O(N) time, after which the cord is no longer used.
Here we use a simple cord design: a binary tree containing lists at
the leaves and no data in internal nodes. Creating a cord from a list
plus append and prepend operations can all be done simply by applying
data constructors.
To convert such a cord to a list, a purely functional program would
typically copy each cons cell in each list. A C programmer is likely
to consider the following more efficient algorithm, which destructively
concatenates all the lists without allocating any cons cells or copying
their contents. For each list in the tree other than the rightmost
one, the pointer at the end of the list is replaced with
a pointer to the first cell of the next list; the first list is then
returned (note this destroys the cord). This algorithm can be coded
in Pawns – see Figure <ref>. The function
creates a pointer to an empty list and calls , which
traverses the cord, updating this list (and the cord), then the list
is returned. is recursive and is always called with
a pointer to a , which is updated with the concatenated
lists from the cord, and it returns a pointer to the in
the updated list. For now we assume there are only lists of Ints (we
will briefly discuss impurity and polymorphism in Section
<ref>).
Compared to pure coding, this kind of coding is complicated and prone
to subtle bugs and assumptions (thus best avoided except where the added
efficiency is important). It may seem that there are several redundent
“!" annotations but the Pawns compiler will complain without them.
For example, in the first recursive call to ,
with , the compiler insists that is
annotated. Although the analysis done by the compiler is unavoidably
conservative and sometimes results in false alarms, in this case it is
correct. It is possible the lists in the two branches of the cord may
share representations and if this is the case a cyclic list is created
and the code does not work! The same can occur if is
called with and sharing, instead of
pointing to an independent . The compiler insisting on extra
annotations hopefully alerts the programmer to these subtleties, leading
to better documentation and defensive coding to avoid the potential bug.
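A C version of the destructive cord-to-list algorithm is sketched below. The concrete cord representation (a tagged struct with a list at each leaf) is an assumption made for the sketch; as in the Pawns version, the code relies on the lists at different leaves not sharing cells, otherwise a cyclic list is created.

#include <stdlib.h>

struct cons { long head; struct cons *tail; };

/* A cord: either a leaf holding a list or a branch with two sub-cords. */
struct cord {
    enum { LEAF, BRANCH } tag;
    struct cons *list;                  /* used when tag == LEAF   */
    struct cord *left, *right;          /* used when tag == BRANCH */
};

/* *lp is the cell that should receive the concatenation of all lists in c;
 * the return value points at the final empty-list cell of the result, so
 * the caller can keep splicing.  No cons cells are allocated or copied. */
struct cons **cord_to_list_du(struct cord *c, struct cons **lp) {
    if (c->tag == LEAF) {
        *lp = c->list;                  /* splice the leaf's list in place */
        while (*lp != NULL)             /* walk to the end of that list    */
            lp = &(*lp)->tail;
        return lp;
    }
    lp = cord_to_list_du(c->left, lp);       /* left lists first ...       */
    return cord_to_list_du(c->right, lp);    /* ... then the right lists   */
}

struct cons *cord_to_list(struct cord *c) {
    struct cons *result = NULL;
    cord_to_list_du(c, &result);
    return result;
}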
§ PURITY AND ABSTRACTION
The distinction between pure and impure code can be blurred. For
example, some “impure” code can be given “pure” semantics by
introducing/renaming variables, adding function arguments et
cetera. However, Pawns makes a different important distinction,
between data structures that are “abstract” (values for which the
representation is not important and may not be known) versus “concrete”
(where the representation, including sharing, may be important and
should be understood by the programmer). Only concrete data structures
can be updated. Abstract data structures are normally associated with
pure code and concrete data structures with impure code but this is not
always the case.
Consider the function of Figure <ref>. It takes
a pointer to a list, has no effects and always returns a pointer to
, so in that sense it is pure (note that pointers themselves
are not impure). However, for the destructive update code that uses
, it matters which is pointed to in
the result. If allocated a new memory cell, initialised it
to and returned a pointer to this , the result
would be identical from an abstract perspective but the
code would not work. Thus although can be considered
pure, it must work with concrete data structures. Similarly, impure
functions can have abstract arguments and/or results (they cannot update
abstract arguments but may update other arguments).
When a data structure is created by applying a data constructor to
concrete arguments, the result is concrete. Concrete data structures can
become abstract when they are returned from a function (depending
on the type signature of the function) or if they are blended with
abstract data structures. For example, if the of a
concrete list is updated with an abstract list or is
applied to one or more abstract cords the result is abstract. Pawns uses
the sharing system to keep track of the distinction between abstract and
concrete (see Section <ref>). Pure code such as that in
Figure <ref> can be written without considering data
representation or sharing, but values returned from these functions will
be abstract and thus cannot be updated. Although the function of
Figure <ref> is pure, the type signature must contain explicit
sharing information because we need a concrete list pointer to be returned
— the representation is important and the data structure is intended to
be updated elsewhere.
§ SHARING ANALYSIS
The Pawns compiler does sharing analysis <cit.>
to approximate how variables share components of their representations
and determine what variables may be updated at each point during evaluation
of each function . It relies on knowing what sharing may exist
between arguments in calls to , what sharing may exist between
arguments and results of functions called by and what arguments
of these functions may be updated. Type signatures in Pawns code have
additional information to help this analysis. Specifically, they declare
which arguments may be updated, plus a “precondition" stating what sharing
between arguments may be present when the function is called and a
“postcondition" stating what additional sharing may be present beween
arguments plus the result when the function returns. As well as the
compiler checking there are sufficient “!" annotations, it checks that
whenever a function is called the precondition must be satisfied and when
a function returns the postcondition must be satisfied. Declaring this
additional information is a burden but it forces the programmer to think
about sharing in data structures that may be updated, documents sharing
for others reading or maintaining the code and helps the compiler conduct
analysis to check when destructive update can safely be encapsulated
inside pure code and used in the presence of polymorphism.
Preconditions can also be used to make code more robust. For example, they
can be used to declare that no sharing should exist between the arguments
of or the functions that build cords: ,
and . Code where such sharing
exists will then result in a compiler error message instead of incorrect
runtime behaviour.
Abstract data structures share with a special pseudo-variable named
abstract (there are different versions of this variable for different
types et cetera). For a function that contains no explicit
information concerning sharing, the default precondition and precondition
specify maximal possible sharing, including sharing with abstract. There
is no restriction on calls to such functions (preconditions are always
satisfied) but results share with abstract. Code that attempts to
update a variable that shares with abstract results in a compiler error.
Similarly, passing an abstract data structure to a function that expects a
concrete data structure result in an error. The data structure will share
with abstract but that will be at odds with precondition of the function.
A function that has no sharing declared can return a concrete data
structure. The implicit postcondition just specifies possible (not
definite) sharing with abstract. This allows code such as the definition
of , where the interface is pure and abstract but the
implementation uses destructive update of a concrete data structure. As
a general rule in programming, if a possibly shared data structure is
updated, the programmer should understand how it has been used, all the
way back to the points where it was created. In Pawns, this must be
documented in the code, by explicit declarations whenever it is passed
to or returned from a function, and these declarations are checked by the
compiler. At some later point we are free to treat it as an abstract value
and not concern ourselves with how it is represented or what may share
with it, but if this is done the value should not be updated further.
In Pawns, this is achieved by explicitly or implicitly adding sharing
with abstract.
Sharing is declared by augmenting type signatures with a pattern that
matches variables with the arguments and result of a function and pre-
and post-conditions that can use these variables. The pattern can also
prefix arguments by “” to indicate the argument may be
updated. Pre-conditions can use the arguments of the function (and
) to declare the maximal sharing allowed when the
function is called. Post-conditions can also use the result and
declare what additional sharing may be added during evaluation of
the function. The keyword is used to indicate no
sharing. Equations and other Pawns code (but not function calls) can be
used to indicate sharing between variables or components of variables
— see Figure <ref>.
The declaration for here is equivalent to the
declaration in Figure <ref> but the sharing with
is made explicit. For the other
construction code there is no sharing. Integers are atomic; with a more
complex data type for elements there would generally be sharing between
the list and tree elements and this would need to be declared. Note that
even with no sharing, it needs to be declared, along with the fact that
the is updated, otherwise sharing with
would be assumed and no update allowed. This applies equally to higher
order arguments such as that in .
The declarations for the cord code illustrate sharing of variables and
their components. Components of variables are discussed further below.
For , the postcondition states that the result, ,
and the argument, , may be equal (and hence share all
components). For , the postcondition states the result,
, may be a whose argument is , the
argument of the function. This is exactly what the function returns but,
due to the imprecision discussed below, it means the argument of any
data constructor in may equal . This
more general interpretation is required for . Similarly,
for , the precondition means a data
constructor argument of the cord may equal the list pointed to by
the second argument. The precondition of prevents
it introducing sharing between different lists in a cord, allowing the
compiler to reject code that has the bug mentioned earlier (the same
should be done for other cord construction functions). The postcondition
is inferred from the function definition — this is supported in Pawns
for definitions that are pure and contain no function calls (potentially,
all postconditions could be inferred but we feel this would detract from
the philosophy of Pawns, which makes sharing obvious in the source code
wherever it must be understood by programmers).
Sharing analysis is unavoidably imprecise but it is conservative,
generally over-estimating the amount of sharing. Potentially, code may
need to have more sharing declared than is actually the case and more
variables annotated with “!”. For each type, the sharing analysis uses a
domain that represents the memory cells that can be used for variables of
that type in the running program. For recursive types, the actual number
of memory cells can be unbounded, but “type folding” is used to reduce
it to a finite number. The domain distinguishes the different arguments
of different data constructors but where there is recursion in the type,
the potential nested components are all collapsed into one. For example,
for lists, there is a component for the head of the list and another for
the tail of the list but because lists are defined recursively, the head
component represents all elements of the list (all memory cells
that are the first argument of a in the list representation)
and the tail represents all tails.
For cords, there are five components: the two arguments of
, the argument of and the two arguments
of . Each left or right branch of a cord is a cord and
type folding makes the five components of the branches the same as the
top level cord. Thus for , the all five components
of may share with the respective components of ,
along with the two components representing arguments sharing
with the respective components of . Sharing analysis keeps
track of what components may exist for each variable. For example,
if a list variable is known to be it has no components at
that point in the sharing analysis. Also note that for two components
to share, they must have the same type and, unless they are pointers,
the same enclosing data constructor and argument. For example, the
argument of a cannot be the same memory location as the
second argument of a and sharing analysis respects this
distinction. However, we can have a pointer that points to either of
these locations, thus sharing analysis treats pointers/refs differently
from other data constructors.
§ IO AND STATE VARIABLES
Like destructive update, IO does not fit easily with pure functional
programming. Pawns models IO by using a value, representing the
state of the world, which is conceptually passed in and returned
from all computations that perform IO. Rather than explicitly using
an extra argument and a tuple for results, is declared as
“implicit” in the type signature of functions (and nothing is actually
passed around). Pawns allows other “state variables” to be defined
and (conceptually) passed around in the same way. In function type
signatures, they can be declared as “ro” (read only — as if they
are passed in as an argument to the function), “wo” (write only —
as if they are initialised/bound by the function and returned) or
“rw” (read and written). The state variable is bound
before the function of a Pawns program is called and all
the primitive IO functions have in their type
signatures; other state variables must be explicitly bound/initialised
before being used. The state variable feature of Pawns is designed
so that pure functional semantics could be defined. However,
calls to functions with implicit arguments/results must be prefixed
by to highlight the fact than there is more going on in
the code than meets the eye, whether or not it is considered pure.
State variables are declared like type signatures of functions except
they are prefixed with and must have a type
(they point to a statically allocated memory cell and can be used for
destructive update like other pointers). They can only be used in code
after a function has been called or in functions where they
are declared implicit in the type signature.
Figure <ref> gives a simple example of summing the elements in
a BST using a state variable instead of passing additional
arguments and results. Although behaves as a pure function,
as the type signature implies, internally it uses to
bind/initialise the state variable, which is updated as
traverses the BST and then its final value is returned. State variables
are similar to mutable global variables in a language such as C but the
code makes it clear when the variables may be used/updated and they can
be encapsulated in a purely functional interface. For example, although
calls (which zeros before
traversing the right subtree), Pawns ensures this does not interfere
with the value in the outer computation.
Functions can have multiple state variables declared as implicit arguments
with no additional complications. There is no ordering required for the
state variables, making some coding simpler compared to mechanisms other
languages use for threading multiple kinds of state in a pure way (such
as nested monads in Haskell). A disadvantage of using state variables is the
code is harder to re-use because it is tied to specific state variables
rather than types. State variables and their components can share and be
updated in the same way as other Pawns variables. The only additional
restriction is that a state variable (or its alias) must not be passed to
code where the state variable is undefined (for example, be passed as an
argument or returned as a result of a function where the state variable
is not declared as an implicit argument). Thus in Figure
<ref> can return but not itself,
even if the return type and/or the type of was changed.
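In C terms a state variable behaves like a mutable global that is threaded implicitly. The sketch below sums a BST into a global accumulator and hides that global behind a function with a pure-looking interface, which is roughly the effect described above, except that in Pawns the dependence is declared in type signatures and checked by the compiler; the save/restore step stands in for the encapsulation Pawns provides.

struct bst { struct bst *left; long elem; struct bst *right; };

/* The "state variable": a global cell that the traversal code updates. */
static long sum_acc;

void add_tree(const struct bst *t) {
    if (t == NULL) return;
    add_tree(t->left);
    sum_acc += t->elem;        /* destructive update of the global state */
    add_tree(t->right);
}

/* Pure-looking wrapper: initialises the state, runs the traversal and
 * returns the final value, so callers never see the global directly. */
long bst_sum(const struct bst *t) {
    long saved = sum_acc;      /* save and restore so nested uses do not interfere */
    sum_acc = 0;
    add_tree(t);
    long result = sum_acc;
    sum_acc = saved;
    return result;
}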
§ POLYMORPHISM AND RENAMING
Sharing in Pawns is not polymorphic to the same extent as
types. Similarly, code that uses a state variable is specific to that
state variable rather than something more general such as the monad type
class in Haskell. For a function such as , the second
and third arguments do not have identical types declared and Pawns does
not allow any sharing to be declared between them. However, for some
calls to the types may be identical and we may want to
declare sharing between them. In Pawns, this can only be done by using
a separate function definition that has a more specific type signature
with identical types and the sharing declared. Pawns provides a mechanism
for renaming groups of functions to simplify this. As an example, Figure
<ref> shows how the code of Figure <ref>
code can be duplicated, making it possible to add different type
signatures where the sharing is declared and hence the resulting
tree can be updated[There is little advantage in having both
abstract and concrete versions of these functions but it does illustrate
renaming]. The first declaration creates definitions
of and , by renaming
the previous definitions and replacing the call to by a
call to . An explicit definition of
could be included but we here simply use another
declaration. Type signatures are needed for all three functions (for
brevity we just include one). Renaming can also be used as a less
abstract alternative to higher order code and for producing code with
the same structure but with different state variables. For example,
we can code a version of that uses and rename
it to use other state variables as needed (this is the Pawns equivalent
of using Haskell's ).
§ COMPLICATIONS
Combining pure functional programming with destructive update and other
impurity is not simple! The design of Pawns aims to support high
level pure functional programming plus low level imperative programming
with as much flexibility as possible while avoiding unsafe operations
(such as dereferencing pointers) and “surprises” (code
with effects that are obscure). Here we briefly mention some of more
complicated issues and how they are dealt with in Pawns, without too
much technical detail.
§.§ Polymorphism and type safety
Mixing polymorphic types with destructive update can result in unsafe
operations if it is not done carefully. Consider the code in Figure
<ref>. The variable is bound to a pointer
to , a list of any type. Without destructive
update, this can be safely used where pointers to lists of integers
and pointers to lists of binary search trees are expected (the type
can be instantiated to either of these without problems). However,
if the variable is updated to be a non-empty list of integers the code
is not type safe — an integer may appear where a tree is expected.
Other functional languages solve the type safety problem by imposing
restrictions on code that has refs (and thus may perform updates). In
Pawns, refs to arguments of data constructors can be created anywhere,
but because the source code explicitly notes where variables can be
updated, the problem can be solved in a more flexible way.
Where a Pawns variable with a polymorphic type is assigned to or passed to
a function that may update it, type variables may become more instantiated
during type checking. For example, at the point where is
passed to in Figure <ref>, its previous
polymorphic type () is instantiated to . The type of is also similarly instantiated —
the two variables share their representations and their types shared the
same type variable, . The subsequent call to
then results in a type error. Pawns treats all variables created with
polymorphic types as live throughout the whole function, so the
annotation on is required even if is never used
again, alerting readers of the source code to a subtlety. The compiler
also prints a warning when types are further instantiated. Warnings can
be avoided by adding explicit casts, as shown in the second example of
Figure <ref> (a previous version of the compiler did
not automatically instantiate types and this cast was required).
§.§ Higher order programming
There are two complications involving higher order code: type checking
and partially applied functions (closures). Type checking is made
more complicated because each “arrow” type has additional information
concerning sharing, destructive update and state variables. Pawns allows
some latitude when matching the type of arguments to higher order
functions with the expected type that is declared. The arguments are
allowed to have less destructive update, less sharing in postconditions,
more sharing in preconditions and some variations in what state
variable operations are declared (for example, is acceptable
where is declared). The intention is to allow as much
flexibility as possible while guaranteeing safety.
Pawns allows functions to be applied to fewer than the declared number of
arguments, resulting in closures being constructed/returned. Closures
can be passed around like other data and later applied, leading to
function evaluation. The arguments inside closures can share with other
data structures and hence they can potentially be updated. Pawns allows
the patterns used for declaring sharing to have additional arguments,
representing the arguments of closures, so sharing of data within closures
can be declared and analysed. Certain equivalence laws that hold for
pure functional programming (such as “eta-equivalence”) do not apply
when sharing is significant and there may be destructive update.
§.§ Foreign language interface
The one feature of Pawns where there is no attempt to guarantee safety
is the foreign language interface. Pawns compiles to C and provides
a simple and flexible interface to C, which has many unsafe features.
Each Pawns function compiles to a C function and Pawns allows the body
of a function definition to be coded in C but for such code there
can be no guarantees of safety or lack of “surprises”. It is up
to the programmer to ensure the C code is safe and compatible with
the Pawns type signature. For example, Figure <ref> gives the
implementation of defined in terms of
in C. The use of the state variable in the type signature
ensures that the code can only be used in a context where the side-effect
is clear and purely functional semantics could be defined. Similarly,
it only requires a few lines of code to interface Pawns to the C standard
library pseudo-random number package in a way that can be encapsulated
and given purely functional semantics, using a state variable — see
Figure <ref> for the type signatures. It is also very easy to
support arrays via the C interface; the current code has no bound checks
(and thus has C-like efficiency but is not safe).
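To give a flavour of the C side of such an interface, the fragment below shows plausible hand-written bodies for an integer printer and for wrappers around the C standard library random number generator. The names and calling convention are illustrative assumptions; the actual glue generated or expected by the Pawns compiler is not specified in this paper.

#include <stdio.h>
#include <stdlib.h>

/* Body of a print_int-style function.  In Pawns its type signature would
 * mention the io state variable, so the side effect is visible to callers. */
void pawns_print_int(long x) {
    printf("%ld", x);
}

/* Wrappers around the C standard library pseudo-random number generator.
 * In Pawns these would be tied to a dedicated state variable so that the
 * hidden generator state is documented and can be encapsulated. */
void pawns_seed_random(long seed) {
    srand((unsigned) seed);
}

long pawns_next_random(void) {
    return (long) rand();
}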
Most foreign language interfaces only allow basic unstructured types to
be passed. However, the Pawns compiler uses the tool
<cit.>, which generates C macros for manipulating the algebraic
data types defined in the program. For example, Pawns code that defines
the type results in C macros for creating an
tree, creating a and various ways of testing if a tree
is or a and extracting the arguments of
the . These macros can be used in hand-written C code
to both operate on a that was created by Pawns code, and
create a that is passed back to Pawns code. Dynamic memory
management is often particularly difficult across language boundaries but
is made very easy in Pawns by using the Boehm-Demers-Weiser conservative
garbage collector.
§ CONCLUSION
There are important algorithms which rely on destructive update of
shared data structures, and these algorithms are relatively difficult to
express in declarative languages and are typically relatively inefficient.
The design of Pawns attempts to overcome this limitation while retaining
many of the advantages of a typical functional programming language,
such as algebraic data types, parametric polymorphism, and higher order
programming.
Pawns supports the creation of pointers to arguments of data constructors,
and these pointers can be used for destructive update of
shared data structures. There are several features which restrict
when these effects can occur and allow them
to be encapsulated, so the abstract declarative view of some functions
can still be used, even when they use destructive update internally.
Type signatures of functions declare which arguments are mutable and
for function calls and other statements, variables are annotated if it
is possible that they could be updated at that point. In order to
determine which variables could be updated, it is necessary to know what
sharing there is. Functions have pre- and post-conditions which
describe the sharing of arguments and the result when the function is
called and when it returns. To avoid having to consider sharing of data
structures for all the code, some function arguments and results can be
declared abstract (this is the default). Reasoning about code which only
uses abstract data structures can be identical to reasoning about pure
functional code, as destructive update is prevented.
Where data structures are not abstract, lower level reasoning must be
used — the programmer must consider how values are represented and
what sharing exists. The compiler checks that declarations and
definitions are consistent, allowing low level code to be safely
encapsulated inside a pure interface. Likewise, the state variable
mechanism allows a pure view of what are essentially mutable global
variables, avoiding the need for source code to explicitly give
arguments to and extract result from function calls. Analysis of
sharing is also required to ensure the use of state variables can be
encapsulated and to ensure safety of code that uses destructive update
of polymorphic data types.
Although Pawns is still essentially a prototype, and is
unlikely to reach full maturity as a “serious” programming language,
we feel its novel features add to the programming language landscape.
They may influence other languages and help combine the declarative and
imperative paradigms, allowing both high level reasoning for most code and
the efficiency benefits of destructive update of shared data structures.
§ ACKNOWLEDGEMENTS
The design of Pawns has benefitted from discussions with many people.
Bernie Pope and Peter Schachte particularly deserve a mention.
|
http://arxiv.org/abs/2409.03699v1 | 20240905165535 | The uniform Turán density of large stars | [
"Ander Lamaison",
"Zhuo Wu"
] | math.CO | [
"math.CO"
] |
The uniform Turán density of large stars
Ander Lamaison, Zhuo Wu
September 9, 2024
========================================
§ ABSTRACT
We asymptotically resolve the uniform Turán density problem for large stars. In particular, we show that the uniform Turán density of the k-star S_k is (k^2-5k+7)/(k-1)^2 for k≥ 48, matching a lower bound construction by Reiher, Rödl and Schacht.
§ INTRODUCTION
Turán problems constitute one of the central areas of study in extremal combinatorics. Given an r-uniform hypergraph (or r-graph for short) F and a positive integer n, the extremal number ex(n,F) is the maximum number of edges in an n-vertex r-graph not containing F as a subgraph. The Turán density of F is the limit
π(F):=lim_n→∞ ex(n,F)/\binom{n}{r}.
While Turán problems on graphs are well-understood <cit.>, much less is known about Turán problems on hypergraphs. For example, the exact value of the Turán density of K_k^(r), the complete r-graph on k vertices, is not known for any k>r≥ 3. In fact, the Turán density of K_4^(3)-, which is obtained by removing an edge from K_4^(3), is currently unknown.
In the conjectured extremal constructions for most hypergraph Turán problems, the set of edges is distributed very unevenly among the vertices. For instance, there is a family of K_4^(3)-free 3-graphs whose density tends to 5/9 (conjectured to be the value of π(K_4^(3))). In each hypergraph in the family, the vertex set can be partitioned into three independent sets. This motivated Erdős and Sós <cit.> to propose studying Turán density problems with a condition on the edge distribution.
A 3-graph H is said to be (d,ε,)-dense
if any subset S⊆ V(H) contains at least d\binom{|S|}{3}-ε|V(H)|^3 edges.
The uniform Turán density (F) of a 3-graph F
is defined as the infimum of the values of d,
for which there exists ε>0 and N such that
every (d,ε,)-dense hypergraph
on at least N vertices
contains F as a subgraph.
In the last decade, a number of results about the uniform Turán density of 3-graphs have been published. The first non-zero value of the uniform Turán density to be computed was (K_4^(3)-)=1/4, solving a conjecture of Erdős and Sós. This was first proved by Glebov, Král' and Volec <cit.> using flag algebras, and later by Reiher, Rödl and Schacht <cit.> using the hypergraph regularity method. The latter set of authors also characterized in <cit.> the 3-graphs F with (F)=0, and as a consequence, deduced that there does not exist a 3-graph F with (F)∈(0,1/27).
Bucić, Cooper, Král', Mohr and Munha-Correia <cit.> determined the uniform Turán densities of all tight cycles of length at least 5. They showed that (C_ℓ^(3)) equals 0 if ℓ is divisible by 3, and 4/27 otherwise. Additionally, 3-graphs with uniform Turán density equal to 1/27 and 8/27 have also been found <cit.>. Further results can be found in a survey by Reiher <cit.>.
One simple hypergraph family whose uniform Turán density has been studied is stars. Let S_k be the 3-graph on k+1 vertices u,v_1, v_2, …, v_k, containing the edges uv_iv_j for all 1≤ i<j≤ k. We call this graph the k-star. For example, S_3 is just the graph K_4^(3)-. Despite the simplicity of their structure, attempts to extend the idea used to determine the uniform Turán density of K_4^(3)- to any larger star have been unsuccessful. The current best bounds were given by Reiher, Rödl and Schacht <cit.>, who showed that
(k^2-5k+7)/(k-1)^2≤(S_k)≤((k-2)/(k-1))^2.
In this paper we show that the lower bound is sharp for all large stars, and thereby asymptotically resolve the the uniform Turán density problem for the large stars.
(S_k)=(k^2-5k+7)/(k-1)^2 for all k≥ 48.
The key concept in the proof of Theorem <ref> is the notion of palette, introduced by Reiher <cit.> as a generalization of a construction of Rödl <cit.>.
A palette is a pair (𝒞,𝒜),
where 𝒞 is a finite set (whose elements we call colors)
and a set of (ordered) triples of colors 𝒜⊆𝒞^3,
which we call the admissible triples.
The density of is d():=|𝒜|/|𝒞|^3.
We say that a 3-graph F admits a palette
if there exists an order ≼ on V(F)
and a function φ:\binom{V(F)}{2}→𝒞
such that for every edge uvw∈ E(F) with u≺ v≺ w
we have (φ(uv), φ(uw), φ(vw))∈𝒜.
Palettes can be used to give lower bounds on the uniform Turán density of 3-graphs. Indeed, if the 3-graph F does not admit the palette , then one can use an analogous of Rödl's construction to obtain a (d(),o(1),)-dense family of F-free hypergraphs, meaning that (F)≥ d(). For this reason, it is worth considering the largest density of a palette which is not admitted by F.
The palette Turán density of a 3-graph F is
(F):=sup{d(): palette, F does not admit }.
As discussed above, for every 3-graph F we have (F)≥(F). In <cit.>, the first author showed that equality always holds. This result will be the main tool in the proof of Theorem <ref>.
For every 3-graph F, we have (F)=(F).
Idea. The main technical step we develop to prove Theorem <ref> is transforming the hypergraph condition to a digraph condition (see <ref>). We rely on a property of the structure of S_k: each edge uvw contains a pair of vertices which is not contained in any other edge. That means that, when we consider the palette condition (φ(uv), φ(uw), φ(vw))∈𝒜, one of the three entries can be selected freely without affecting any other edge. For this reason it is enough to consider, for each 1≤ i<j≤ 3, the pairs of colors (a,b) for which there exists a triple (c_1, c_2, c_3)∈ such that c_i=a and c_j=b.
Notation. For any palette (𝒞,𝒜), let 𝒜_a^1, 𝒜_a^2, 𝒜_a^3 be the sets of triples in 𝒜 with the first, second, third entry equal to a, respectively. The palette obtained by removing the color a is
(𝒞∖{a}, (𝒞∖{a})^3∩𝒜).
§ PROOF OF THEOREM <REF>
By Theorem <ref>, it is sufficient to show that (S_k)=(k^2-5k+7)/(k-1)^2. The lower bound was first shown to hold in <cit.>; here we give a short proof for completeness.
Consider the palette (𝒞,𝒜) with 𝒞={0, ⋯, k-2},
𝒜={(x,y,z) | x≠ y, y≠ z, z≢ x+1 (mod k-1)}.
It is easy to calculate that this palette has density (k^2-5k+7)/(k-1)^2. Now we prove that it does not admit S_k, which shows that (S_k)≥(k^2-5k+7)/(k-1)^2.
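Before doing so, here is the short count behind the density claim. Write n=k-1, so |𝒞|=n. By inclusion-exclusion over the three forbidden events x=y, y=z and z≡ x+1 (mod n),
|𝒜| = n^3 - 3n^2 + 3n = n(n^2-3n+3) = (k-1)(k^2-5k+7),
since each of the three events holds for n^2 triples, each pairwise intersection contains n triples, and the triple intersection is empty (it would force 1≡ 0 (mod n)). Dividing by |𝒞|^3=(k-1)^3 gives the density (k^2-5k+7)/(k-1)^2.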
For contradiction, assume that there exist an order ≼ on V(F) and a function φ:\binom{V(F)}{2}→𝒞
such that for every edge uvw∈ E(F) with u≺ v≺ w
we have (φ(uv), φ(uw), φ(vw))∈𝒜. By symmetry, assume that the order ≼ on V(F) is v_1≼⋯≼ v_t ≼ u≼ v_t+1⋯≼ v_k. Consider the following k elements:
φ(uv_1)+1,⋯, φ(uv_t)+1, φ(uv_t+1), ⋯, φ(uv_k).
By the pigeonhole principle, at least two of them are the same modulo k-1. Now we have 3 cases:
* φ(uv_i)+1=φ(uv_j)+1 for some i<j≤ t. Then (φ(v_iv_j), φ(v_iu), φ(v_ju))∉𝒜.
* φ(uv_i)+1≡φ(uv_j) (mod k-1) for some i≤ t<j. Then (φ(v_iu), φ(v_iv_j), φ(uv_j))∉𝒜.
* φ(uv_i)=φ(uv_j) for some t<i<j. Then (φ(uv_i), φ(uv_j), φ(v_iv_j))∉𝒜.
This completes the proof of the lower bound.
We then prove the upper bound. Indeed, suppose for the sake of contradiction that (S_k)>(k^2-5k+7)/(k-1)^2. Then there exist palettes with density greater than (k^2-5k+7)/(k-1)^2 not admitted by S_k. Let (𝒞,𝒜) be one such palette with the minimum number of colors. Removing any color from it must decrease its density.
Let |𝒞|=n. Given colors a,b∈𝒞, and i≠ j∈{1,2,3}, we say that (a,b) is (i,j)-good if there exists a triple (c_1, c_2, c_3)∈𝒜 such that c_i=a and c_j=b, otherwise (a,b) is (i,j)-bad. Let d_i,j(a) denote the number of colors b such that (a,b) is (i,j)-good, and e_i,j(a)=d_i,j(a)/n. Moreover, define d_i,j'(a)=n-d_i,j(a), i.e., the number of colors b such that (a,b) is (i,j)-bad, and e'_i,j(a)=d'_i,j(a)/n. The following claim gives a lower bound on d_i,j(a).
e_i,j(a)≥ 3d()-2 for all a∈𝒞 and all i≠ j∈{1,2,3}.
By symmetry, assume that i=1,j=2. Let (𝒞',𝒜') be the palette obtained by removing the color a. Then
n^3d()=|𝒜|
=|𝒜'|+|𝒜∖𝒜'|
= |𝒜'|+|𝒜_a^1|+|𝒜_a^2∖𝒜_a^1|+|𝒜_a^3∖(𝒜_a^1∪𝒜_a^2)|
≤ |𝒜'|+nd_i,j(a)+n(n-1)+(n-1)^2
≤ (n-1)^3d()+nd_i,j(a)+2n^2-3n+1,
where the last inequality uses the condition that removing any color from decreases the density of . Hence,
nd_i,j(a)
≥ (n^3-(n-1)^3)d()-2n^2+3n-1
=d()(3n^2-3n+1)-2n^2+3n-1
=(3d()-2)n^2+(3n-1)(1-d())
≥ (3d()-2)n^2,
which proves our claim.
In the next lemma we bound the density of the palette by e_i,j.
d(𝒫)≤ 1/4+1/(2n)∑_a∈𝒞∑_i≠ j∈{1,2,3}(e_i,j(a)-1/2)^2.
Define
X_1={(a,b,c)∈𝒞^3 : (d,b,c)∉𝒜 for all d∈𝒞},
X_2={(a,b,c)∈𝒞^3 : (a,d,c)∉𝒜 for all d∈𝒞},
X_3={(a,b,c)∈𝒞^3 : (a,b,d)∉𝒜 for all d∈𝒞}.
Clearly 𝒜 does not contain any element from X_1, X_2 or X_3, so by the inclusion-exclusion principle, we have
|𝒜| ≤ |𝒞^3∖(X_1∪ X_2∪ X_3)|
≤ n^3-|X_1|-|X_2|-|X_3|+|X_1∩ X_2|+|X_1∩ X_3|+|X_2∩ X_3|.
Note that (a,b,c)∈ X_1 is equivalent to (b,c) being (2,3)-bad, hence
|X_1|=n∑_a∈𝒞 d'_2,3(a)=n∑_a∈𝒞 d'_3,2(a)=(n/2)(∑_a∈𝒞 d'_2,3(a)+∑_a∈𝒞 d'_3,2(a)).
Besides, (a,b,c)∈ X_1∩ X_2 is equivalent to (b,c) being (2,3)-bad and (a,c) being (1,3)-bad, hence
|X_1∩ X_2|=∑_a∈𝒞 d'_3,1(a)d'_3,2(a).
Using the same argument for X_2,X_3, we have
|𝒜| ≤ n^3-|X_1|-|X_2|-|X_3|+|X_1∩ X_2|+|X_1∩ X_3|+|X_2∩ X_3|
≤ n^3-(n/2)∑_a∈𝒞∑_i≠ j d'_i,j(a)+∑_a∈𝒞(d'_1,2(a)d'_1,3(a)+d'_2,1(a)d'_2,3(a)+d'_3,1(a)d'_3,2(a)),
which is equivalent to
d(𝒫)≤ 1+(1/n)∑_a∈𝒞(e'_1,2(a)e'_1,3(a)+e'_2,1(a)e'_2,3(a)+e'_3,1(a)e'_3,2(a)-(1/2)∑_i≠ j e'_i,j(a)).
Note that for all real numbers a,b we have the local inequality
ab-(1/2)(a+b)=(a-1/2)(b-1/2)-1/4≤ (1/2)((a-1/2)^2+(b-1/2)^2)-1/4.
Hence
d(𝒫)
≤ 1+(1/n)∑_a∈𝒞(e'_1,2(a)e'_1,3(a)+e'_2,1(a)e'_2,3(a)+e'_3,1(a)e'_3,2(a)-(1/2)∑_i≠ j e'_i,j(a))
≤ 1+(1/n)∑_a∈𝒞(∑_i≠ j∈{1,2,3}(1/2)(e'_i,j(a)-1/2)^2-3/4)
=1/4+1/(2n)∑_a∈𝒞∑_i≠ j∈{1,2,3}(e_i,j(a)-1/2)^2.
Now we want to use the condition that 𝒫 does not admit S_k to bound e_i,j(a). To give a better characterization of the condition, we define a digraph D(V,E) as follows:
* V=𝒞_1∪𝒞_2, which consists of two disjoint copies of 𝒞. Besides, for each color a∈𝒞, let a^1 and a^2 denote the copies of a in 𝒞_1 and 𝒞_2.
* For a pair of colors (a,b)∈^2,
* we add an edge a^1b^1 in 𝒞_1 if (a,b) is (2,3)-good,
* we add an edge a^2b^2 in 𝒞_2 if (a,b) is (1,2)-good,
* we add the edges a^1b^2 and b^2a^1 if (a,b) is (1,3)-good.
Here, we use ab to express the arc from a to b.
One important thing is to show that D is well-defined, i.e., that D contains no loops. Assume that D contains a loop a^1a^1. Then (a,a) is (2,3)-good, hence there exists a triple (b,a,a)∈𝒜. Recall that S_k is the 3-graph on k+1 vertices u,v_1, v_2, …, v_k, containing the edges uv_iv_j for all 1≤ i<j≤ k. We define an order ≼ on V(S_k) as v_1≼⋯≼ v_k ≼ u, and construct a function
φ assigning colors to pairs of vertices by φ(v_iv_j)=b and φ(v_iu)=a. Then, for every edge v_iv_ju∈ E(S_k) with
v_i≼ v_j≼ u, we have (φ(v_iv_j), φ(v_iu), φ(v_ju))=(b,a,a)∈𝒜, so 𝒫 admits S_k, a contradiction. Similarly, D cannot contain a loop a^2a^2, hence D is well-defined.
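To make the construction of D concrete, the following illustrative Python sketch (our own encoding, not part of the paper) builds the arc set of D from a palette given as a list of colors and a collection of triples, following the three rules above.

from itertools import product

def build_digraph(colors, triples):
    # record which ordered pairs are (2,3)-, (1,2)- and (1,3)-good
    good = {(2, 3): set(), (1, 2): set(), (1, 3): set()}
    for (c1, c2, c3) in triples:
        good[(2, 3)].add((c2, c3))
        good[(1, 2)].add((c1, c2))
        good[(1, 3)].add((c1, c3))
    arcs = set()  # a vertex is ('1', a) or ('2', a), the copy of color a in C_1 or C_2
    for a, b in product(colors, repeat=2):
        if (a, b) in good[(2, 3)]:
            arcs.add((('1', a), ('1', b)))      # arc a^1 -> b^1
        if (a, b) in good[(1, 2)]:
            arcs.add((('2', a), ('2', b)))      # arc a^2 -> b^2
        if (a, b) in good[(1, 3)]:
            arcs.add((('1', a), ('2', b)))      # arcs in both directions
            arcs.add((('2', b), ('1', a)))      # between the two copies
    return arcs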
D does not contain a transitive tournament on k vertices as a subgraph.
We prove the claim by contradiction. Assume that K is a transitive tournament on k vertices in D, and V(K)∩𝒞_1=K_1, V(K)∩𝒞_2=K_2. Let |K_1|=s, |K_2|=t. We can label the vertices of this transitive tournament ℓ_1^1, ℓ_2^1, …, ℓ_s^1, r_1^2, r_2^2, …, r_t^2 in such a way that we have the edge ℓ_i^1ℓ_j^1 for any 1≤ i<j≤ s and the edge r_i^2r_j^2 for any 1≤ i<j≤ t. By the definition of D, we can find some ℓ_ij such that (ℓ_ij,ℓ_i,ℓ_j)∈𝒜, some m_ij such that (ℓ_i,m_ij,r_j)∈𝒜 and some r_ij such that (r_i,r_j,r_ij)∈𝒜.
Now we show that 𝒫 admits S_k, which is a contradiction. Note that s+t=k. For convenience, assume the star S_k has vertices u,v_1,⋯,v_s,w_1,⋯,w_t. We define an order ≼ on V(S_k) as follows:
v_1≼⋯≼ v_s≼ u ≼ w_1≼⋯≼ w_t.
We then construct a function φ assigning a color in 𝒞 to each pair of vertices of S_k
such that for every edge abc∈ E(S_k) with a≺ b≺ c
we have (φ(ab), φ(ac), φ(bc))∈𝒜. Define
φ(uv_i)=ℓ_i, φ(uw_i)=r_i, φ(v_iv_j)=ℓ_ij, φ(v_iw_j)=m_ij, φ(w_iw_j)=r_ij.
By the definition of S_k, every edge contains u, so u∈{a,b,c}. Now we have 3 cases:
* a=u, b=w_i, c=w_j. Then (φ(ab), φ(ac), φ(bc))=(r_i,r_j,r_ij)∈𝒜.
* b=u, a=v_i, c=w_j. Then (φ(ab), φ(ac), φ(bc))=(ℓ_i,m_ij,r_j)∈𝒜.
* c=u, a=v_i, b=v_j. Then (φ(ab), φ(ac), φ(bc))=(ℓ_ij,ℓ_i,ℓ_j)∈𝒜.
We then introduce a variation of the Caro–Wei theorem for digraphs. It is similar to Theorem 4 in <cit.>.
Let H be a digraph on n vertices, which does not contain a transitive tournament on k vertices as a subgraph. For each vertex v, let m(v)=max{d^+(v), d^-(v)}. Then
∑_v∈ V(H)1/(n-m(v))≤ k-1.
We proceed by induction on k. The statement is clear for k=2, since H is the empty graph. Suppose that the statement holds for k-1. Let w be a vertex with the maximum value of m(w), and by symmetry assume that m(w)=d^+(w). Let S be the set of out-neighbors of w, and H[S] be the subdigraph of H induced by S. Then H[S] does not contain a transitive tournament on k-1 vertices, otherwise we could extend it by adding w. Let d^+_S(v) and d^-_S(v) denote the out-degree and in-degree of v in H[S], and let m_S(v)=max{d^+_S(v), d^-_S(v)}. Hence,
∑_v∈ V(H)1/(n-m(v)) =∑_v∈ S1/(n-m(v))+∑_v∈ V(H)∖ S1/(n-m(v))
≤∑_v∈ S1/(n-m(v))+∑_v∈ V(H)∖ S1/(n-m(w)) (by the maximality of m(w))
≤∑_v∈ S1/(n-m(v))+1
≤∑_v∈ S1/(|S|-m_S(v))+1 (since m(v)≤ m_S(v)+(n-|S|))
≤ k-1. (by the induction hypothesis)
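The lemma is easy to test by brute force on small digraphs; the script below (purely illustrative, with parameters chosen by us) samples random loopless digraphs, discards those containing a transitive tournament on k vertices, and checks the stated bound on the remaining ones.

from itertools import combinations, permutations
import random

def contains_transitive_tournament(arcs, vertices, k):
    # a transitive tournament on k vertices = some ordering with all forward arcs present
    for sub in combinations(vertices, k):
        for perm in permutations(sub):
            if all((perm[i], perm[j]) in arcs for i in range(k) for j in range(i + 1, k)):
                return True
    return False

random.seed(0)
n, k, p = 7, 3, 0.15
vertices = list(range(n))
for _ in range(500):
    arcs = {(u, v) for u in vertices for v in vertices if u != v and random.random() < p}
    if contains_transitive_tournament(arcs, vertices, k):
        continue
    m = {v: max(sum((v, u) in arcs for u in vertices),
                sum((u, v) in arcs for u in vertices)) for v in vertices}
    assert sum(1.0 / (n - m[v]) for v in vertices) <= k - 1 + 1e-9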
We then combine <ref> and <ref> to give an estimate of e_i,j.
For each a∈𝒞, set
m_A(a) =max{e_2,3(a), e_3,2(a)}, m_C(a)=max{e_1,2(a), e_2,1(a)},
m_B(a) =m_A(a)+e_1,3(a), m_D(a)=m_C(a)+e_3,1(a).
For each a∈𝒞, set M_A(a)=1/(1-m_A(a)) and M_C(a)=1/(1-m_C(a)). Consider the digraph D[𝒞_1]. For each a∈𝒞, the maximum of the in-degree and out-degree of a^1 in D[𝒞_1] is m_A(a)n, so by Lemma <ref>, we have
∑_a∈𝒞 M_A(a)=∑_a∈𝒞 1/(1-m_A(a))≤ (k-1)n,
and the inequality also holds when M_A(a) is replaced by M_C(a).
Recall that m(v)=max{d^+(v), d^-(v)}. In D, for each a∈𝒞 we have m(a^1)=m_B(a)n and m(a^2)=m_D(a)n. Set M_B(a)=1/(2-m_B(a)) and M_D(a)=1/(2-m_D(a)); then by Lemma <ref> again we have
∑_a∈𝒞(M_B(a)+M_D(a))=∑_a∈𝒞(1/(2-m_B(a))+1/(2-m_D(a)))≤ (k-1)n.
Now, by Lemma <ref>, we have
d(𝒫)
≤ 1/4+1/(2n)∑_a∈𝒞∑_i≠ j∈{1,2,3}(e_i,j(a)-1/2)^2
≤ 1/4+1/(2n)∑_a∈𝒞(2(m_A(a)-1/2)^2+(m_B(a)-m_A(a)-1/2)^2)
+1/(2n)∑_a∈𝒞(2(m_C(a)-1/2)^2+(m_D(a)-m_C(a)-1/2)^2).
Note that for all real numbers a,b we have the local inequality
2(a-1/2)^2+(b-a-1/2)^2
=(a-1/2)^2+1/2(b-1)^2+2(a-b/2)^2
≤ (a-1/2)^2+1/2(b-1)^2+4((k-2)/(k-1)-a)^2+4((k-2)/(k-1)-b/2)^2
=f(a)+g(b),
where f(x)=(x-1/2)^2+4((k-2)/(k-1)-x)^2 and g(x)=1/2(x-1)^2+4((k-2)/(k-1)-x/2)^2.
Substituting (<ref>) in (<ref>), we have
d(𝒫)
≤ 1/4+1/(2n)∑_a∈𝒞( f(m_A(a))+f(m_C(a))+g(m_B(a))+g(m_D(a)) )
= 1/4+1/(2n)∑_a∈𝒞( f_1(M_A(a))+f_1(M_C(a))+g_1(M_B(a))+g_1(M_D(a)) ),
where f_1(x)=f(1-1/x) and g_1(x)=g(2-1/x). We can calculate that
f_1(x)=5/x^2-(k+7)/((k-1)x)+4/(k-1)^2+1/4, g_1(x)=3/(2x^2)-(k+3)/((k-1)x)+4/(k-1)^2+1/2.
Now we bound the four sums in (<ref>) separately. The idea is to use a tangent-type local inequality.
For any x≥ 5(k-1)/(k-3),
f_1(x)≤ ((k-3)/(k-1)^3)x+(k^2-10k+21)/(2k-2)^2.
The inequality is equivalent to (x-(k-1))^2((k-3)x-5(k-1))≥ 0.
By Claim <ref>, we have m_A(a)≥ 3d(𝒫)-2 for every a∈𝒞, therefore
M_A(a)≥ 1/(3-3d(𝒫))≥ (k-1)^2/(9(k-2))≥ 5(k-1)/(k-3)
when k≥ 48. Hence, by Claim <ref> and (<ref>), we have
∑_a∈𝒞 f_1(M_A(a))
≤∑_a∈𝒞(((k-3)/(k-1)^3)M_A(a)+(k^2-10k+21)/(2k-2)^2)
≤((k-3)/(k-1)^3)(k-1)n+((k^2-10k+21)/(2k-2)^2)n
=n(k-3)^2/(2k-2)^2.
Similarly, we have
∑_a∈𝒞 f(m_C(a)) ≤ n(k-3)^2/(2k-2)^2.
For any x≥ 3(k-1)/(2k-6),
g_1(x)≤ ((4k-12)/(k-1)^3)x+(k^2-10k+21)/(2(k-1)^2).
The inequality is equivalent to (2x-(k-1))^2((2k-6)x-3(k-1))≥ 0.
By Claim <ref>, we have m_B(a)≥ 6d(𝒫)-4 for every a∈𝒞, then
M_B(a)≥ 1/(6-6d(𝒫))≥ (k-1)^2/(18(k-2))≥ 3(k-1)/(2k-6)
when k≥ 30. Similarly, M_D(a)≥ 3(k-1)/(2k-6), hence by Claim <ref> and (<ref>) we have
∑_a∈𝒞(g_1(M_B(a))+g_1(M_D(a))) ≤∑_a∈𝒞(((4k-12)/(k-1)^3)(M_B(a)+M_D(a))+(k^2-10k+21)/(k-1)^2)
≤((4k-12)/(k-1)^3)(k-1)n+((k^2-10k+21)/(k-1)^2)n
=n(k-3)^2/(k-1)^2.
Hence, putting together (<ref>), (<ref>), (<ref>) and (<ref>), we have
d(𝒫) ≤ 1/4+1/(2n)∑_a∈𝒞( f(m_A(a))+f(m_C(a))+g(m_B(a))+g(m_D(a)) )
≤ 1/4+3(k-3)^2/(4(k-1)^2)
=(k^2-5k+7)/(k-1)^2,
which contradicts the assumption that d(𝒫)>(k^2-5k+7)/(k-1)^2 and finishes the proof.
Concluding remarks
Using computer calculations, one can show that the inequalities (<ref>), (<ref>) hold for k≥ 40, which means that <ref> is also true for k≥ 40.
Acknowledgements
The authors want to thank Haoran Luo for computer calculation. The authors are also grateful to Daniel Král, Hong Liu and Oleg Pikhurko for careful reading and some writing suggestions.
|
http://arxiv.org/abs/2409.03108v1 | 20240904222235 | Loop Series Expansions for Tensor Networks | [
"Glen Evenbly",
"Nicola Pancotti",
"Ashley Milsted",
"Johnnie Gray",
"Garnet Kin-Lic Chan"
] | quant-ph | [
"quant-ph",
"cond-mat.dis-nn"
] |
[email protected]
AWS Center for Quantum Computing, Pasadena, CA 91125, USA
California Institute of Technology, Pasadena, CA 91125, USA
California Institute of Technology, Pasadena, CA 91125, USA
§ ABSTRACT
Belief propagation (BP) can be a useful tool to approximately contract a tensor network, provided that the contributions from any closed loops in the network are sufficiently weak. In this manuscript we describe how a loop series expansion can be applied to systematically improve the accuracy of a BP approximation to a tensor network contraction, in principle converging arbitrarily close to the exact result. More generally, our result provides a framework for expanding a tensor network as a sum of component networks in a hierarchy of increasing complexity. We benchmark this proposal for the contraction of iPEPS, either representing the ground state of an AKLT model or with randomly defined tensors, where it is shown to improve in accuracy over standard BP by several orders of magnitude whilst incurring only a minor increase in computational cost. These results indicate that the proposed series expansions could be a useful tool to accurately evaluate tensor networks in cases that otherwise exceed the limits of established contraction routines.
Loop Series Expansions for Tensor Networks
Garnet Kin-Lic Chan
September 9, 2024
==========================================
§ INTRODUCTION
Tensor networks (TNs) are powerful tools for the classical simulation of quantum systems <cit.>. Although they were traditionally developed for model systems in the context of condensed matter physics, they have more recently been applied towards the benchmarking of nascent quantum computers <cit.>. Tensor network states were pioneered by the introduction of matrix product states <cit.> (MPS), which possess tensors connected in a 1D geometry. For several decades numeric algorithms based on the MPS, such as the density matrix renormalization group<cit.> (DMRG), have been widely regarded as the gold-standard of classical simulation algorithms for 1D quantum systems; producing high-accuracy and unbiased results for a wide range of problems. A large part of this success can be attributed to the computational efficiency with which MPS can be contracted. Subsequent developments have seen the proposal of sophisticated tensor networks, such as projected entangled pair states<cit.> (PEPS) and multi-scale entanglement renormalization ansatz<cit.> (MERA), designed to efficiently represent quantum states on large 2D systems. However, a significant challenge of these 2D tensor networks is the high computational expense required for their contraction <cit.> even when approximate contraction schemes are used <cit.>. In practice, this expense limits the amount of entanglement that can be encoded in these networks and therefore also limits the range of problems for which they produce accurate results. Thus the development of strategies for more efficient contraction of TNs remains a vital task.
A promising approach for the contraction of TNs is through the adoption of belief propagation<cit.> (BP), a message passing algorithm with a long history in both computer science<cit.> and in statistical physics<cit.>. In recent times the application of BP towards TNs has generated much interest <cit.>, and produced a notable success <cit.> of simulating a quantum experiment performed on IBMs 127-qubit Eagle processor <cit.>, a problem that was seen to be intractable for other 2D TN algorithms <cit.>. However, a major drawback of BP is that it is an uncontrolled approximation; one cannot systematically improve the accuracy of a BP result as is otherwise done in the context of tensor networks by simply growing the bond dimension, although there do exist schemes to improve the accuracy of BP through a preliminary clustering of a network <cit.>.
In this manuscript we present a method to improve upon the accuracy of BP for the contraction of a tensor network by incorporating and building upon the loop series expansion proposed by Chertkov and Chernyak<cit.>. Assuming that a BP fixed point of the tensor network has been found, we describe how the dominant loop excitations, otherwise neglected by BP, can be re-incorporated into the evaluation of this network. This approach can also be understood as providing a general notion of a series expansion for tensor networks; that a complicated tensor network may be expanded as a sum of component networks in a hierarchy of increasing complexity, with the BP approximation as the zeroth-order term in the series. We demonstrate numerically that the proposed series expansion can provide both accurate and computationally cheap evaluations of PEPS, such that it could be a useful tool to contract tensor networks in cases that otherwise exceed the limits of established contraction routines.
§ BELIEF PROPAGATION
Here we provide a brief introduction to BP as a necessary precursor to our main results. Let 𝒯 be a closed tensor network composed of N tensors labelled {T_1, T_2, T_3, …, T_N } and with M indices labelled {i_1, i_2, i_3, … i_M }. Assume that our goal is to contract 𝒯 to a scalar 𝒵(𝒯),
𝒵(𝒯) = ∑_i_1,i_2,… T_1 (ξ_1) T_2 (ξ_2) T_3 (ξ_3) …,
where ξ_r describes the subset of indices ξ_r ⊆{i_1, i_2, … i_M } connected to tensor T_r. The starting point of BP is to define a pair of messages for each edge of the network; if (r,s) denotes an edge shared between tensors T_r and T_s then we define messages {μ_r→ s, μ_s→ r} on this edge. We refer to a message μ_s→ r as outgoing from T_s and incoming to T_r. BP prescribes the following algorithm to update each of the messages: the new outgoing message for T_r towards tensor T_s is given by contracting the incoming messages to T_r on all other indices,
μ_r→ s = ( ∏_q ∈ξ_r / sμ_q→ r) T_r.
The set of 2M messages realizes a BP fixed point if all of the equations are simultaneously satisfied up to multiplicative constants. Let us assume that we have found a BP fixed point for our network 𝒯 and that each matching pair of messages has been normalized such that their dot product is unity, ⟨μ_r→ s·μ_s→ r⟩ = 1. Let us define the scalar T̃_r as resulting from the contraction of a tensor T_r with all incoming messages,
T̃_r = ( ∏_q ∈ξ_rμ_q→ r) T_r,
which we refer to as the (BP) vacuum contribution from tensor T_r. The BP approximation to the network contraction 𝒵 from Eq. <ref> is given by multiplication of these scalars
𝒵(𝒯) ≈∏_rT̃_r.
It follows that the free energy ℱ = -log𝒵 can be approximated as
ℱ(𝒯) ≈ - ∑_r log( T̃_r ),
which is known as the Bethe free energy <cit.>.
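To make the message-passing scheme above concrete, here is a minimal numpy sketch with a plain parallel update schedule; the data layout (tensors, inds, edges) and the normalization of messages by their 2-norm are our own choices rather than the authors' implementation, and damping may be needed for convergence on general networks.

import numpy as np
from string import ascii_letters

def bp_fixed_point(tensors, inds, edges, n_sweeps=500, tol=1e-10):
    # tensors[r]: ndarray; inds[r]: list of edge labels, one per axis of tensors[r];
    # edges[e] = (r, s): the two tensors sharing edge e.
    # msg[(r, e)] is the message flowing out of tensor r along edge e.
    msg = {}
    for e, (r, s) in edges.items():
        d = tensors[r].shape[inds[r].index(e)]
        msg[(r, e)] = np.ones(d) / np.sqrt(d)
        msg[(s, e)] = np.ones(d) / np.sqrt(d)

    def new_message(r, e_skip):
        subs = ascii_letters[:tensors[r].ndim]
        operands, script, out = [tensors[r]], [subs], None
        for ax, e in enumerate(inds[r]):
            if e == e_skip:
                out = subs[ax]
                continue
            q = edges[e][0] if edges[e][1] == r else edges[e][1]
            operands.append(msg[(q, e)])        # incoming message mu_{q -> r}
            script.append(subs[ax])
        m = np.einsum(",".join(script) + "->" + out, *operands)
        return m / np.linalg.norm(m)

    for _ in range(n_sweeps):
        new = {(r, e): new_message(r, e) for (r, e) in msg}
        err = max(np.linalg.norm(new[key] - msg[key]) for key in msg)
        msg = new
        if err < tol:
            break
    return msg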
§ CONFIGURATIONS IN THE BP BASIS
We now outline a methodology to improve upon the accuracy of a known BP fixed point by introducing additional terms into Eq. <ref>. For any edge (r,s) of the network, assumed to be of dimension d, we define a rank 1 projector 𝒫_rs onto the message subspace and its rank (d-1) complement 𝒫_rs^C,
𝒫_rs = μ_s→ r⊗μ_r→ s, 𝒫_rs^C = 𝕀 - 𝒫_rs.
with 𝕀 as the d-dim identity, see also Fig. <ref>(a).
We henceforth refer to 𝒫_rs as the projector onto the (BP) ground state of edge (r,s) and 𝒫_rs^C as the projector onto the excited subspace. It follows that the tensor network 𝒯 can be resolved as a sum over 2^M (BP basis) configurations, formed from all combinations of projecting each edge into either the ground or excited sub-space. We define the degree of a configuration as the number of excited edges that it contains, using δ_x to denote a degree-x configuration and W(δ_x) to denote its weight (i.e. the scalar resulting from its contraction). Thus we can recognise δ_0 as the BP vacuum; the unique configuration with all edges projected into their BP ground state. It follows that W(δ_0) evaluates to the standard BP result of Eq. <ref>. Our goal is now to characterise the weights of the remaining configurations, which may be thought of as excitations on top of the BP vacuum, such that the high-weight contributions can then be re-introduced into the approximation.
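For a single edge these projectors are straightforward to assemble from the fixed-point messages; the short numpy function below (illustrative, with our own naming) rescales the pair so that ⟨μ_r→ s·μ_s→ r⟩=1 and returns 𝒫_rs and 𝒫_rs^C.

import numpy as np

def edge_projectors(mu_r_to_s, mu_s_to_r):
    mu_r_to_s = mu_r_to_s / (mu_r_to_s @ mu_s_to_r)   # enforce <mu_{r->s}, mu_{s->r}> = 1
    P = np.outer(mu_s_to_r, mu_r_to_s)                # rank-1 projector onto the BP ground state
    P_C = np.eye(len(mu_r_to_s)) - P                  # projector onto the excited subspace
    assert np.allclose(P @ P, P) and np.allclose(P_C @ P, 0)
    return P, P_C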
A significant step towards this goal comes from the understanding derived in Ref. LOOP1 that any configuration with a dangling excitation, i.e. that contains a tensor with a single excited index, has weight zero. This result follows directly from the definition of the BP fixed point; let us assume that edge (r,s) of tensor T_r is projected via 𝒫^C_rs into the excited subspace and all other edges are in the BP ground. It follows that
( ∏_q ∈ξ_r / sμ_q→ r) T_r 𝒫^C_rs = μ_r→ s𝒫^C_rs = 0,
where Eq. <ref> was used to replace T_r with the outgoing message μ_r→ s which is, by construction in Eq. <ref>, orthogonal to the projector 𝒫^C_rs. This result, which implies that non-trivial excitations can only occur in closed loops, is useful in reducing the number of basis configurations that need be considered. For instance, when applied to the network of M=7 edges in Fig. <ref>(b), the number of configurations is reduced from 2^7=128 to only 5 configurations of non-zero weight labelled {δ_0, δ_4, δ_4', δ_6, δ_7 }. Another useful observation, as examined in Ref. LOOP3, is that the weight of a configuration containing separate excitations, i.e. excitations that do not share any common tensors, is given by the product of the individual weights
W(δ_x ⊗δ_y) = W(δ_x) × W(δ_y),
see also the example shown in Fig. <ref>(c). This result is further simplified by re-scaling each tensor T_r so that its vacuum state contribution, i.e. T̃_r from Eq. <ref>, is unity which allows the vacuum component of any configuration to be ignored. This observation implies that one only needs to understand the connected excitations in order to fully characterize a tensor network contraction.
For any reasonably large tensor network the full number of non-trivial excitations will still exceed practical consideration. However, as detailed further in Appendix <ref>, we posit that in most spatially homogeneous tensor networks the weight of a configuration δ_x will be exponentially suppressed in its degree x,
W(δ_x) ≈exp (-k x)
with k a positive scalar. While it is easy to construct counterexamples where this scaling will not hold, it is argued in Appendix <ref> that this behavior will hold for networks where the BP fixed point already provides a reasonably accurate approximation to the network contraction. Thus one can understand that the (BP basis) configurations, when ordered in increasing degree, form a series expansion for a network contraction. While the number of δ_x configurations may still grow (exponentially) quickly with x, the exponential suppression of the contribution of each configuration provides the possibility that a partial summation will rapidly converge to the exact result as higher-degree excitations are added.
§ CONTRACTION OF IPEPS
In this section we discuss how the proposed series expansion can be applied to contract infinite projected entangled pair states (iPEPS). For concreteness we focus on a hexagonal lattice PEPS |ψ⟩, local dimension d and virtual dimension m, formed from a single pair of unique tensors A and B tiled on alternate sites (such that the state has a 2-site unit cell). A closed hexagonal lattice tensor network 𝒯 = ⟨ψ|ψ⟩ of bond dimension m^2 is formed by taking the product of the PEPS with its conjugate and contracting over the site indices, see also Fig. <ref>(a). The network 𝒯 is composed of two unique tensors a and b, which result from contraction of the physical index in (A⊗ A^*) and (B⊗ B^*) respectively. In what follows we assume that we have found a BP fixed point for 𝒯 and that tensors a and b have been re-scaled such that their vacuum contributions, as defined in Eq. <ref>, are unity.
§.§ Free-energy density
A common task in tensor network algorithms is to evaluate the scalar 𝒵 associated to contracting a closed tensor network, as per Eq. <ref>. However in the context of an infinite tensor network this scalar would diverge so we instead evaluate an intensive equivalent: the free-energy density f,
f = lim_N →∞( -log(𝒵_N)/N).
where 𝒵_N represents the evaluation of a patch of N tensors from the full network. Note that, due to our normalization of the BP vacuum, the BP contribution to 𝒵_N is unity for all N (or equivalently the Bethe free energy is zero <cit.>).
Our first step is to evaluate the weights W(δ_x) of the lowest degree excitations δ_x; in the calculations performed in this manuscript we evaluate the excitations up to degree x=12 as shown in Fig. <ref>(b-e). Each of these weights is given by contracting a small tensor network formed by capping the external indices (i.e. of tensors within the excitation) with their fixed point messages and projecting the internal indices onto their excited subspace with projectors 𝒫_rs^C as defined in Eq. <ref>. However, even once these weights W(δ_x) are known, it still remains a non-trivial combinatorial task to compute their impact on the free energy density f. An approximate counting of the possible combinations of loop excitations, as derived in Appendix <ref>, gives the following result for the free energy correction
f ≈ -∑_{δ_x} L(δ_x) W(δ_x) e^S(δ_x) f.
with L(δ_x) as the number of locations per lattice site that excitation can be placed and S(δ_x) as the number of lattice sites that the excitation occupies. For instance, the δ_6 excitation shown in Fig. <ref>(b) has L(δ_6) = 1/2 (since there is one hexagon per two lattice sites) and S(δ_6) = 6. Notice that the terms exp(S(δ_x) f) in Eq. <ref> act to suppress excitations based on the amount of space they occupy. Given that Eq. <ref> contains the unknown f on both sides we cannot solve for f directly. Instead we solve Eq. <ref> iteratively by first setting f=0 on the r.h.s, computing a new value for f, then feeding the new f back into r.h.s and repeating until f is converged. Usually this requires only a few iterations to converge to high precision.
§.§ Transfer matrix
We now examine how one can compute the transfer matrix T_AB associated to cutting open a single edge from the network 𝒯 = ⟨ψ|ψ⟩ as shown in Fig. <ref>(a). Such transfer matrices are utilized in determining the optimal truncation of an internal index from the PEPS and thus play an important role in many PEPS optimization algorithms <cit.>.
The BP vacuum contribution to T_AB is shown in Fig. <ref>(b) together in Fig. <ref>(c) with examples of some of the contributions from loop excitations, all of which individually evaluate to a matrix with the same dimensions as T_AB. The series approximation to T_AB is constructed by evaluating each of the loop-excitation networks (up to some desired degree), then adding the contributions together after weighting each by a suppression factor exp(S(δ_x)f) similar to Eq. <ref>. Note this requires that the free energy density f be evaluated prior to computing T_AB. Finally, we remark that excitation terms that do not include either of the tensors adjacent to an open index do not need to be considered explicitly; their contributions, which are proportionate to the vacuum term, are already accounted for through the suppression factors.
§.§ Density matrix
We now describe how the density matrix ρ_AB associated to a pair of adjacent lattice sites can be computed. Let us first define tensor a' from taking (A⊗ A^*) while leaving the physical indices uncontracted and similarly for b', see Fig. <ref>(e). The network for ρ_AB can be realized from a substitution of a' and b' as impurities in the original network 𝒯 = ⟨ψ|ψ⟩ as shown in Fig. <ref>(d).
Similarly to the transfer matrix, only the loop excitations that include one (or both) of the tensors a' and b' will give non-trivial contribution to ρ_AB. The BP vacuum contribution to ρ_AB is shown in Fig. <ref>(f) together in Fig. <ref>(g) with an example of a loop excitation. The series approximation to ρ_AB is thus constructed in the same way as the transfer matrix; the contributing loop-excitation networks are evaluated up to some desired degree and then added together after weighting each by a suppression factor exp(S(δ_x)f).
§ BENCHMARK RESULTS
We now benchmark our approach, beginning with contraction of a PEPS representing the ground state |ψ⟩ of the hexagonal lattice AKLT model<cit.> in the thermodynamic limit. The AKLT model is particularly useful as a test bed, since |ψ⟩ can be represented exactly<cit.> with an iPEPS of bond dimension m=2. Following the methodology prescribed in Sect. <ref> we compute the free energy density f, transfer matrix T_AB, and the density matrix ρ_AB for loop expansions containing up to 12^th degree excitations δ_12 shown in Fig. <ref>. In all cases we compare against evaluations using a boundary MPS contraction<cit.> whose dimension χ has been chosen large enough, χ≈ 30, such that any truncation errors are negligible and the results can be considered as numerically exact. Fig. <ref> shows that the addition of the loop corrections improves the accuracy of the BP results dramatically, such that they appear to converge towards the exact results as the degree of the expansion is increased. The expansion to 12^th degree excitations δ_12 gave the free energy density f accurate to ε = 2×10^-7, which represents an improvement of four orders of magnitude over the BP result. That the infinite hexagonal tensor network can be characterised so accurately from only the BP vacuum plus a few of the low-degree loop excitations is remarkable. Note that we also compare against results obtained from the exact contraction of the PEPS on a finite N× N geometry (with periodic boundaries) for N={6,8,10,12 }. In all cases we see that an N^th degree loop expansion exceeds the accuracy of an exact evaluation on an N× N geometry. This is expected since the N× N systems would contain degree-N corrections (i.e. that `wrap' the whole system) whereas a degree-N loop expansion is only neglecting excitations of degree larger than N. Nonetheless, this provides additional confirmation that the loop expansions are properly accounting for all short-range contributions.
In Fig. <ref>(a) we present results for the accuracy of the free-energy density evaluated from randomly defined iPEPS (with 2-site unit cell) of local dimension d and bond dimension m. These were prepared by placing on each edge of the lattice a maximally entangled pair of dim-m particles and then projecting each vertex onto a dim-d space via a (m^3× d) isometry. The isometries were formed by first populating a (m^3× d) matrix with random elements chosen uniformly from the interval [0,1], then enforcing rotational symmetry (useful to reduce the number of unique terms in the loop expansion), and finally orthogonalizing the columns of the matrix. The results from random iPEPS, which are broadly similar to those from the AKLT model, demonstrate that the series expansion seems to be valid for typical instances of iPEPS. Notice that increasing the local dimension d (while keeping a fixed bond dimension m) was generally found to reduce the accuracy of our approach; this could be expected as increasing d also increases the per-site entanglement entropy of the random PEPS.
Fig. <ref>(b-c) presents equivalent results for kagome and square lattices. For iPEPS defined on kagome lattices we perform local manipulations to exactly transform the network into a hexagonal geometry (similar to the form of PEPS described in Ref. PESS1) prior to applying the series expansion, see Appendix <ref> for details. The square lattice results were obtained by applying the expansion directly to the square lattice, evaluated up to the δ_8 excitations as shown in Appendix <ref>. Similar to the hexagonal lattice, the results from the kagome and square lattices also show robust improvements over the BP approximation. However we see that the convergence on the square lattice is slower with respect to increasing the degree of expansion; we attribute this to the fact that the quantity of distinct loop configurations at any given degree grows much faster than on the hexagonal lattice.
§ DISCUSSION
A key result presented in this manuscript is a systematic way to improve on BP results for the contraction of tensor networks, turning BP from an uncontrolled approximation into a well-controlled approximation. This result would be directly useful in contexts where BP has been employed for the contraction of tensor networks, such as the simulation of quantum experiments<cit.>.
Another direct application of these results could be for PEPS optimization algorithms. Given that a recent work<cit.> established the equivalence of BP to the `simple update' strategy for PEPS, the series expansion could significantly enhance the accuracy of the `simple update' strategy while still remaining relatively computationally efficient. For instance, the genus-1 loop excitations on the hexagonal lattice, i.e. the δ_6 and δ_10 terms in Fig. <ref>, can be evaluated with O(m^6) cost for PEPS of bond dimension m, and this cost could also potentially be reduced via approximation strategies like sparse diagonalization. Thus the dominant terms in the expansion are still feasible to compute even for PEPS with bond dims in excess of m=100, well beyond the computational limits of boundary MPS or corner transfer matrix methods for PEPS contractions<cit.>.
We also expect the series expansions to be useful in the context of coarse-graining tensor algorithms for tensor networks, especially in relation to proposals that extend the tensor renormalization group<cit.> (TRG) by including mechanisms to identify and remove short-range loop correlations<cit.>. Given that our results provide a general framework precisely for efficiently identifying loop correlations it is likely that they would find significant utility in this setting.
Although our proposal worked well for the test cases considered, it is clear that there are also several potential ways that it could fail. Most obviously, application of the series expansion first requires finding a BP fixed point. It is unknown how difficult this task is for a PEPS representing the ground state of a local Hamiltonian or other physically relevant tensor networks. Even if a BP fixed point is found, the series expansion would not be expected to converge if the network under consideration possessed multiple degenerate, or almost degenerate, fixed points. In this scenario Eq. <ref> would fail; relative to any single BP fixed point there would exist high-degree configurations that still carry large weight (i.e related to a different BP fixed point). An example of this would be a PEPS encoding of a GHZ state, where a BP fixed point could realize either the |↑↑↑…⟩ or |↓↓↓…⟩ component of the network; relative to one of these fixed points the other component would represent a maximal-degree excitation with equal weight.
Most existing tensor network algorithms are built on the twin pillars of tensor contractions and tensor decompositions/truncations. The results presented in this manuscript, which prescribe a general procedure for resolving a complicated network into its fundamental components, could represent a third pillar for tensor network algorithms; one that could have interesting synergies with established routines. For instance, in evaluating a network 𝒯, one could begin by expanding in terms of the BP vacuum, the short-loop excitations, and the remaining `residual'. Potentially, one could then apply a traditional (approximate) contraction strategy on the residual which, presumably, would be more amenable to truncations due to the separation from the short-loop excitations. Exploring the ways that series expansions could complement or enhance existing tensor network algorithms remains an interesting avenue for future research.
We thank AWS for supporting the quantum computing program. GKC acknowledges support from the US DOE, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator (QSA), and from a generous gift from AWS.
TN1
J. I. Cirac and F. Verstraete, Renormalization and tensor product states in spin chains and lattices, J. Phys. A: Math. Theor. 42, 504004 (2009).
TN2
T.G. Kolda and B.W. Bader, Tensor decompositions and applications, SIAM review, 51(3), pp.455-500 (2009).
TN3
G. Evenbly and G. Vidal, Tensor network states and geometry, J. Stat. Phys. 145, 891-918 (2011).
TN4
R. Orus, A practical introduction to tensor networks: Matrix product states and projected entangled pair states, Ann. Phys. 349, 117 (2014).
TN5
J. C. Bridgeman and C. T. Chubb, Hand-waving and Interpretive Dance: An Introductory Course on Tensor Networks, J. Phys. A: Math. Theor. 50, 223001 (2017).
TN6
S. Montangero, Introduction to Tensor Network Methods - Numerical Simulations of Low-dimensional Many-body Quantum Systems, (Springer, Berlin, 2018).
TN7
R. Orus, Tensor networks for complex quantum systems, Nat. Rev. Phys. 1, 538-550 (2019).
TN8
P. Silvi, F. Tschirsich, M. Gerster, J. Jünemann, D. Jaschke, M. Rizzi, and S. Montangero, The Tensor Networks Anthology: Simulation techniques for many-body quantum lattice systems, SciPost Phys. Lect. Notes 8 (2019).
TN9
S.-J. Ran, E. Tirrito, C. Peng, X. Chen, L. Tagliacozzo, G. Su, and M. Lewenstein, Tensor Network Contractions Methods and Applications to Quantum Many-Body Systems, (Springer, 2020).
TN10
J. I. Cirac, D. Pérez-García, N. Schuch, and F. Verstraete, Matrix product states and projected entangled pair states: Concepts, symmetries, theorems, Rev. Mod. Phys. 93, 045003 (2021).
QC1
E. S. Fried, N. P. D. Sawaya, Y. Cao, I. D. Kivlichan, J. Romero, and A. Aspuru-Guzik, qTorch: The quantum tensor contraction handler, PLoS ONE 13(12): e0208510. (2018).
QC2
B. Villalonga, S. Boixo, B. Nelson, C. Henze, E. Rieffel, R. Biswas and S. Mandrà, A flexible high-performance simulator for verifying and benchmarking quantum circuits implemented on real hardware, npj Quantum Inf 5, 86 (2019).
QC3
R. Schutski, D. Lykov, and I. Oseledets, Adaptive algorithm for quantum circuit simulation, Phys. Rev. A 101, 042335 (2020).
QC4
Y. Zhou, E. M. Stoudenmire, and X. Waintal, What Limits the Simulation of Quantum Computers?, Phys. Rev. X 10, 041038 (2020). https://doi.org/10.1103/PhysRevX.10.041038DOI: 10.1103/PhysRevX.10.041038.
QC5
M. Levental, Tensor Networks for Simulating Quantum Circuits on FPGAs, arXiv:2108.06831 (2021).
QC6
F. Pan and P. Zhang, Simulation of Quantum Circuits Using the Big-Batch Tensor Network Method, Phys. Rev. Lett. 128, 030501 (2022)
QC7
T. Vincent, L. J. O'Riordan, M. Andrenkov, J. Brown, N. Killoran, H. Qi, and I. Dhand, Jet: Fast quantum circuit simulations with parallel task-based tensor-network contraction, arXiv:2107.09793 (2021).
QC8
F. Pan and P. Zhang, Simulation of Quantum Circuits Using the Big-Batch Tensor Network Method, Phys. Rev. Lett. 128, 030501 (2022). https://doi.org/10.1103/PhysRevLett.128.030501DOI: 10.1103/PhysRevLett.128.030501.
QC9
F. Pan, K. Chen, and P. Zhang, Solving the Sampling Problem of the Sycamore Quantum Circuits, Phys. Rev. Lett. 129, 090502 (2022). https://doi.org/10.1103/PhysRevLett.129.090502DOI: 10.1103/PhysRevLett.129.090502.
QC10
T. Ayral, T. Louvet, Y. Zhou, C. Lambert, E. M. Stoudenmire, and X. Waintal. Density-Matrix Renormalization Group Algorithm for Simulating Quantum Circuits with a Finite Fidelity, PRX Quantum 4, 020304 (2023). https://doi.org/10.1103/PRXQuantum.4.020304DOI: 10.1103/PRXQuantum.4.020304.
QC11
J. Tindall, M. Fishman, M. Stoudenmire, and D. Sels. Efficient tensor network simulation of IBM’s kicked Ising experiment, arXiv: 2306.14887 (2023).
QC12
T. Begušić, J. Gray, and G. K.-L. Chan. Fast and converged classical simulations of evidence for the utility of quantum computing before fault tolerance, Sci. Adv. 10, eadk4321 (2024).
MPS1
M. Fannes, B. Nachtergaele, and R. F. Werner, Finitely correlated states on quantum spin chains, Commun. Math. Phys. 144, 443 (1992).
MPS2
S. Ostlund and S. Rommer, Thermodynamic limit of density matrix renormalization, Phys. Rev. Lett. 75, 3537 (1995).
DMRG1
S. R. White, Density matrix formulation for quantum renormalization groups, Phys. Rev. Lett. 69, 2863 (1992).
DMRG2
S. R. White, Density-matrix algorithms for quantum renormalization groups, Phys. Rev. B 48, 10345 (1993).
DMRG3
U. Schollwoeck, The density-matrix renormalization group, Rev. Mod. Phys. 77, 259 (2005).
PEPS1
F. Verstraete and J. I. Cirac, Renormalization algorithms for quantum-many-body systems in two and higher dimensions, arXiv:cond-mat/0407066.
PEPS2
F. Verstraete, J.I. Cirac, and V. Murg, Matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems, Adv. Phys. 57, 143 (2008).
MERA1
L. Cincio, J. Dziarmaga, and M. M. Rams, Multiscale entanglement renormalization ansatz in two dimensions: quantum Ising model, Phys. Rev. Lett. 100, 240603 (2008).
MERA2
G. Evenbly and G. Vidal, Entanglement renormalization in two spatial dimensions, Phys. Rev. Lett. 102, 180406 (2009).
PEPS3
N. Schuch, M. M. Wolf, F. Verstraete, and J. I. Cirac, Computational Complexity of Projected Entangled Pair States, Phys. Rev. Lett. 98, 140506 (2007). https://doi.org/10.1103/PhysRevLett.98.140506DOI: 10.1103/PhysRevLett.98.140506.
PEPS4
T. Nishino and K. Okunishi, Corner Transfer Matrix Renormalization Group Method, JPSJ 65, 891 (1996). https://doi.org/10.1143/JPSJ.65.891DOI: 10.1143/JPSJ.65.891.
PEPS5
J. Jordan, R. Orús, G. Vidal, F. Verstraete, and J. I. Cirac, Classical simulation of infinite-size quantum lattice systems in two spatial dimensions, Phys. Rev. Lett. 101, 250602 (2008).
PEPS6
R. Orús and G. Vidal, Simulation of two-dimensional quantum systems on an infinite lattice revisited: Corner transfer matrix for tensor contraction, Phys. Rev. B 80, 094403 (2009).
PEPS7
P. Corboz, T. M. Rice, and M. Troyer, Competing States in the t-J Model: Uniform d-Wave State versus Stripe State, Phys. Rev. Lett. 113, 046402 (2014).
PEPS8
M. T. Fishman, L. Vanderstraeten, V. Zauner-Stauber, J. Haegeman, and F. Verstraete, Faster methods for contracting infinite two-dimensional tensor networks, Phys. Rev. B 98, 235148 (2018).
BP0
H. A. Bethe, Statistical theory of superlattices, Proc. R. Soc. Lond. A 150:552–575 (1935). http://doi.org/10.1098/rspa.1935.0122DOI: 10.1098/rspa.1935.0122.
BP1
J. Pearl. Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach, Proceedings of the Second AAAI Conference on Artificial Intelligence. AAAI’82. Pittsburgh, Pennsylvania: AAAI Press, 1982, pp. 133–136.
BP2
J. Pearl. Fusion, propagation, and structuring in belief networks, Artificial Intelligence 29, 3 (1986), pp. 241–288. https://doi.org/10.1016/0004-3702(86)90072-XDOI: 10.1016/0004-3702(86)90072-X.
BP3
M. Mézard, G. Parisi, and R. Zecchina, Analytic and Algorithmic Solution of Random Satisfiability Problems, Science 297, 812 (2002). https://www.science.org/doi/pdf/10.1126/science.1073287DOI: 10.1126/science.1073287.
BP4
J. S. Yedidia, W. T. Freeman, and Y. Weiss, Understanding Belief Propagation and Its Generalizations, Exploring Artificial Intelligence in the New Millennium. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2003, pp. 239–269. ISBN: 1558608117.
BP5
J. S. Yedidia, W. T. Freeman, and Y. Weiss, Constructing free-energy approximations and generalized belief propagation algorithms, IEEE Transactions on Information Theory 51.7 (2005), pp. 2282–2312. https://doi.org/10.1109/TIT.2005.850085DOI: 10.1109/TIT.2005.850085.
BP6
M. Mézard, G. Parisi, and M. A. Virasoro, Spin glass theory and beyond: An Introduction to the Replica Method and Its Applications, Vol. 9 (World Scientific Publishing Company, 1987). https://doi.org/10.1142/0271DOI: 10.1142/0271.
BP7
S. Yoon, A. V. Goltsev, S. N. Dorogovtsev, and J. F. F. Mendes, Belief-propagation algorithm and the Ising model on networks with arbitrary distributions of motifs, Phys. Rev. E 84, 041144 (2011).
BP8
B. Karrer, M. E. J. Newman, and L. Zdeborová, Percolation on Sparse Networks, Phys. Rev. Lett. 113, 208702 (2014).
BP9
M. S. Leifer and D. Poulin, Quantum Graphical Models and Belief Propagation, Annals of Physics 323.8 (2008), pp. 1899–1946. https://doi.org/10.1016/j.aop.2007.10.001DOI: 10.1016/j.aop.2007.10.001.
BP10
D. Poulin and E. Bilgin, Belief propagation algorithm for computing correlation functions in finite-temperature quantum many-body systems on loopy graphs, Phys. Rev. A 77, 052318 (2008). https://doi.org/10.1103/PhysRevA.77.052318DOI: 10.1103/PhysRevA.77.052318.
BP11
A. Wrigley, W. S. Lee, and N. Ye, Tensor Belief Propagation, Proceedings of the 34th International Conference on Machine Learning. Ed. by Doina Precup and Yee Whye Teh. Vol. 70. Proceedings of Machine Learning Research. PMLR, June 2017, pp. 3771–3779.
BP12
E. Robeva and A. Seigal, Duality of graphical models and tensor networks, Information and Inference: A Journal of the IMA 8.2 (June 2018), pp. 273–288. ISSN: 2049-8772. https://doi.org/10.1093/imaiai/iay009DOI: 10.1093/imaiai/iay009.
BP13
R. Alkabetz and I. Arad, Tensor networks contraction and the belief propagation algorithm, Phys. Rev. Research 3, 023073 (2021). https://doi.org/10.1103/ PhysRevResearch.3.023073DOI: 10.1103/ PhysRevResearch.3.023073.
BP14
S. Sahu and B. Swingle, Efficient tensor network simulation of quantum many-body physics on sparse graphs, arXiv: 2206.04701 (2022).
BP15
C. Guo, D. Poletti, and I. Arad, Block Belief Propagation Algorithm for 2D Tensor Networks, arXiv: 2301.05844 (2023).
BP16
N. Pancotti and J. Gray, One-step replica symmetry breaking in the language of tensor networks, arXiv: 2306.15004 (2023).
BP17
Y. Wang, Y. E. Zhang, F. Pan, and P. Zhang, Tensor Network Message Passing, arXiv: 2305.01874 (2023).
BP18
J, Tindall, M. Fishman, Gauging tensor networks with belief propagation, SciPost Phys. 15, 222 (2023).
IBM1
Y. Kim, A. Eddins, S. Anand, K. X. Wei, E. Van Den Berg, S. Rosenblatt, H. Nayfeh, Y. Wu, M. Zaletel, K. Temme, and A. Kandala, Evidence for the utility of quantum computing before fault tolerance, Nature 618, 500–505 (2023).
ISO1
M. P. Zaletal and F. Pollmann, Isometric Tensor Network States in Two Dimensions, Phys. Rev. Lett. 124, 037201 (2020).
https://doi.org/10.1103/PhysRevLett.124.037201DOI:10.1103/PhysRevLett.124.037201.
LOOP1
M. Chertkov and V. Chernyak, Loop calculus in statistical physics and information science, Phys. Rev. E 73, 65102(R) (2006).
LOOP2
M. Chertkov and V. Chernyak, Loop series for discrete statistical models on graphs, J. Stat. Mech: Theory Exp. 6, P06009 (2006).
LOOP3
V. Gómez, J. M. Mooij, and H. J. Kappen, Truncating the Loop Series Expansion for Belief Propagation, Journal of Machine Learning Research 8 (2007) 1987-2016.
AKLT1
I Affleck, T. Kennedy, E. H. Lieb, and H. Tasaki, Rigorous results on valence-bond ground states in antiferromagnets, Phys. Rev. Lett. 59, 799 (1987).
AKLT2
I. Affleck, T. Kennedy, E. H. Lieb, and H. Tasaki, Valence Bond Ground States
in Isotropic Quantum Antiferromagnets, Comm. Math. Phys. 115, 477 (1988).
PESS1
Z. Y. Xie, J. Chen, J. F. Yu, X. Kong, B. Normand, and T. Xiang, Tensor Renormalization of Quantum Many-Body Systems Using Projected Entangled Simplex States, Phys. Rev. X 4, 011025 (2014).
TRG1
M. Levin and C. P. Nave, Tensor Renormalization Group Approach to Two-Dimensional Classical Lattice Models, Phys. Rev. Lett. 99, 120601 (2007).
TRG2
G. Evenbly and G. Vidal, Tensor Network Renormalization, Phys. Rev. Lett. 115, 180405 (2015).
TRG3
S. Yang, Z.-C. Gu, and X.-G. Wen, Loop Optimization for Tensor Network Renormalization, Phys. Rev. Lett. 118, 110504 (2017). https://doi.org/10.1103/PhysRevLett.118.110504DOI: 10.1103/PhysRevLett.118.110504.
TRG4
G. Evenbly, Gauge fixing, canonical forms, and optimal truncations in tensor networks with closed loops, Phys. Rev. B 98, 085155 (2018). https://doi.org/10.1103/PhysRevB.98.085155DOI: 10.1103/PhysRevB.98.085155.
TRG5
M. Hauru, C. Delcamp, and S. Mizera, Renormalization of tensor networks using graph-independent local truncations, Phys. Rev. B 97, 045111 (2018).
LBP1
A. T. Ihler, J. W. Fischer, and A. S. Willsky, Loopy Belief Propagation: Convergence and Effects of Message Errors, J. Mach. Learn. Res. 6, 905 (2005).
§ EXPONENTIAL SUPPRESSION OF HIGH-DEGREE EXCITATIONS
The utility of the proposed loop expansions is built on the premise that the weight of excitations (i.e. on top of the BP vacuum) are exponentially diminishing in their degree; this property allows us to accurately approximate a tensor network contraction using only a few low-degree excitations. In this appendix we justify this premise.
Let 𝒯 be a finite tensor network containing at least a single closed loop. We assume that a BP fixed point has been found and that the tensors in network 𝒯 have been normalized such that their vacuum contributions are unity (or equivalently that the Bethe free energy is zero). Finally, we also assume that the BP fixed point accurately approximates the network contraction; that the difference between the full network contraction 𝒵(𝒯) and the BP approximation 𝒵_BP is small. We now consider isolating a single length-x loop of the network by absorbing the fixed point messages on all edges outside of the loop into their adjoining tensors, such that the reduced network is realized as the trace of a product of matrices as shown in the example of Fig. <ref>(a). For simplicity we assume that the matrices within the loop are copies of a single matrix A, which would only occur in practice if the original network possessed sufficient spatial symmetries, although the essence of our argument does not rely on this assumption. It follows that the network contraction can be written as 𝒵(𝒯) = tr{A^x}. The reduced network can be decomposed into its BP and non-BP components,
𝒵(𝒯) = tr{(A𝒫)^x} + tr{(A𝒫^C)^x},
with 𝒫 as the rank-1 projector onto the message subspace and 𝒫^C its complement, see also Fig. <ref>(b). Note that 𝒵_BP = tr{(A𝒫)^x} = 1 due to the normalization condition imposed. Given that BP fixed points are known to correspond to stationary points of the Bethe free energy, and that we have assumed our known fixed point accurately approximates the network contraction, it follows that the projector 𝒫 onto the message subspace should maximize 𝒵_BP. Thus 𝒫 must equal the outer product of the dominant left/right eigenvectors of matrix A (or, equivalently, the fixed point messages within the loop must be the left/right eigenvectors of A). Let {λ_0, λ_1, λ_2,…} be the eigenvalues of A ordered with descending magnitude, where the normalization condition implies that |λ_0|=1. It follows that the weight W(δ_x) of the loop correction term can be written as
W(δ_x) = tr{(A𝒫^C)^x} = ∑_r=1^d (λ_r)^x.
Let us now consider how Eq. <ref> would scale in the limit of large length-x (still assuming that the matrices within the loop are identical). Firstly, we should realize that the sub-leading eigenvalues should have magnitude strictly less than unity, i.e. |λ_1| < 1, otherwise the loop correction term would out-weight the BP term, violating the prior assumption that the BP fixed point accurately approximates the full network contraction 𝒵(𝒯). In the large-x limit Eq. <ref> will be dominated be the largest magnitude eigenvalues, so if λ_1 of A is n-fold degenerate it follows
W(δ_x) ≈ n (λ_1)^x
which may be rewritten as
W(δ_x) ≈ e^-kx
with k=-log(λ_1) - (1/x)log(n). Notice that k tends to a positive constant with large-x, given that λ_1 < 1, such that the weight W(δ_x) tends to purely exponential decay in degree-x as was posited Eq. <ref> of the main text.
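This eigenvalue picture is easy to confirm numerically; the short script below (an illustration with a random positive matrix, not the benchmark code of the main text) checks that tr{(A𝒫^C)^x} equals the sum of the sub-leading eigenvalues raised to the power x.

import numpy as np

np.random.seed(1)
d, x = 4, 8
A = np.random.rand(d, d)                    # positive entries: unique dominant eigenvalue
vals, R = np.linalg.eig(A)
order = np.argsort(-np.abs(vals))
vals, R = vals[order], R[:, order]
L = np.linalg.inv(R)                        # rows of L are the left eigenvectors
A, vals = A / vals[0].real, vals / vals[0]  # normalize so that lambda_0 = 1
P = np.outer(R[:, 0], L[0])                 # spectral projector onto the dominant eigenpair
PC = np.eye(d) - P
lhs = np.trace(np.linalg.matrix_power(A @ PC, x)).real
rhs = (vals[1:] ** x).sum().real
print(lhs, rhs)                             # the two agree up to numerical error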
Although our argument assumed, for simplicity, that the transfer matrices within any loop were identical, this condition is not strictly necessary; it is only required that the transfer matrices have a gap between their dominant and sub-leading eigenvalues in order to realize (approximate) exponential decay. Finding a general characterisation of the networks that are expected to have a gap in the transfer matrix spectrum associated to any closed loop is likely a difficult task (and certainly beyond the scope of this manuscript). However, given that the loop series expansion is only expected to be viable for networks where the loop corrections are small in comparison to the BP approximation, it follows that the gap condition must hold in this circumstance. Finally, we remark that our argument is only valid for excitations that realize genus-1 loops and not, for instance, higher-genus loops like the δ_11 excitations shown in Fig. <ref>(d). Nonetheless, it seems to be the case that the scaling of Eq. <ref> does hold for higher genus loop excitations.
§ LOOP EXCITATIONS IN THE THERMODYNAMIC LIMIT
In any finite tensor network it is possible, at least in principle, to enumerate over the different possible loop configurations, organize them by degree, and then evaluate the terms out to some desired degree. In contrast, in the thermodynamic limit, even counting the density of terms of some fixed degree may be an intractable combinatorial problem. To this end we now discuss two different approximations which can be applied to use the loop expansion to calculate the free energy density f, as defined in Eq. <ref>, of a homogeneous tensor network on a lattice in the thermodynamic limit (e.g. an iPEPS).
§.§ Single-excitation approximation
The first method that we propose to estimate the contribution of loop excitations to the free energy density counts only the configurations that contain a single excitation on top of the BP vacuum. Let 𝒯_N be a homogeneous tensor network on a square lattice of N sites, which is assumed to evaluate to some scalar 𝒵_N. We focus on the task of evaluating the free energy density f_N defined
f_N ≡ -log(𝒵_N)/N.
Let us assume that we have already found the BP fixed point of 𝒯_N, and that the network has been normalized such that the BP vacuum is unity, 𝒵_BP = 1 (or equivalently that the Bethe free energy is zero).
We shall compute the contributions from configurations that have a single excitation, with all other tensor edges set in their BP ground state. For simplicity, here we only consider the corrections arising from the δ_4 excitations (occupying a single square on the lattice) although the method we describe is easily extendable to include arbitrary loop excitations. On the lattice of N sites there are N distinct locations where the δ_4 excitation could be placed, see Fig. <ref>(a), with each configuration having a weight of W(δ_4). It follows that the scalar 𝒵_N of the network can be approximated as
𝒵_N≈ 1 + N W(δ_4),
thus the free energy density can be further approximated as
f_N ≈-log(1 + N W(δ_4))/N
≈ -W(δ_4).
To get to the second line of working in Eq. <ref> we have used log(1+ϵ) ≈ϵ which requires 1/N ≫ W(δ_4) to be valid. That the single-excitation approximation is only valid for small N makes sense; for larger N lattices the contributions from configurations with double-excitations (or higher) will dominate as their multiplicity scales at least as N^2. However, given that the result from Eq. <ref> is independent of N, it may still be used as an approximation to the free energy density f in the thermodynamic limit. Another way of interpreting this result is that we use the single-excitation approximation to calculate the free energy density on only a small patch of the otherwise infinite system. We can easily generalize Eq. <ref> to include arbitrary excitations δ_x,
f ≈∑_{δ_x} -L(δ_x) W(δ_x),
where L(δ_x) is the number of locations per lattice site that the excitation δ_x can be placed.
§.§ Multi-excitation approximation
The second method that we propose to estimate the contribution of loop excitations to the free energy density attempts to directly include contributions from configurations that have multiple excitations. For simplicity we again focus on the δ_4 excitations, noting that the derivation is easily extended to include arbitrary excitations. We now resolve the tensor network 𝒯_N as the BP contribution 𝒵_BP plus N distinct contributions arising from the placement of a δ_4 excitation on the lattice as shown Fig. <ref>(b). However, here in each contribution we freeze only edges connecting the excitation to their BP ground while leaving the other edges in the network free. It follows that each of these contributions has total weight W(δ_4) 𝒵̃_N-4, where 𝒵̃_N-4 represents the scalar from contracting the remaining tensor 𝒯̃_N-4 network on (N-4) sites. Thus the scalar 𝒵_N from the network contraction is approximated as
𝒵_N≈ 1 + N W(δ_4) 𝒵̃_N-4.
Before continuing our derivation it is informative to reflect on the advantages and disadvantages of the proposed approximation. A key advantage of this proposal is that it implicitly includes multi-excitations; when resolving a contribution containing a single W(δ_4) excitation the remaining network 𝒯̃_N-4 could subsequently be resolved into a further sum of excitations. However, a significant disadvantage is that this approximation counts certain configurations multiple times, see Fig. <ref>(c-d) for examples. While it could be possible to use a more sophisticated strategy that subtracts out contributions from multiply-counted configurations, here we simply ignore them as in practice they seem to have minimal impact on accuracy.
In order to make headway with Eq. <ref> another approximation is needed; we assume that the remaining network 𝒯̃_N-4 has the same free energy density f_N as the original network 𝒯_N. In terms of the respective contraction scalars, 𝒵̃_N-4 and 𝒵_N, this implies
𝒵̃_N-4≈ e^4f_N𝒵_N.
Note that this approximation for 𝒵̃_N-4 ignores the boundary effects (i.e. resulting from fixing some of the network edges in their BP ground). Substitution of Eq. <ref> into Eq. <ref> gives
𝒵_N≈ 1 + 𝒵_N(e^4f_N N W(δ_4)),
which can be rearranged to give
𝒵_N≈(1 - e^4f_N N W(δ_4))^-1.
Finally, the free energy density f_N can be expressed as
f_N ≈log(1-e^4f_NN W(δ_4))/N
≈ -e^4f_N W(δ_4),
where the approximation log(1+ϵ) ≈ϵ is used to obtain the second line of working. This formula is easily generalized to include arbitrary excitations δ_x
f ≈∑_{δ_x} -L(δ_x) W(δ_x) e^S(δ_x) f,
with L(δ_x) as the number of locations per lattice site that the excitation can be placed and S(δ_x) as the number of lattice sites that the excitation occupies. Notice that Eq. <ref> is identical to Eq. <ref> apart from the addition of the extra terms exp(S(δ_x) f), which we refer to as bulk suppression factors as they act to suppress excitations that occupy larger regions of the network. The appearance of these factors in the multi-excitation derivation makes intuitive sense: excitations that occupy larger regions leave less `room' for other excitations to occur and are penalised accordingly (whereas this consideration is irrelevant if only single excitations are allowed).
Finally, we remark that, in general, it will not be possible to solve Eq. <ref> directly for (the loop corrections to) the free energy density f as it appears on both sides of the equation. However, given that these corrections are assumed to be relatively weak (otherwise the series expansion itself would not be valid), we are able to use an iterative strategy to approximately solve for f. This strategy is as follows: first we set f=0 on the r.h.s of Eq. <ref> to compute a new value for f, before then feeding the new f back into the r.h.s and repeating until f is converged. In our benchmark calculations we observed that this strategy only required a few iterations in order to converge to high precision.
§.§ Comparison
We have presented two approaches, each based on different approximations, for using the loop expansion to estimate the free energy density on an infinite, homogeneous tensor network. A comparison of these approaches for the hexagonal lattice AKLT model is shown in Fig. <ref>. While both approximations show similar accuracy when the expansion includes only the δ_6 excitation, the accuracy of single-excitation approximation shows minimal improvement as higher-degree terms are included in the expansion. In contrast, the multi-excitation approximation appears to converge to the exact result as the expansion is taken to include higher-degree terms. Similar results were also observed for the AKLT models on square and kagome lattices, and for randomly initialized PEPS tensor networks. These results demonstrate that the bulk suppression factors exp(S(δ_x) f), which appear only in Eq. <ref> derived using the multi-excitation approximation, are vital to converge the loop expansions to high accuracy.
§ KAGOME AND SQUARE LATTICES
In this appendix we provide some of the details of the benchmark results provided for iPEPS on kagome and square lattices. An iPEPS for the kagome lattice is shown in Fig. <ref>(a). Although we could perform the series expansion on the (square norm) of this network directly, we instead found it preferable to perform some preliminary restructuring. The reasoning for this was to remove the triangular loops from the network which are otherwise problematic since (i) the weight of the excitations around the triangular loops are likely to be large (since the loops are short) and (ii) they generate a multitude of low-degree terms that would need to be included in the expansion. The restructuring is depicted in Fig. <ref>(b), where each (5-index) PEPS tensor is first decomposed as a product of three 3-index tensors, then the tensors surrounding each triangle of the kagome lattice are subsequently contracted into a single tensor (located at the center of each kagome lattice triangle). These manipulations result in a PEPS with a hexagonal lattice geometry but where the physical indices are located on tensors set on the edges of the hexagonal lattice as shown in Fig. <ref>(c). Upon forming the square norm ⟨ψ|ψ⟩ a purely hexagonal lattice tensor network is obtained, as shown in Fig. <ref>(d), which can be treated with the same expansion as described for the hexagonal lattice PEPS. Note that the manipulations performed in Fig. <ref>(b) may require an increase in PEPS bond dimension from m→ m^2, although in many cases the new bond dimension could be truncated without incurring significant error. This example of manipulating the kagome lattice into a hexagonal lattice highlights an interesting point: that the accuracy of the series expansion can depend on the particulars of the tensor network representation of a quantum state |ψ⟩, rather than solely on the properties of |ψ⟩ itself (e.g. entanglement and correlations).
The benchmark results of Sect. <ref> also explore contraction of PEPS on the square lattice, both for the ground-state of the AKLT model as well as for randomly initialized networks. The square lattice results were computed similarly to the methods prescribed for the hexagonal lattice, both in terms of evaluating the series expansion and of evaluating the numerically exact results using boundary MPS. Several of the loop correction terms are shown in Fig. <ref>, where it can be seen that the number of distinct loop excitations of degree-x scales more quickly as a function of x than was seen with the hexagonal lattice.
|
http://arxiv.org/abs/2409.02336v1 | 20240903235128 | Comparative Analysis of Learning-Based Methods for Transient Stability Assessment | [
"Xingjian Wu",
"Xiaoting Wang",
"Xiaozhe Wang",
"Peter E. Caines",
"Jingyu Liu"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
|
http://arxiv.org/abs/2409.03050v1 | 20240904193755 | Scaling inequalities and limits for Robin and Dirichlet eigenvalues | [
"Scott Harman"
] | math.SP | [
"math.SP",
"35P15"
] |
§ ABSTRACT
For the Laplacian in spherical and hyperbolic spaces, Robin eigenvalues in two dimensions and Dirichlet eigenvalues in higher dimensions are shown to satisfy scaling inequalities analogous to the standard scale invariance of the Euclidean Laplacian. These results extend work of Langford and Laugesen to Robin problems and to Dirichlet problems in higher dimensions.
In addition, scaled Robin eigenvalues behave exotically as the domain expands to a 2-sphere, tending to the spectrum of an exterior Robin problem.
§ INTRODUCTION
§.§ From Euclidean to spherical and hyperbolic
In Euclidean space, we have the following scaling identity of eigenvalues for every t > 0:
t^2λ(tΩ) = λ(Ω)
where λ is a Dirichlet or Neumann Laplacian eigenvalue on a sufficiently regular domain Ω. Robin eigenvalues λ(Ω, α) with parameter α∈ℝ also satisfy a scale invariance provided one also scales the parameter, namely
t^2 λ(tΩ, α/t) = λ(Ω, α).
Equivalently, one can interpret these identities as the two functionals being constant for every value of t. However, when space is curved, the above identities fail, even after making sense of linear dilation on the new spaces. Our goal is to recover information about these functionals on spherical and hyperbolic spaces.
We will analyze the functionals
λ_k(Ω_t)S(t)^2
where λ_k is a Laplace–Beltrami eigenvalue, Ω_t is a spherical cap or geodesic ball, and S(t) is some natural geometric scaling factor. For Robin eigenvalues, we will also look for natural normalizing factors N(t) and their corresponding functionals
λ_k(Ω_t, α / N(t))S(t)^2.
The three main results in this paper are Robin monotonicity in spherical and hyperbolic space in two dimensions, Dirichlet monotonicity in spherical and hyperbolic space in higher dimensions, and computations of the limits for the Robin functionals. The limiting cases exhibit exotic behavior: as the spherical caps expand to fill up the entire sphere, the scaled Robin functional converges to a negative eigenvalue of the exterior Robin problem on the complement of a Euclidean disk.
§.§ Unifying spherical and hyperbolic notation
Let 𝕊^n be the unit sphere of positive curvature 1. We will employ the coordinate system (θ, ξ) where θ∈ (0, π) is the angle of aperture from the north pole and ξ∈𝕊^n-1 represents the other angular components. Observe that any point (θ, ξ) has geodesic distance θ from the north pole. The domain of interest is
C(Θ) := {(θ, ξ) : 0 ≤θ < Θ, ξ∈𝕊^n-1}
where 0 < Θ < π, or geometrically, the spherical cap of aperture Θ. We will consider the eigenvalue problem on C(Θ):
-Δ_𝕊^n u = λ u
subject to Dirichlet, Neumann, or Robin boundary conditions.
We also consider hyperbolic space ℍ^n of negative curvature -1. In this space, for Θ < 0 we put C(Θ) as the geodesic disk of radius |Θ|. There is the analogous coordinate system (θ, ξ) where θ∈ (0, ∞) and ξ∈𝕊^n-1, and the corresponding eigenvalue problem
-Δ_ℍ^n u = λ u.
Frequently we will denote the Laplace–Beltrami operator on these sets as
Δ_C(Θ) :=
Δ_ℍ^n, -∞ < Θ < 0,
Δ_𝕊^n, 0 < Θ < π.
§.§ Two-dimensional Robin results
Our goal is to quantify the effect that curving space has on the scaled eigenvalues, that is, how the geometry of the space interacts with its underlying analytic properties. In Euclidean ℝ^n, taking 𝔹^n as the unit ball, the constant function t^2 λ(t𝔹^n) takes the appealing geometric form of an eigenvalue times the radius squared. In curved spaces such as 𝕊^n and ℍ^n, it is unclear what the “scaling factor” should be, i.e., with what geometric quantity one should replace t^2.
In two dimensions, Langford and Laugesen in <cit.> present results and scaling factors for Dirichlet and Neumann eigenvalues in two dimensions. A natural question to ask is whether analogous scaling results apply to Robin eigenvalues. We recall that the Robin problem is
{ -Δ_C(Θ) u = λ u in C(Θ),
∂ u/∂ n + α u = 0 on ∂ C(Θ),
.
where ∂/∂ n is the outer unit normal derivative and α is a real-valued parameter. This problem induces a discrete spectrum of eigenvalues
λ_1(Θ, α) < λ_2(Θ, α) ≤λ_3(Θ, α) ≤…→∞
where λ_k(Θ, α) is the k^th Robin eigenvalue on C(Θ) with parameter α.
The Euclidean scale invariance (<ref>) suggests that the Robin parameter needs to be suitably scaled in the hyperbolic and spherical cases. Additionally, if the Robin parameter is negative, the spectrum now has negative eigenvalues. The Robin problem thus contrasts sharply with Neumann and Dirichlet problems, which have non-negative spectra. The negative eigenvalues of course tend to reverse the monotonicity of the functional, but the main interest for the negative eigenvalues is their peculiar limiting behavior.
First, we handle positive Robin parameters (and hence positive Robin eigenvalues). A straightforward generalization of <cit.> holds. The main adaptation for the Robin problem is to scale the Robin parameter α by the perimeter of the geodesic disk (sinh |Θ|) or by the perimeter of the spherical cap (sinΘ). At Θ = 0, we define λ_k(𝔻, α) to be the k^th Robin eigenvalue of the Euclidean unit disk 𝔻⊂ℝ^2 with parameter α.
Fix n = 2, k ≥ 1, and α > 0.
* The function
Θ↦λ_k(Θ, α/sinh |Θ|) 4 tanh^2(Θ/2), Θ∈ (-∞,0) ,
λ_k(𝔻, α) , Θ=0 ,
λ_k(Θ, α/sinΘ) 4 tan^2(Θ/2) , Θ∈ (0,π),
increases strictly and continuously from 0 to ∞. See Figure <ref>.
* The function
Θ↦λ_k(Θ, α/sinh |Θ|) sinh^2(Θ), Θ∈ (-∞,0) ,
λ_k(𝔻, α), Θ=0 ,
λ_k(Θ, α/sinΘ) sin^2(Θ) , Θ∈ (0,π),
decreases strictly and continuously from ∞ to 0. See Figure <ref>.
The function (<ref>) has the geometric interpretation of an eigenvalue times stereographic radius squared, where the stereographic transformation is detailed in (<ref>). The function (<ref>) has the interpretation of an eigenvalue times perimeter squared.
Langford and Laugesen showed the Neumann case α = 0 in <cit.>. For the Dirichlet case (i.e., α→∞) the theorem was also proven in <cit.>; in that case the function (<ref>) increases from 1 to ∞.
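For concreteness, the anchor value λ_k(𝔻, α) at Θ = 0 can be computed numerically by separation of variables on the unit disk: for α > 0 every eigenvalue has the form λ = x^2 where x solves x J_j'(x) + α J_j(x) = 0 for some angular mode j. The following Python sketch (ours, not part of the paper) does this with SciPy; the truncation parameters j_max and x_max must be taken large enough to capture the first k eigenvalues.

import numpy as np
from scipy.special import jv, jvp
from scipy.optimize import brentq

def disk_robin_eigenvalues(alpha, j_max=6, x_max=25.0, n_grid=4000):
    # Robin eigenvalues of the unit disk for alpha > 0: eigenfunctions are
    # J_j(sqrt(lam) r) {cos, sin}(j phi), and du/dn + alpha u = 0 at r = 1
    # becomes x J_j'(x) + alpha J_j(x) = 0 with lam = x^2.
    eigs = []
    for j in range(j_max + 1):
        f = lambda x, j=j: x * jvp(j, x) + alpha * jv(j, x)
        xs = np.linspace(1e-6, x_max, n_grid)
        vals = f(xs)
        for a, b, va, vb in zip(xs[:-1], xs[1:], vals[:-1], vals[1:]):
            if va * vb < 0:
                lam = brentq(f, a, b) ** 2
                eigs += [lam] if j == 0 else [lam, lam]  # j >= 1 modes are double
    return sorted(eigs)

# print(disk_robin_eigenvalues(1.0)[:5])  # first few values of lambda_k(D, 1)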
For negative Robin parameters in the next theorem, we encounter more obstacles. The eigenvalue can turn negative, which reverses monotonicity and necessitates different techniques for the limiting values. In particular, a negative eigenvalue causes strange limiting behavior towards a seemingly unrelated unbounded Robin problem.
We define λ_k^ext(𝔻, α)
as the k^th Robin eigenvalue on the exterior of the Euclidean unit disk 𝔻. (The full definition of the PDE system is (<ref>). More information on exterior Robin problems can also be found in <cit.>.) We show that this exterior eigenvalue arises as a limit when the Robin eigenvalue is negative.
Fix n = 2, k ≥ 1, and α < 0 where α∈ [-m - 1, -m) and m is a non-negative integer.
* If k ≤ 2m + 1, then the eigenvalue function (<ref>) is negative and decreases strictly and continuously from 0 to -∞ as a function of Θ∈ (-∞, π). If k = 2m+2, 2m+3, and α = -m-1, then λ_2m+2 = λ_2m+3 = 0 for all Θ and (<ref>) is identically zero. Otherwise, λ_k > 0 for all Θ and Theorem <ref>(i) continues to hold. See Figure <ref>.
* If k ≤ 2m + 1, then the eigenvalue function (<ref>) is negative and increases strictly and continuously from -∞ to λ_k^ext(𝔻, α) as a function of Θ∈ (-∞, π). If k = 2m+2, 2m+3, and α = -m-1, then λ_2m+2 = λ_2m+3 = 0 for all Θ and (<ref>) is identically zero. Otherwise, λ_k > 0 for all Θ and Theorem <ref>(ii) continues to hold. See Figure <ref>.
The appearance of the exterior Robin eigenvalue problem is quite surprising, but we will provide some intuition when we prove the limiting values. Compared to the previous theorem, monotonicity and limits reverse sign when the eigenvalue is negative. Another fact that emerges from the theorem is that the normalized eigenvalues cannot change sign as a function of Θ: they remain negative or positive (or zero) for all Θ.
We will often make use of the following function definitions:
sn(Θ) =
sinh(|Θ|) for Θ < 0,
sin(Θ) for Θ≥ 0,
tn(Θ) =
tanh(|Θ|) for Θ < 0,
tan(Θ) for Θ≥ 0.
These definitions differ from <cit.> in using sinh (|Θ|) for Θ < 0 rather than sinh( Θ).
These functions make our statements more succinct, e.g., the first part of Theorem <ref> reads that λ_k(Θ, α/sn(Θ)) 4 tn^2(Θ/2) is an increasing function.
Using this notation, we provide graphs of the first spectral curve (k = 1) in terms of α and Θ. See Figures <ref>, <ref>, <ref> and <ref>. These plots help visualize Theorems <ref> and <ref>. In addition, Table <ref> gives an overview of the limiting values.
§.§ Higher-dimensional results
Higher dimensions pose completely different challenges to two dimensions. The Dirichlet integral is no longer conformally invariant and the proof techniques break down. In addition, previously natural choices of scaling factors in lower dimensions are no longer so in higher dimensions. The initial goal then is to determine which geometric quantities do suitably generalize to higher dimensions and scale the eigenvalue appropriately.
Table <ref> gives examples of natural quantities on 𝕊^n which could potentially work, along with their geometric interpretation. The quantity sin(Θ) is one such geometric factor that does not experience a monotonicity defect in higher dimensions, and it is this scaling quantity we focus on for n ≥ 3. Volume also appears to be a monotonic scaling factor by the numerics in the Dirichlet case, but we postpone discussing this quantity for a brief moment.
Let λ_k(Θ) represent the k^th Dirichlet eigenvalue of the Laplace–Beltrami operator on C(Θ). The scaling factor sn^2(Θ) is shown to produce monotonicity in higher dimensions, thus generalizing Langford and Laugesen's result to all dimensions, in the Dirichlet case. At Θ = 0, we define λ_k(𝔹^n) to be the k^th Dirichlet eigenvalue of the Euclidean ball 𝔹^n ⊂ℝ^n.
The function
Θ↦λ_k(Θ) sinh^2(Θ), Θ∈ (-∞,0) ,
λ_k(𝔹^n) , Θ=0 ,
λ_k(Θ) sin^2(Θ) , Θ∈ (0,π) ,
decreases strictly from ∞ to 0, for all k ≥ 1 and n ≥ 3. See Figure <ref>.
The result does not exclude other geometric scaling factors from working, nor does it state that other eigenvalues such as Neumann or Robin do not behave monotonically when scaled. Indeed, we will briefly discuss open problems below.
§.§ Open monotonicity problems
The statement analogous to Theorem <ref> for the Neumann spectrum is supported by numerical evidence. However, the proof of the theorem does not translate well to the Neumann case, and so we state the analog as a conjecture. Let μ_k(Θ) be the k^th Neumann eigenvalue on C(Θ).
The function
Θ↦μ_k(Θ) sinh^2(Θ), Θ∈ (-∞,0) ,
μ_k(𝔹^n) , Θ=0 ,
μ_k(Θ) sin^2(Θ) , Θ∈ (0,π/2) ,
decreases strictly from ∞ to 0, for all k ≥ 2 and n ≥ 3. See Figure <ref>.
The lower-dimensional case n = 2 of Conjecture <ref> was shown to hold by Langford and Laugesen in <cit.>. One could further conjecture that the Robin case satisfies a monotonicity result in higher dimensions, but the proof would encounter similar obstacles to the Neumann case.
For another open problem, we look at the scaling factor volume. Write n-dimensional volume of the geodesic ball of radius |Θ| as
V(Θ) := c_n∫_0^|Θ|sn(θ)^n-1 dθ,
where c_n is the appropriate normalizing factor, equal to the surface area of the Euclidean (n-1)-sphere 𝕊^n-1. Consider the scaled Dirichlet and Neumann eigenvalues
λ_k(Θ)V(Θ)^2/n, μ_k(Θ)V(Θ)^2/n,
with λ_k(Θ)V(Θ)^2/n = λ_k(𝔹^n)|𝔹^n|^2/n at Θ = 0 by convention. This scaling factor is somewhat peculiar since Figure <ref> numerically supports that λ_k(Θ)V(Θ)^2/n is always decreasing whereas Figure <ref> numerically supports that μ_k(Θ)V(Θ)^2/n is always increasing.
We look at the Dirichlet case first, for k =1,2. Observe that after a change of variable t = θ/Θ, we have for Θ∈ (0, π) that
V(Θ)/Θ^n = c_n ∫_0^1 (sin(Θ t)/Θ t)^ n-1 t^n-1 dt,
implying that V(Θ)^2/n/Θ^2 is a decreasing function since sin(x)/x is decreasing. (The Θ < 0 case is proven similarly.) One may also verify that the displayed ratio converges to |𝔹^n| as Θ→ 0. Since λ_1(Θ)Θ^2 decreases from ∞ to 0 by <cit.>, we see in all dimensions n ≥ 2 that
λ_1(Θ)V(Θ)^2/n
decreases strictly from ∞ to 0 for Θ∈ (-∞, π), passing through the value λ_1(𝔹^n)|𝔹^n|^2/n continuously at Θ = 0. Note that <cit.> also manifests as a consequence of a theorem by Cheng (see <cit.> and <cit.>.) Furthermore, for the second eigenvalue, from <cit.> we see for all n ≥ 2 that
λ_2(Θ)V(Θ)^2/n
decreases strictly from ∞ to λ_2(𝔹^n)|𝔹^n|^2/n for Θ∈ (-∞, 0]. However, we cannot make further deductions on the behavior of the function on the interval (0, π) using this theorem.
For the Neumann case, <cit.> implies in dimension n =2
μ_2(Θ)V(Θ)^2/n
increases strictly and continuously on (-∞, π) from 2π to 8π. Observe how the function converges to non-zero values, highlighting the peculiar behavior of this scaling factor.
We state monotonicity conjectures for these two functionals.
The function
λ_k(Θ) V(Θ)^2/n
decreases strictly for Θ∈ (-∞, π) for k ≥ 2 and n ≥ 2. See Figure <ref>.
The function
μ_k(Θ) V(Θ)^2/n
increases strictly for Θ∈ (-∞, π) for k ≥ 2 and n ≥ 2. See Figure <ref>.
(The Neumann case where both k = 2 and n =2 has already been proven; see Langford and Laugesen <cit.>.)
The limiting values of the functionals are not obvious. Certainly at the origin they should pass through λ_k(𝔹^n)|𝔹^n|^2/n and μ_k(𝔹^n)|𝔹^n|^2/n but the endpoint Θ = -∞ is not clear. The Neumann case k=2, n =2 has already been shown to converge to a non-zero limit as Θ→ -∞, and Figure <ref> supports that this phenomenon continues in higher dimensions. For the Dirichlet eigenvalues, since λ_1(Θ)V(Θ)^2/n→∞ as Θ→ -∞, it must also be the case that λ_k(Θ)V(Θ)^2/n→∞. For Θ→π, the Dirichlet and Neumann spectra should converge to the spectrum of the whole sphere; see <cit.> and <cit.>. More investigation and analysis is needed for this scaling factor.
§.§ Related results
Upper and lower bounds and some exact values are known for the Dirichlet eigenvalues in both hyperbolic and spherical spaces. For spherical caps in dimension n = 3, we have an exact formula for the first Dirichlet eigenvalue λ_1(Θ) = π^2/Θ^2 - 1, which one may verify is monotonically decreasing after multiplying by the scaling factor sin^2(Θ). From Borisov and Freitas <cit.>, we provide a two-sided estimate on the first eigenvalue, which we compute here for dimension n = 4 as
λ_1(𝔹^4)/Θ^2 - 2 ≤λ_1(Θ) ≤λ_1(𝔹^4)/Θ^2 - 3/4(1/sin^2(Θ) - 1/Θ^2) for Θ > 0,
λ_1(𝔹^4)/Θ^2 + 2 ≤λ_1(Θ) ≤λ_1(𝔹^4)/Θ^2 + 3/4(1/sinh^2(Θ) - 1/Θ^2) for Θ < 0.
Such bounds cannot prove monotonicity. However, they are sharper estimates than the bounds acquired from Theorem <ref>, which are
λ_1(Θ) ≤λ_1(𝔹^4)/sin^2(Θ) for Θ > 0,
λ_1(Θ) ≥λ_1(𝔹^4)/sinh^2(Θ) for Θ < 0.
There is other literature involving ratios of eigenvalues and Faber–Krahn type inequalities by Ashbaugh, Benguria, and Linde <cit.>. Estimates for higher Dirichlet eigenvalues can be found in a paper by Berge <cit.>.
§ TWO-DIMENSIONAL ROBIN MONOTONICITY — THEOREMS 1.1 AND 1.2
For this section, we will work exclusively in two dimensions and prove the monotonicity statements in Theorems <ref> and <ref>. The proofs of the limiting values are in the next section.
Let C(Θ) remain as defined above, either as a hyperbolic or spherical cap.
We consider the Robin eigenvalue problem
{ -Δ_C(Θ) u = λ u in C(Θ),
∂ u/∂ n + α u = 0 on ∂ C(Θ),
.
with discrete spectrum
λ_1(Θ, α) < λ_2(Θ, α) ≤λ_3(Θ, α) ≤…→∞.
The variational characterization of λ_k(Θ, α) is
λ_k(Θ, α) = min_ℒmax_0 ≠ u ∈ℒ∫_C(Θ) |∇ u|^2 dA + α∫_∂ C(Θ) |u|^2 dS/∫_C(Θ) |u|^2 dA
where ℒ varies over k-dimensional subspaces of H^1(C(Θ)), dA is the 2-dimensional area element, and dS is the appropriate 1-dimensional measure on the boundary. When α < 0, at least one negative eigenvalue exists, which impacts monotonicity calculations and proofs. We will separate then our monotonicity discussion into two cases: positive α parameters and negative α parameters.
§.§ Positive Robin parameter — Theorem 1.1
Fix α > 0. We perform the following conformal transformation to the plane under polar coordinates:
(θ, ξ) ↦
(tanθ/2, ξ) if Θ > 0,
(tanhθ/2, ξ) if Θ < 0.
This map is exactly stereographic projection in the spherical cap case, mapping the spherical cap C(Θ) to a disk in the plane.
The conformal map defined above transforms the Rayleigh quotient and eigenequation in
a systematic way. Define the weight functions
w_±(r) = 4/(1 ± r^2)^2.
Consider the eigenequation -Δ_C(Θ) u = λ_k(Θ, α) u. Letting v be the pushforward of u under the above map, the equation transforms as
-Δ_ℝ^2 v = λ_k(Θ, α) w_± v
where the weight is w_+ for positively curved spherical caps and w_- for negatively curved geodesic disks. One can also consult <cit.> for more detail on the transformations.
We define R := tan(Θ/2) (or tanh(|Θ|/2) in the hyperbolic case) for the radius of the disk where v is defined. The Rayleigh quotient for λ_k(Θ, α) transforms by conformal invariance of the Dirichlet integral to
∫_R𝔻 |∇ v(r, ϕ)|^2 dA + α R√(w_±(R))∫_0^2π v(R, ϕ)^2 dϕ/∫_R𝔻 v(r, ϕ)^2 w_±(r) dA
where dA = r dr dϕ.
The rescaling g(z) = v(Rz) further transforms the expression into
∫_𝔻 |∇ g(r, ϕ)|^2 dA + α R√(w_±(R))∫_0^2π g(1, ϕ)^2 dϕ/∫_𝔻 g(r, ϕ)^2 w_± (Rr) R^2 dA.
After normalizing α by the factor present in Theorem <ref>, and noting that sinΘ = R√(w_+(R)) and sinh |Θ| = R√(w_-(R)), the Rayleigh quotient for λ_k(Θ, α/sn(Θ)) becomes
∫_𝔻 |∇ g(r, ϕ)|^2 dA + α∫_0^2π g(1, ϕ)^2 d ϕ/∫_𝔻 g(r, ϕ)^2 w_± (Rr) R^2 dA.
As the numerator is independent of R and positive (recall α > 0), to show parts (i) and (ii) of the theorem we need only show monotonicity with respect to R in the denominator after multiplying by the appropriate scaling factor. The scaling factors transform as
sn^2(Θ) = 4R^2/(1± R^2)^2, 4 tn^2(Θ/2) = 4R^2,
where sn(Θ) and tn(Θ) were defined in the remark after Theorem <ref>.
Let S(R) be equal to the appropriate scaling factor.
The variational characterization for the scaled eigenvalue is thus
λ_k(Θ, α/sn(Θ)) S(R):= min_ℒmax_0 ≠ g ∈ℒ∫_𝔻 |∇ g(r, ϕ)|^2 dA + α∫_0^2π g(1, ϕ)^2 d ϕ/∫_𝔻 g(r, ϕ)^2 w_± (Rr)/S(R) R^2 dA
where ℒ varies over k-dimensional subspaces of H^1(𝔻). The crucial point is that the only dependence on R in the expression is the quantity
w_±(Rr)/S(R)R^2
in the denominator. Langford and Laugesen's proof of <cit.> analyzes this quantity and justifies the strict monotonicity statements in Theorem <ref>.
We still need to show the limiting values in the theorem are correct, but we will delay the proof until after analyzing the negative parameters.
§.§ Negative Robin parameter — Theorem 1.2
Negative parameters require extra care as the eigenvalue could be negative, which reverses the monotonicity results. For spherical caps and geodesic disks, the behavior of negative eigenvalues is fortunately quite tractable, as seen in the following lemma.
Let α < 0 and Θ≠ 0. If α∈ [-m-1, -m), then the first 2m+1 perimeter-normalized eigenvalues λ_k(Θ, α/sn(Θ)) are negative. The other eigenvalues are positive, except that when α = -m-1 there are two zero eigenvalues, λ_2m+2 = λ_2m+3 = 0. The analogous results hold when Θ = 0 for λ_k(𝔻, α).
The proof relies on the fundamental connection between Robin and Steklov eigenvalues, and so we will mostly focus on integer α. Consider the Steklov problem on the unit disk 𝔻, which is
{ -Δ_ℝ^2 g = 0 in 𝔻,
∂ g/∂ n = σ g on ∂𝔻.
.
The eigenfunctions and eigenvalues of this problem are well-understood (see <cit.> and <cit.>.) The eigenvalues repeated according to multiplicity are
σ∈{0, 1, 1, 2, 2, 3, 3, …}.
Suppose Θ > 0. We will transform the Steklov problem on the disk into a Steklov problem on a spherical cap by reversing the transformations performed in the preceding section. Letting R := tn(Θ/2), we perform the rescaling v(z) = g(z/R), so that v is defined on the disk R𝔻. Then, we take the pullback of v along the conformal map (<ref>) and call the pullback u(θ, ξ), whose domain is the spherical cap C(Θ). Under these conformal mappings, the Steklov problem on the disk transforms into the perimeter-normalized Steklov problem
{ -Δ_𝕊^2 u = 0 in C(Θ),
∂ u/∂ n + (α / sn(Θ)) u = 0 on ∂ C(Θ),
.
where α = -σ.
So, the perimeter-normalized Robin problem on a spherical cap or geodesic disk has a zero eigenvalue, λ(Θ, α/sn(Θ)) = 0, precisely when α is a non-positive integer. Recall that Robin eigenvalues are strictly increasing and continuous as a function of α. This fact along with the systematic form of the Steklov spectrum on the disk implies there are at least 2m+1 negative eigenvalues when α = -m-1. If there were more than 2m+1, then the Neumann spectrum (α = 0) would have a negative eigenvalue, which is impossible. It is easy to see λ_2m+1 < λ_2m+2 = λ_2m+3 = 0 when α = -m -1, as one simply uses the Steklov eigenfunctions as Robin ones. Furthermore, λ_2m+1 < 0 < λ_2m+2 when α∈ (-m-1, -m).
For Θ = 0, the argument is even simpler as one does not need to perform any conformal mappings and the result is immediate.
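The mechanism behind the lemma can also be checked symbolically: the Steklov eigenfunction r^j cos(jϕ) is harmonic and satisfies ∂u/∂n = j u on ∂𝔻, so it solves the Robin problem with parameter α = -j and eigenvalue zero. A quick SymPy verification (ours, for illustration only, with the mode j = 3 chosen arbitrarily):

import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
j = sp.Integer(3)                      # any non-negative integer mode works
u = r**j * sp.cos(j * phi)             # Steklov eigenfunction on the unit disk
laplacian = sp.diff(u, r, 2) + sp.diff(u, r) / r + sp.diff(u, phi, 2) / r**2
print(sp.simplify(laplacian))                                       # 0: u is harmonic
print(sp.simplify(sp.diff(u, r).subs(r, 1) - j * u.subs(r, 1)))     # 0: du/dn = j u at r = 1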
The graphs in <cit.> give a detailed picture of what is occurring with the Robin eigencurves on the disk. The perimeter normalization is crucial for us so that the number of negative eigenvalues is independent of Θ. That is, if λ_k(Θ, α/sn(Θ)) < 0 for some Θ, then it remains negative for all Θ values.
We can now show the monotonicity statements in Theorem <ref>.
All calculations resulting in (<ref>) are still valid, and Theorem <ref> continues to hold for positive eigenvalues. For negative eigenvalues, one notes that if we already know that λ_k(Θ, α/sn(Θ)) < 0, then we can modify the family of subspaces ℒ in (<ref>) to only be those for which the maximizer of the Rayleigh quotient is negative. Using the lemma above, the negativity of the eigenvalue is independent of Θ, and so we can use this modified variational characterization for each value of Θ.
Now, since the numerator is uniformly negative in (<ref>), the monotonicity results reverse direction, and we arrive at the desired statements in Theorem <ref>.
§ POSITIVE ROBIN LIMITING VALUES FOR THEOREM <REF>
We move forward to proving the limits for the scaled Robin eigenvalues. These proofs are long and involved, so we split them over three sections. A plethora of disparate techniques will be employed, with the negative eigenvalues in the next two sections requiring the most interesting and challenging ones. In this section, we prove the limits for positive eigenvalues in Theorems <ref>. Each functional has three limits that we need to compute: Θ→ -∞, Θ→ 0, and Θ→π.
The first and simplest case is when Θ→ 0. We want to show it converges to the Euclidean eigenvalue. Observe that in (<ref>), simple computations show the quantity w_±(Rr)R^2/S(R) → 1 uniformly as R → 0 and we recover the Euclidean Rayleigh quotient and eigenvalues.
There are quite a few cases to consider for the two other limiting values of Θ, so we will be doing casework. Before doing so, we state the following Lemma, a Robin generalization of <cit.> which states that certain weighted Euclidean eigenvalues have an infinite limit. The Laplacian in the following lemma is interpreted as the Euclidean Laplacian _^2.
(Pointwise vanishing of the weight)
Let Ω⊂^2 be a bounded, regular domain. Fix α∈. For each R > 0, let w_R: Ω→ (0,1] be a measurable function with ess inf w_R > 0. Let λ_k(w_R, α) be the k^th Robin eigenvalue of -w_R^-1. Then, there exists an index N independent of R such that λ_k(w_R, α) > 0 if and only if k ≥ N. Furthermore, if lim_R →∞ w_R = 0 a.e., then
lim_R →∞λ_k(w_R, α) = ∞ for each k ≥ N.
First, we show the existence of the index N. The variational characterization reads
λ_k(w_R, α) = min_ℒmax_0 ≠ f ∈ℒ∫_Ω |∇ f|^2 dA + α∫_∂Ω|f|^2 dS/∫_Ω |f|^2 w_R dA
where ℒ varies over k-dimensional subspaces of H^1(Ω), dA = r dr dϕ and dS is the appropriate 1-dimensional Hausdorff measure.
For each R, let N_R be the first index k such that λ_N_R(w_R,α) > 0. Let R_1, R_2 be arbitrary positive R-values. If N_R_1 = 1, then the spectrum is positive, and so α > 0 and N_R_1 = N_R_2 = 1. So, assume N_R_1≥ 2 and k < N_R_1. Then, λ_k(w_R_1, α) ≤ 0. By homogeneity of the Rayleigh quotient, we may assume that each trial function f in (<ref>) is L^2-normalized so that ∫_Ω |f|^2 dA = 1. Then, the denominator of the Rayleigh quotient is bounded above and below:
ess inf w_R_1≤∫ |f|^2 w_R_1 dA ≤ 1.
Since the denominator is bounded and λ_k(w_R_1, α) ≤ 0, the variational characterization (<ref>) implies there exists a k-dimensional subspace ℳ of H^1(Ω) such that
max_0 ≠ f ∈ℳ(∫_Ω | f|^2 dA + α∫_∂Ω|f|^2 dS) ≤ 0.
Using ℳ as a trial subspace in the variational characterization of λ_k(w_R_2, α) implies λ_k(w_R_2, α) ≤ 0. It follows that N_R_1≤ N_R_2, and N_R_1=N_R_2 by symmetry. Thus, the index N_R is indeed independent of R and we may write it simply as N.
We move on to the second statement in the lemma. Fix 0 < δ < 1 and define a new weight m_R := max (δ, w_R) so that m_R is uniformly bounded away from zero. The k^th Robin eigenvalue of the operator -m_R^-1 has variational characterization
λ_k(m_R, α) = min_ℒmax_0 ≠ f ∈ℒ∫_Ω | f|^2 dA + α∫_∂Ω|f|^2 dS/∫_Ω |f|^2 m_R dA.
Assume k ≥ N. Then, the numerator of the maximizer is positive, and by comparing Rayleigh quotients, we see that λ_k(w_R, α) ≥λ_k(m_R, α) for each R. Furthermore, since m_R is a function into (0,1] with ess inf m_R > 0, we see that λ_k(m_R, α) > 0 as well.
By the variational characterization, we see that
δ^-1λ_k(1, α) ≥λ_k(m_R, α) ≥λ_k(1, α)
where λ_k(1, α) is the k^th Robin eigenvalue of the unweighted Laplacian. We will show that lim inf_R →∞λ_k(m_R, α) ≥δ^-1λ_N(1, α) which implies λ_k(w_R, α) →∞ as R →∞.
For each R, take a k^th Robin eigenfunction f_R of -m_R^-1. The eigenfunction satisfies the weak equation
∫_Ω∇ϕ·∇ f_R dA + α∫_∂Ωϕ f_R dS= λ_k(m_R, α) ∫_Ωϕ f_R m_R dA
for all ϕ∈ H^1(Ω). We normalize the eigenfunction f_R in L^2(Ω) so that ∫_Ω |f_R|^2 = 1. Now take a sequence of R values tending to ∞. We see that
∫_Ω | f_R|^ 2 dA + α∫_∂Ω f_R^2 dS = λ_k(m_R, α) ∫_Ω f_R^2 m_R dA ≤δ^-1λ_k(1, α) ∫_Ω f_R^2 dA
where the inequality used (<ref>) and that m_R ≤ 1.
Applying the L^2 normalization of f_R, we derive an upper bound δ^-1λ_k(1, α) which is independent of R.
When α≥ 0, the above estimate gives us a bound on the H^1 norm of f_R that is independent of R. For α < 0, some more work has to be done. From the calculations done in <cit.>, justified by the proof of the trace theorem in Evans <cit.>, one deduces that
∫_Ω | f_R|^2 dA + α∫_∂Ω |f_R|^2 dA ≥1/2∫_Ω | f_R|^2 dA + Cα∫_Ω |f_R|^2 dA
for some constant C depending only on Ω. Since f_R is normalized in L^2, we again derive a uniform upper bound for the H^1 norm of f_R that is independent of R.
After applying the Rellich–Kondrachov theorem, we pass to a subsequence of f_R and yield a function f ∈ H^1(Ω) such that f_R ⇀ f weakly in H^1(Ω) and f_R → f strongly in L^2(Ω). Furthermore, by <cit.>, the trace operator T: H^1(Ω) → L^2(∂Ω) is compact, and since f_R ⇀ f weakly in H^1, Tf_R → T_f strongly in L^2(∂Ω).
We may further assume λ_k(m_R, α) converges to a finite limit as R →∞, after extracting a subsequence. As the weak Robin eigenequation (<ref>) is preserved under weak limits, the calculations in <cit.> still apply, and we deduce
-Δ f = (δlim_R →∞λ_k(m_R, α)) f
in the weak sense. (This step uses the fact that w_R → 0 a.e. to imply m_R →δ a.e.)
Therefore, f is an eigenfunction of the unweighted Robin Laplacian on Ω. The eigenvalue δlim_R →∞λ_k(m_R, α) must be positive by (<ref>) since k ≥ N. It follows that δlim inf_R →∞λ_k(m_R, α) ≥λ_N(1, α), concluding the proof.
§.§ Scaling factor 4 tn^2(Θ/2)
Now we prove the limits for Theorem <ref>(i). Suppose either α > 0 and k ≥ 1 or α∈ [-m-1, -m) and k ≥ 2m+2 (k ≥ 2m+4 if α = -m-1). By Lemma <ref>, we know λ_k(Θ, α / sn(Θ)) > 0.
First consider Θ→π. We apply the lemma above to the weighted eigenvalue λ_k(Θ, α/sinΘ)4tan^2(Θ/2). The variational characterization (<ref>) reads (recalling that 4tan^2(Θ/2) = 4R^2)
λ_k(Θ, α/sinΘ ) 4tan^2(Θ/2):= min_ℒmax_0 ≠ g ∈ℒ∫_𝔻 |∇ g(r, ϕ)|^2 dA + α∫_0^2π g(1, ϕ)^2 d ϕ/∫_𝔻 g(r, ϕ)^2 (1+R^2r^2)^-2 dA.
The weights w_R := (1+R^2r^2)^-2 satisfy the hypotheses of Lemma <ref>, and it follows that λ_k(Θ, α/sinΘ)4tan^2(Θ/2) →∞ as R →∞, that is, as Θ→π.
Let Θ→ -∞. A modified proof of the limiting case R → 1 and Ω = in <cit.> will be applied. Refer there for more details on the following proof. Choose h ∈ C^1([0,1]) such that h is increasing with h = 0 on [0, 1/4] and h = 1 on [3/4, 1]. Take the subspace spanned by the H^1 functions h(r) cos jϕ, 1 ≤ j ≤ k, where (r, ϕ) are the polar coordinates. By <cit.>, these functions are L^2-orthogonal to the constant and to each other, with respect to the weight (1-R^2r^2)^-2. Further recall {cos(jϕ)} is orthogonal within L^2([0, 2π]). Using this subspace as a trial function space in (<ref>), and by homogeneity of the Rayleigh quotient, we deduce the upper bound (recalling that tanh(|Θ|/2) = R)
0 ≤λ_k(Θ, α/ sinh |Θ|) 4tanh^2(Θ/2)
≤max_|c| = 1∑_j = 1^k c_j^2 ∫_ | (h(r)cos jϕ)|^2 dA + α∑_j = 1^k c_j^2 ∫_0^2 π |cos j ϕ|^2 d ϕ/∑_j = 1^k c_j^2 ∫_ |h(r) cos j ϕ|^2 (1-R^2r^2)^-2 dA
where c = (c_1, …, c_j) is the coefficient vector.
As R → 1, each integral in the denominator tends to ∞, and the numerator is independent of R. It follows that λ_k(Θ, α/ sinh |Θ|) 4tanh^2(Θ/2) → 0 when R → 1 (i.e. when Θ→ -∞).
§.§ Scaling factor sn^2(Θ)
Now consider the limits for Theorem <ref>(ii). Let Θ→π. The inequality 0 ≤λ_k(Θ, α / sinΘ)sin^2(Θ) ≤λ_k(Θ) sin^2(Θ), where λ_k(Θ) is the k^th Dirichlet eigenvalue, always holds for the positive eigenvalues. By <cit.>, λ_k(Θ) sin^2(Θ) → 0 and thus so does λ_k(Θ, α / sinΘ)sin^2(Θ).
Let Θ→ -∞. We will apply Lemma <ref> again. The variational characterization of the weighted eigenvalue λ_k(Θ, α/ sinh |Θ|)sinh^2(Θ) (recalling that sinh |Θ| = 2R/(1-R^2)) is
λ_k(Θ, α/sinh|Θ|) sinh^2(Θ):= min_ℒmax_0 ≠ g ∈ℒ∫_𝔻 |∇ g(r, ϕ)|^2 dA + α∫_0^2π g(1, ϕ)^2 d ϕ/∫_𝔻 g(r, ϕ)^2 (1-R^2/1-R^2r^2)^2 dA.
Taking Θ→ -∞ is equivalent to taking R → 1, and one sees the weights (1-R^2/1-R^2r^2)^2 satisfy the hypotheses of Lemma <ref>. (Although the lemma is stated for R →∞, a simple relabeling lets it work for R → 1.) We conclude that λ_k(Θ, α/ sinh |Θ|)sinh^2(Θ) →∞.
§ NEGATIVE EIGENVALUE LIMITS FOR THEOREM <REF>
The negative eigenvalues are more delicate and require a thorough analysis. We need some stronger tools than those used in the previous section.
Dirichlet eigenvalues satisfy domain monotonicity, where shrinking the domain causes the eigenvalues to increase. Furthermore, for weighted Laplacian operators with positive eigenvalues, decreasing the weight also increases the eigenvalues. For negative Robin eigenvalues, these phenomena tend to reverse. That is, the Robin eigenvalues have a tendency to decrease (become more negative) for these processes, letting us derive tractable lower bounds on the eigenvalues.
To be precise, we consider the weighted eigenvalue problem
{ -Δ u = η w u in Ω,
∂ u/∂ n + α u = 0 on Σ,
∂ u/∂ n = 0 on ∂Ω∖Σ,
.
where Ω is a bounded domain with piecewise smooth boundary, Σ⊂∂Ω is a piecewise smooth, non-empty portion of the boundary, and w is some smooth positive weight function on Ω. Then, η_k(Ω, Σ, α, w) := η_k forms a discrete spectrum
η_1 < η_2 ≤η_3 ≤⋯
(In the case Σ = ∂Ω, we recover a standard weighted Robin problem.) We will typically suppress notation when it is clear from context. If α and w are clear from context, or are unchanging, then we usually denote the eigenvalues as η_k(Ω, Σ). If the Robin boundary portions are clear from context, or are unchanging, then we usually write η_k(Ω, w).
The weighted Robin-Neumann eigenvalue problem allows us to control negative eigenvalues from below by shrinking the domain or decreasing the weight:
Fix the Robin parameter α < 0, and let k ≥ 1.
* (Domain monotonicity.) Let Ω_1, Ω_2 ⊂^n be bounded domains with piecewise smooth boundaries. Suppose Σ_1 ⊂∂Ω_1 and Σ_2 ⊂∂Ω_2 are piecewise smooth, non-empty portions of the boundaries and w is positive and smooth on Ω. Assume Ω_1 ⊃Ω_2 and Σ_1 ⊂Σ_2. If η_k(Ω_1, Σ_1) < 0, then
η_k(Ω_2, Σ_2) ≤η_k(Ω_1, Σ_1) < 0.
* (Weight monotonicity.) Let Ω⊂^n be a bounded domain with piecewise smooth boundary. Suppose w_1, w_2 are two positive smooth weight functions on Ω. Assume w_1 ≥ w_2. If η_k(Ω, w_1) < 0, then
η_k(Ω, w_2) ≤η_k(Ω, w_1) < 0.
We will focus on (i) first.
The variational characterization of (<ref>) reads
η_k(Ω, Σ) = min_ℒmax_0 ≠ f ∈ℒ∫_Ω |∇ f|^2 dA + α∫_Σ |f|^2 dS/∫_Ω |f|^2 w dA
where ℒ varies over k-dimensional subspaces of H^1(Ω). Suppose η_k(Ω_1, Σ_1) < 0. We allow ℒ to vary only over k-dimensional subspaces of H^1(Ω_1) such that the quantity
∫_Ω_1 | f|^2 dA + α∫_Σ_1 |f|^2 dS
is negative for all f ∈ℒ∖{0}. These subspaces must exist since the eigenvalue is negative. Call the class of such subspaces 𝔑. It follows immediately that if Ω_1 ⊃Ω_2 and Σ_1 ⊂Σ_2 then
η_k(Ω_1, Σ_1) =
min_ℒ∈𝔑max_0 ≠ f ∈ℒ∫_Ω_1 | f|^2 dA + α∫_Σ_1 |f|^2 dS/∫_Ω_1 |f|^2 w dA
≥min_ℒ∈𝔑max_0 ≠ f ∈ℒ∫_Ω_2 | f|^2 dA + α∫_Σ_2 |f|^2 dS/∫_Ω_2 |f|^2 w dA≥η_k(Ω_2, Σ_2),
since the denominator is being made less positive and the numerator more negative.
The proof of (ii) is similar: one restricts to negative numerators, and then the integrals decrease as the weight is decreased.
The above lemma gives a tractable lower bound on negative Robin eigenvalues (perhaps with a weight) by taking Σ_1 = Σ_2 = ∂Ω_2 and then comparing Robin eigenvalues of the disk Ω_1 with Robin-Neumann eigenvalues of an annulus Ω_2 as in Figure <ref>.
§.§ Scaling factor 4 tn^2(Θ/2)
Let Θ→π. We will show λ_k(Θ, α/sinΘ)4tan^2(Θ/2) → -∞. Since the eigenvalue is negative for all Θ by Lemma <ref>, it is negative at Θ = 0 with eigenfunction span {f_1, …, f_k}. Computing (<ref>), we have the upper bound
λ_k(Θ, α/sinΘ)4tan^2(Θ/2) ≤max_f ∈span{f_1, …, f_k}∫_𝔻 |∇ f|^2 dA + α∫_∂𝔻 f^2 dS/∫_𝔻 |f|^2 (1+R^2r^2)^-2 dA.
The numerator is negative for all linear combinations f by negativity of the eigenvalue at R = 0. Additionally, the only dependency of this expression on R is in the denominator, and tending R →∞ (i.e., Θ→π) causes the denominator to decrease to 0 for all r > 0. Thus, the left-hand side must go to -∞.
Now let Θ→ -∞. To prove λ_k(Θ, α/sinh|Θ|)4tanh^2(Θ/2) → 0, it is sufficient to show this is the case for the first eigenvalue. Since the Robin ground state is radial on the geodesic disk, the variational characterization (<ref>) can be written (recalling R = tanh(|Θ|/2))
λ_1(Θ, α/sinh|Θ|)4tanh^2(Θ/2) = min_g ∈ C^1([0,1])∫_0^1 g'(r)^2 r dr + α g(1)^2/∫_0^1 g(r)^2 (1-R^2r^2)^-2r dr
where g varies over the space of C^1 functions on [0,1].
We may assume the numerator is negative since the eigenvalue is. From Lemma <ref>, we may shrink the domain from [0,1] to [R,1] and decrease the variational characterization. We thus see that
λ_1(Θ, α/sinh|Θ|)4tanh^2(Θ/2) ≥min_g ∈ C^1([R,1])∫_R^1 g'(r)^2 r dr + α g(1)^2/∫_R^1 g(r)^2 (1-R^2r^2)^-2r dr.
By making the numerator more negative and the denominator less positive, we further deduce
∫_R^1 g'(r)^2 r dr + α g(1)^2/∫_R^1 g(r)^2 (1-R^2r^2)^-2r dr≥∫_R^1 g'(r)^2 R dr + α g(1)^2/∫_R^1 g(r)^2 (1-R^4)^-2R dr,
where we used that r ≥ R and (1-R^2r^2)^-2≥ (1-R^4)^-2 for r ∈ [R,1].
After performing the linear rescaling r ↦r-1/R-1 and doing some algebra on the R-terms, we conclude that
λ_1(Θ, α/sinh|Θ|)4tanh^2(Θ/2) ≥(1-R^4)^2/(1-R)^2 min_g ∈ C^1([0,1])∫_0^1 g'(r)^2 dr + (1-R)/Rα g(0)^2/∫_0^1 g(r)^2 dr.
The variational characterization on the right-hand side represents the lowest eigenvalue of a mixed Robin-Neumann problem on the interval [0,1] with Robin parameter (1-R)α/R. By letting R → 1 (which is Θ→ -∞), the parameter tends to 0 and the eigenvalue converges to the lowest pure Neumann eigenvalue, which is 0. (One can justify that this limit is accurate by bounding below by a pure Robin problem, or by formally letting R → 1 in the Rayleigh quotient.) Since (1-R^4)^2/(1-R)^2→ 16, we conclude that λ_1(Θ, α/sinh |Θ|)4tanh^2(Θ/2) → 0 and the claim follows.
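The limiting behavior of the one-dimensional Robin–Neumann problem appearing in this bound can also be checked numerically. The crude finite-difference sketch below (ours, not from the paper) computes the lowest eigenvalue of -g'' = ρ g on (0,1) with g'(0) = β g(0) and g'(1) = 0, where β = (1-R)α/R; as R → 1 the eigenvalue indeed tends to the lowest Neumann eigenvalue 0.

import numpy as np

def lowest_robin_neumann_eig(beta, N=500):
    # -g'' = rho g on (0,1), g'(0) = beta g(0), g'(1) = 0; second-order finite
    # differences with ghost nodes eliminated through the boundary conditions.
    h = 1.0 / N
    A = np.zeros((N + 1, N + 1))
    for i in range(1, N):
        A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0
    A[0, 0], A[0, 1] = 2.0 + 2.0 * h * beta, -2.0
    A[N, N - 1], A[N, N] = -2.0, 2.0
    A /= h ** 2
    return min(np.linalg.eigvals(A).real)

# alpha = -1.0
# for R in (0.9, 0.99, 0.999):
#     print(R, lowest_robin_neumann_eig((1 - R) * alpha / R))  # tends to 0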
§.§ Scaling factor sn^2(Θ)
The case Θ→π is long and involved, and we place it in the next section
Let Θ→ -∞. To see that λ_k(Θ, α/sinh|Θ|)sinh^2(Θ) → -∞, let f_1, …, f_k be the first k Robin eigenfunctions for the Euclidean unit disk (Θ = 0). By (<ref>), we have the upper bound (recalling sinh|Θ| = 2R/(1-R^2))
λ_k(Θ, α/sinh|Θ|)sinh^2(Θ)≤max_|c| = 1 ∑_j=1^k c_j^2( ∫_𝔻 |∇ f_j|^2 dA + α∫_∂𝔻 |f_j|^2 dS)/∑_j=1^k c_j^2 ∫_𝔻 |f_j|^2 ( 1-R^2/1-R^2r^2)^ 2 dA,
where c = (c_1, …, c_k) is the coefficient vector. The numerator is independent of R and negative, so we wish to show the quantities ∫_ |f_j|^2 ( 1-R^2/1-R^2r^2)^ 2 dA tend towards 0 as R → 1. Note that the weight is bounded above by 1 so that each |f_j|^2 (1-R^2/1-R^2r^2)^2 is bounded above by the integrable function |f_j|^2, and the weights tend pointwise to 0 as R→ 1. The claim follows from the dominated convergence theorem.
§ EXOTIC LIMIT TOWARDS EXTERIOR EIGENVALUE
Consider the scaled spherical eigenvalue λ_k(Θ, α/sinΘ)sin^2(Θ) and let Θ→π. In this case, we are claiming the scaled eigenvalue approaches the eigenvalue for a Robin exterior problem. Namely, we define the eigenvalue problem
{ -Δ u = λ_k^ext(𝔻, α)u in ℝ^2 ∖𝔻,
∂_ν u - α u = 0 on ∂𝔻,
.
where ∂_ν is the inner normal derivative, i.e., pointing into ℝ^2 ∖𝔻. We call λ_k^ext(𝔻, α) the k^th exterior Robin eigenvalue, and we claim for negative eigenvalues that
λ_k(Θ, α/sinΘ)sin^2(Θ) →λ_k^ext(𝔻, α)
as Θ→π.
At first glance, the eigenvalue problem does not seem well-posed since the embedding H^1(ℝ^2∖𝔻) into L^2(ℝ^2 ∖𝔻) is not compact. Indeed, the problem has essential spectrum equal to [0, ∞) by <cit.>. The negative eigenvalues, however, are discrete and satisfy a variational characterization by <cit.>. If there are k negative eigenvalues, then
λ_k^ext(𝔻, α) = min_ℒmax_0 ≠ f ∈ℒ∫_ℝ^2 ∖𝔻 |∇ f|^2 dA + α∫_∂𝔻 |f|^2 dS/∫_ℝ^2 ∖𝔻 |f|^2 dA
where ℒ varies over k-dimensional subspaces of H^1(ℝ^2∖𝔻).
Consider the variational characterization (<ref>) of the spherical eigenvalue with scaling factor sin^2(Θ) = 4R^2/(1+R^2)^2. After one performs the conformal map z ↦ 1/z on the Rayleigh quotient in (<ref>), the variational characterization reads
λ_k(Θ, α/sinΘ)sin^2(Θ)= min_ℒmax_0 ≠ f ∈ℒ∫_ℝ^2 ∖𝔻 |∇ f|^2 dA + α∫_∂𝔻 |f|^2 dS/∫_ℝ^2 ∖𝔻 |f|^2 ( 1+R^2/r^2+R^2)^ 2 dA
where ℒ now varies over k-dimensional subspaces of the function space
H_ext(ℝ^2 ∖𝔻) := { f : ∫_ℝ^2 ∖𝔻 |f|^2 |z|^-4 dA + ∫_ℝ^2 ∖𝔻 |∇ f|^2 dA < ∞}.
The function space H_ext(ℝ^2 ∖𝔻) is not the same as H^1(ℝ^2 ∖𝔻). Note that H^1(ℝ^2 ∖𝔻) ⊂ H_ext(ℝ^2 ∖𝔻).
Formally, if one takes the variational characterization (<ref>) and lets R →∞, without regard for interchanging limits or different function spaces, the variational characterization (<ref>) is recovered. Of course, we do care about interchanging limits and different function spaces, and so we must proceed quite delicately.
There are quite a few steps to the proof, and so we give a brief overview of the attack plan:
* Show that the number of discrete negative eigenvalues is the same for both problems, so that the convergence makes sense.
* Bound the spherical eigenvalues above by the exterior eigenvalues, and below by the eigenvalues of a problem related to the exterior problem, namely a Robin-Neumann problem on a large annulus.
* Justify that separation of variables applies to the exterior and annular eigenvalues, and reduce the problem to convergence of their respective radial modes.
* Prove the radial modes of the annular eigenvalues converge to the radial modes of the exterior eigenvalues as the outer radius of the annulus goes to ∞, by analyzing the implicit eigenvalue equations.
Each step is proven in their respectively numbered subsections: <ref>, <ref>, <ref>, <ref>.
§.§ Step (<ref>): counting negative eigenvalues
We first show that the dimension of the negative eigenspace is equal for both the exterior problem and the spherical cap problem, and thus there is a variational characterization to converge to as R →∞. We computed the dimension of the negative eigenspace for the spherical cap previously in Lemma <ref>. The following lemma shows the dimension remains the same for the exterior problem.
If α∈ [-m-1, -m) where m is a non-negative integer, then there exist 2m+1 negative discrete exterior eigenvalues λ_k^ext(𝔻, α), k = 1, …, 2m+1.
Our proof will be a modified version of <cit.>. Denote the exterior Laplacian with parameter α as -Δ_α^ext. By <cit.>, -Δ_α^ext is diagonalized into an orthogonal sum of fiber operators 𝖧_α, j acting on L^2((1, ∞), r dr ) for each j ≥ 0. Each 𝖧_α, j is of the form
𝖧_α, j f:= -f”(r) - f'(r)/r + j^2f(r)/r^2,
dom 𝖧_α, j = {f : f, f” + f'/r∈ L^2((1,∞), r dr), f'(1) = α f(1)}.
Let λ_1(𝖧_α, j) be the infimum of the spectrum of 𝖧_α, j. By <cit.>, there exists a constant j_*(α) such that λ_1(𝖧_α, j) is negative and a discrete eigenvalue of -Δ_α^ext if and only if 0 ≤ j ≤ j_*(α). Furthermore, by <cit.>, each 𝖧_α, j has at most one negative eigenvalue, λ_1(𝖧_α, 0) is a simple eigenvalue of -Δ_α^ext, and λ_1(𝖧_α, j) is a double eigenvalue for 1 ≤ j ≤ j_*(α). The exterior problem thus has exactly 2j_*(α) + 1 negative eigenvalues and so our goal is to show j_*(α) = m.
If α < 0, the two eigenvalues λ_1^ext(, α) and λ_1(Θ, α/ sinΘ) are both simple and negative. Our goal is then to detect when the exterior problem begins to accrue further negative eigenvalues. Let j ≥ 1. Modifying <cit.>, the operator 𝖧_α, j has a negative eigenvalue if and only if the system
{ -f”(r) - f'(r)/r + j^2 f(r)/r^2 = σ f(r) in (1, ∞),
f'(1) = α f(1),
.
has a non-trivial eigenfunction in L^2((1, ∞), r dr) with eigenvalue σ < 0. Using <cit.> and the fact an eigenfunction of the above system belongs to L^2((1,∞), r dr), we obtain the solution
f(r) = K_j(μ r)
where K_j is a modified Bessel function of the second kind and μ := √(-σ). Thus, σ is a discrete negative eigenvalue of 𝖧_α, j if and only if μ satisfies the Robin boundary condition at r = 1:
μ K_j'(μ) = α K_j(μ).
By using the recurrence K_j'(z) = -K_j-1(z) - j/zK_j(z) <cit.>, we may rewrite the equation as
-μ K_j-1(μ)/K_j(μ) = α + j.
Define
h_j(x) := K_j-1(x)/K_j(x)
so that (<ref>) reads -xh_j(x) = α + j. By <cit.> and <cit.> we have the bounds
x/j - 1/2 + √((j- 1/2)^2 + x^2) < h_j(x) ≤ 1
for x > 0, j ≥ 1. It follows that x h_j(x) tends to 0 as x → 0 and tends to ∞ as x →∞. We want to show that xh_j(x) is strictly increasing so that the solution of (<ref>) is unique. From <cit.>, we see h_j(x) is strictly increasing in a neighborhood of the origin. The proof of <cit.> shows h_j'(x) > 0 for all x > 0. It follows that xh_j(x) is strictly increasing.
We conclude (<ref>) has a unique positive solution μ if and only if α + j < 0. Since α∈ [-m-1, -m), the condition α + j < 0 means j ≤ m and so j_*(α) = m. Hence, the dimension of the negative eigenvalues on the spherical cap and on the exterior of the unit disk are equal in the case α∈ [-m-1, -m).
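The implicit equation derived in this proof also gives a simple way to compute the negative exterior eigenvalues themselves: for each admissible mode j one finds the root μ_j of x K_j'(x) - α K_j(x) and sets λ^ext = -μ_j^2. A minimal SciPy sketch (ours, for illustration):

import numpy as np
from scipy.special import kv, kvp
from scipy.optimize import brentq

def exterior_negative_eigenvalues(alpha, x_max=50.0, n_grid=5000):
    # For each j with alpha + j < 0, solve x K_j'(x) - alpha K_j(x) = 0 on (0, x_max);
    # the eigenvalue -mu_j^2 is simple for j = 0 and double for j >= 1.
    eigs, j = [], 0
    while alpha + j < 0:
        A = lambda x, j=j: x * kvp(j, x) - alpha * kv(j, x)
        xs = np.linspace(1e-6, x_max, n_grid)
        vals = A(xs)
        for a, b, va, vb in zip(xs[:-1], xs[1:], vals[:-1], vals[1:]):
            if np.isfinite(va) and np.isfinite(vb) and va * vb < 0:
                mu = brentq(A, a, b)
                eigs += [-mu**2] if j == 0 else [-mu**2, -mu**2]
                break
        j += 1
    return sorted(eigs)

# print(exterior_negative_eigenvalues(-1.5))  # alpha in [-2, -1): 2m + 1 = 3 negative eigenvalues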
We have managed to show step (<ref>), justifying that the spherical eigenvalues can converge to the exterior ones. Now, we must show that they do converge. We start with some upper and lower bounds.
§.§ Step (<ref>): upper and lower bounds.
Henceforth, we assume k ≤ 2m+1 so that λ_k(Θ, α) and λ_k^ext(𝔻, α) are both negative. Perform the conformal map z ↦ 1/z on (<ref>) to yield the characterization (<ref>) of λ_k(Θ, α/sinΘ)sin^2(Θ). Observe in (<ref>) that the weight in the denominator is bounded by 1, and we may assume the numerator is negative. Furthermore, we may restrict ℒ to vary over k-dimensional subspaces of H^1(ℝ^2 ∖𝔻) rather than H_ext(ℝ^2 ∖𝔻) (defined in (<ref>)), in light of the inclusion H^1(ℝ^2 ∖𝔻) ⊂ H_ext(ℝ^2 ∖𝔻). Hence, we obtain the upper bound
lim_Θ→πλ_k(Θ, α/sinΘ)sin^2(Θ) ≤λ_k^ext(𝔻, α) < 0.
We cannot directly bound below by the exterior eigenvalues nor take the limit of (<ref>) as R →∞. Instead, we will chop off the unbounded portion of the domain and consider a large annulus with a Robin boundary on the inner radius and Neumann boundary on the outer annulus. We are able to bound the scaled spherical eigenvalue below by these annular eigenvalues.
Define A(M) as the annulus of inner radius 1 (the Robin portion) and outer radius M (the Neumann portion), and define w as the weight (1+R^2/r^2+R^2)^ 2.
By reworking Lemma <ref> slightly for unbounded domains and different function spaces, we see that
λ_k(Θ, α/sinΘ)sin^2(Θ) = min_ℒmax_0 ≠ f ∈ℒ∫_ℝ^2 ∖𝔻 |∇ f|^2 dA + α∫_∂𝔻 |f|^2 dS/∫_ℝ^2 ∖𝔻 |f|^2 ( 1+R^2/r^2+R^2)^ 2 dA
≥min_ℒmax_0 ≠ f ∈ℒ∫_A(M) |∇ f|^2 dA + α∫_∂𝔻 |f|^2 dS/∫_A(M) |f|^2 ( 1+R^2/r^2+R^2)^ 2 dA
where ℒ varies over k-dimensional subspaces of H_ext(ℝ^2 ∖𝔻). The key point here is that if f is in H_ext(ℝ^2 ∖𝔻), the restriction of the function to A(M) must be in L^2(A(M)) since |z|^4 is a bounded function on the annulus. That is, when we restrict the function space to exist on A(M), we can revert to the standard H^1 space.
We thus generate the lower bound
λ_k(Θ, α/ sinΘ) sin^2(Θ) ≥η_k(A(M), ∂𝔻, α, w).
Since the weight w converges uniformly to 1 on the annulus, taking R →∞ yields the bound
lim_Θ→πλ_k(Θ, α/sinΘ) sin^2(Θ) ≥η_k(A(M), ∂𝔻, α, 1).
For the right-hand side, we will suppress notation and denote the unweighted Robin-Neumann eigenvalue on the annulus as η_k(M, α). Take care to note that the inner boundary at r = 1 carries the Robin boundary condition for this eigenvalue problem. Again, using Lemma <ref>, this eigenvalue is an increasing function in M. By combining (<ref>) and (<ref>), we see the limit lim_M →∞η_k(M, α) is a number η_k^*(α) < 0. We will show η_k^*(α) = λ_k^ext(𝔻, α).
Step (<ref>) is now complete. To analyze further, we will look at the implicit equation that characterizes the exterior eigenvalues and the one that characterizes the annular eigenvalues. In order to do this, we will separate variables and reduce our analysis to the radial modes. The radial modes satisfy certain well-behaved implicit equations characterizing the eigenvalues completely, and we will show uniform convergence of these equations.
§.§ Step (<ref>): reduction to the radial modes
First, we state properties of the exterior eigenvalues. Observe the domain ℝ^2 ∖𝔻 is radially symmetric. Then, referring to Subsection <ref>, we separate variables into polar coordinates and find the angular components of the modes are cos(jϕ), sin(jϕ) and the radial components are found by solving the modified Bessel ODE system
{ -f”(r) - f'(r)/r + j^2 f(r)/r^2 = σ f(r) in (1, ∞),
f'(1) = α f(1).
.
For α∈ [-m-1, -m), the system (<ref>) has a unique eigenvalue σ_j(α) < 0 for each 0 ≤ j ≤ m from the proof of Lemma <ref>, and by <cit.>, we have that σ_k(α) < σ_j(α) < 0 if 0 ≤ k < j ≤ m. The eigenvalue σ_j(α) has associated eigenfunction
K_j(μ_j(α) r)
where μ_j(α) := √(-σ_j(α)). So, separation of variables is quite quickly seen to be justified in the exterior case.
For annular eigenvalues, there is more work to be done, since less literature exists on this subject. Freitas and Krejčiřík provide a treatment of the first Robin eigenvalue with negative parameter on disks and annuli in <cit.>. Here, we will be considering all negative eigenvalues, including the first.
Rotational symmetry of the annulus again allows us to separate variables into polar coordinates, for the mixed Robin-Neumann problem. The angular components are of the form cos(j ϕ) and sin (j ϕ) while the radial components are found by solving the modified Bessel ODE system
{ -f”(r) - f'(r)/r + j^2 f(r)/r^2 = ρ f(r) in (1, M),
f'(1) = α f(1),
f'(M) = 0.
.
We need to prove for each j = 1, …, m that the system possesses a unique eigenvalue ρ_j(M, α) < 0 satisfying the eigenfunction equation and boundary conditions. Furthermore, we also want to prove that increasing j increases the eigenvalue ρ_j(M,α). After proving these properties, we can justify separation of variables in the annular case.
Let α∈ [-m-1, -m) where m is a non-negative integer. If 0 ≤ j ≤ m, then there exists a unique eigenvalue ρ_j(M,α) < 0 of system (<ref>). Furthermore, if 0 ≤ k < j ≤ m, then ρ_k(M,α) < ρ_j(M, α) < 0.
System (<ref>) induces a discrete spectrum of eigenvalues
ρ_j,1(M, α) < ρ_j,2(M,α) ≤ρ_j,3(M,α) ≤⋯→∞
and we claim ρ_j,1(M,α) < 0 while ρ_j,2(M,α) ≥ 0.
The variational characterization for the lowest eigenvalue reads
ρ_j,1(M,α) = min_f ≠ 0∫_1^M (f'(r)^2 + j^2/r^2f(r)^2) r dr + α f(1)^2/∫_1^M f(r)^2r dr
where f varies over H^1([1,M], r dr). Furthermore, also consider the variational characterization for the lowest eigenvalue σ_j(α) of System (<ref>):
σ_j(α) = min_f ≠ 0∫_1^∞(f'(r)^2 + j^2/r^2f(r)^2) r dr + α f(1)^2/∫_1^∞ f(r)^2r dr
where f varies over H^1([1, ∞], r dr).
As α∈ [-m-1, -m) by assumption, the eigenvalue σ_j(α) is negative from the proof of Lemma <ref>, so we may restrict its variational characterization to functions f such that the numerator is negative. By reducing the domain of integration in the numerator and denominator (such as in Lemma <ref>), we see immediately that
ρ_j,1(M, α) ≤σ_j(α) < 0.
We show non-negativity for the rest of the spectrum. Let f_1 and f_2 be eigenfunctions for ρ_j,1(M,α) and ρ_j,2(M,α) respectively. Since f_1 cannot change sign on [1,M] and f_2 is orthogonal to f_1 in L^2([1, M], r dr), we see that f_2 must change sign at some point x^* ∈ (1,M). Then, the restriction of f_2 to the interval [x^*, M] is an eigenfunction of the system
{ -f”(r) - f'(r)/r + j^2 f(r)/r^2 = ρ f(r) in (x^*, M),
f(x^*) = 0,
f'(M) = 0.
.
The system above represents an eigenvalue problem for a mixed Dirichlet-Neumann operator on the interval [x^*, M]. The variational characterization for this system implies the operator's eigenvalues must be non-negative, and it follows that ρ_j,2(M,α) ≥ 0. Hence, ρ_j,1(M, α) is the unique negative eigenvalue of (<ref>), and we denote it ρ_j(M, α).
By comparing the variational characterizations (<ref>), we see ρ_k(M, α) < ρ_j(M,α) if k < j and the claim follows.
Note that Lemma <ref> is not stating there are exactly 2m+1 negative eigenvalues, rather that there are at least 2m+1 negative eigenvalues.
For the exterior and annular radial modes, we see that increasing the parameter j increases both eigenvalues σ_j(α) and ρ_j(M, α). Furthermore, for j ≥ 1, each of the eigenvalues is repeated twice in the unseparated form of their respective PDE systems due to the existence of the two angular components cos(jϕ) and sin(jϕ). These facts, along with σ_j(α) and ρ_j(M,α) being unique negative eigenvalues for the separated Bessel ODEs, imply that
σ_j(α) = λ_2j^ext(𝔻, α) = λ_2j+1^ext(𝔻, α) ,
ρ_j(M, α) = η_2j (M, α) = η_2j+1(M, α),
for 1 ≤ j ≤ m, where we recall λ_k^ext(𝔻, α) is the k^th exterior eigenvalue and η_k(M,α) is the k^th Robin-Neumann eigenvalue on the annulus A(M). For j = 0, the exterior and annular eigenvalues are simple and radial so that they coincide with their radial modes. Thus, proving lim_M→∞η_k(M, α) = λ_k^ext(𝔻, α), 1 ≤ k ≤ 2m+1, is equivalent to proving the radial modes converge:
lim_M →∞ρ_j(M, α) = σ_j(α), 0 ≤ j ≤ m.
We have successfully completed step (<ref>) of the proof. We will finish the final step of the proof by showing the implicit equations corresponding to the radial eigenvalues converge uniformly.
§.§ Step (<ref>): convergence of the radial modes
By Subsection <ref>, μ_j(α) = √(-σ_j(α)) is the unique root in (0, ∞) of the implicit function
A(x, α) := x K_j'(x) - α K_j(x).
Similarly to how Equation (13) is derived in <cit.>, we find ω_j(M, α) := √(-ρ_j(M, α)) is the unique root in (0, ∞) of the implicit function
G_M(x, α) := A(x, α)D_M(x, α) - B(x, α)C_M(x, α)
with coefficients given in terms of the modified Bessel functions I_j and K_j as
A(x, α) := x K_j'(x) - α K_j(x),
B(x, α) := x I_j'(x) - α I_j(x),
C_M(x, α) := x K_j'(Mx),
D_M(x, α) := x I_j'(Mx).
(Despite the lack of dependence of C_M and D_M on α, we keep the notation for consistency.)
Since D_M is non-vanishing on (0, ∞), we may divide G_M by D_M and preserve the location of the unique root of G_M. Furthermore, recall that η_k(M,α) is increasing as a function of M and bounded above by λ_k^ext(, α) (see the paragraph after Equation (<ref>)). It follows that ω_j(M, α) is decreasing as a function of M and bounded below by μ_j(α) > 0, and so we may restrict our analysis to the compact interval
S = [μ_j(α)/2, ω_j(M_0, α)]
where M_0 is large and finite. We will show that
G_M(x,α)/D_M(x, α) converges uniformly to A(x,α) on the interval S as M →∞, or, equivalently:
-B(x, α)C_M(x, α)/D_M(x, α) → 0 uniformly on S as M →∞.
Since the convergence is uniform, the roots ω_j(M, α) must converge to a root of A(x,α), and as A(x, α) has μ_j(α) ∈ S as its unique root, we will be able to conclude the proof.
By <cit.>, we have the recurrence relations
I_j' = 1/2(I_j-1 + I_j+1), -K_j' = 1/2(K_j-1 + K_j+1), (j ≥1)
I_0' = I_1, -K_0' = K_1, (j = 0).
From these relations we see that I_j' increases strictly from 0 to ∞ (or 1/2 to ∞ if j = 1) and -K_j' decreases strictly from ∞ to 0. Thus, it follows the ratio
-C_M(x,α)/D_M(x,α) = -K_j'(Mx)/I_j'(Mx)
is positive and decreases to 0 as M increases. Furthermore, the ratio achieves its maximum value at the left endpoint of S for each fixed M. Since B is positive and does not depend on M, we deduce that
sup_x ∈ S-B(x,α)C_M(x,α)/D_M(x,α)≤ B(μ_j(α)/2, α) -K_j'(M μ_j(α)/2)/I_j'(M μ_j(α)/2)→ 0 as M →∞.
It follows that the convergence is uniform, and we conclude that
lim_Θ→πλ_k(Θ, α/sinΘ)sin^2(Θ) = lim_M →∞η_k(M, α) = λ_k^ext(𝔻, α).
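The convergence established above is easy to observe numerically. The sketch below (ours, for illustration) finds the root ω_j(M, α) of G_M/D_M and compares -ω_j(M, α)^2 with the exterior value -μ_j(α)^2 as the outer radius M grows; the two agree to several digits already for moderate M.

import numpy as np
from scipy.special import iv, kv, ivp, kvp
from scipy.optimize import brentq

def root_on_grid(f, a=1e-6, b=40.0, n=4000):
    # Locate the unique sign change of f on (a, b) and refine it with brentq.
    xs = np.linspace(a, b, n)
    vals = np.array([f(x) for x in xs])
    for x0, x1, v0, v1 in zip(xs[:-1], xs[1:], vals[:-1], vals[1:]):
        if np.isfinite(v0) and np.isfinite(v1) and v0 * v1 < 0:
            return brentq(f, x0, x1)
    raise ValueError("no sign change found on the grid")

def mu_exterior(j, alpha):
    return root_on_grid(lambda x: x * kvp(j, x) - alpha * kv(j, x))

def omega_annulus(j, alpha, M):
    # Root of G_M / D_M = A(x) - B(x) K_j'(Mx) / I_j'(Mx), as in the text.
    A = lambda x: x * kvp(j, x) - alpha * kv(j, x)
    B = lambda x: x * ivp(j, x) - alpha * iv(j, x)
    return root_on_grid(lambda x: A(x) - B(x) * kvp(j, M * x) / ivp(j, M * x))

# j, alpha = 1, -1.5
# for M in (2, 5, 10, 20, 50):
#     print(M, -omega_annulus(j, alpha, M)**2)   # approaches -mu_exterior(j, alpha)**2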
§ HIGHER-DIMENSIONAL DIRICHLET MONOTONICITY — THEOREM 1.3
For this section, we jump up to dimensions three and higher, and the reader needs to keep in mind we now analyze higher-dimensional Dirichlet eigenvalues, not two-dimensional Robin eigenvalues. One may find it helpful to refer to Appendix <ref> for a refresher on the Laplace–Beltrami operator in our local coordinate system. The techniques here are quite different from the proofs of Theorems <ref> and <ref>, as we use direct trial functions instead of conformal maps.
There are three main steps in the proof of Theorem <ref>:
* Transform an eigenspace on C(Θ_1) to a trial subspace on C(Θ_2).
* Rescale the Rayleigh quotient on C(Θ_2) back to C(Θ_1), incurring the cost of certain coefficient functions.
* Bound the coefficient functions and recover the standard Rayleigh quotient for λ_k(Θ_1).
The main point of interest in the proof is in the first step, where the transformation is performed to preserve the area elements of C(Θ_1) and C(Θ_2). We will first consider the spherical case corresponding to Θ > 0, as the proof is slightly more demanding there.
§.§ Spherical case
We first note that λ_k(Θ) is a positive decreasing function of Θ by Dirichlet domain monotonicity, and as sin^2 decreases strictly on [π/2, π] and is positive, we need only show that λ_k(Θ)sin^2(Θ) is decreasing strictly on the interval (0, π/2).
For any Θ we have the variational characterization.
λ_k(Θ) = min_ℒmax_0 ≠ f ∈ℒ∫_C(Θ) |∇_𝕊^n f|^2 dV /∫_C(Θ) |f|^2 dV =: min_ℒmax_0 ≠ f ∈ℒℛ_Θ(f)
where ℒ varies over k-dimensional subspaces of H_0^1(C(Θ)). In coordinates, the Rayleigh quotient takes the form
ℛ_Θ(f) =∫_𝕊^n-1∫_0^Θ(|∂_1 f|^2 + 1/sin^2θ|∇_𝕊^n-1 f|^2)sin^n-1θ dθ dξ/∫_𝕊^n-1∫_0^Θ |f|^2sin^n-1θ dθ dξ,
where we write ∂_1 f to denote the derivative in the first component, the aperture coordinate. Since we frequently pass between different spherical caps, this notation is employed for clarity.
Let 0 < Θ_1 < Θ_2 ≤π/2. Our goal is to show λ_k(Θ_1)sin^2(Θ_1) > λ_k(Θ_2)sin^2(Θ_2). Let {ϕ_1, …, ϕ_k} be the first k Dirichlet eigenfunctions on C(Θ_1). Call the subspace generated by these functions 𝒦. We want to extend these functions suitably to C(Θ_2) to be trial functions in the Rayleigh quotient for λ_k(Θ_2). It is unclear what such an extension ϕ̃_j should be. A simple linear extension in the θ variable would not interact well with the Rayleigh quotient. The volume element would transform into sin^n-1(Θ_1θ/Θ_2) in both the numerator and denominator and not cancel out with any factors, crushing our hopes of proving monotonicity. However, if we normalize the trial function by the volume element, we can defeat this obstacle. We are thus motivated to define the following transformation: let f ∈ H_0^1(C(Θ_1)) and define
f̃ (θ, ξ) := f(Θ_1 θ/Θ_2, ξ) √(sin^n-1(Θ_1 θ/Θ_2))/√(sin^n-1(θ))∈ H_0^1(C(Θ_2)).
(At the origin, the ratio of sines is understood as its limit lim_θ→ 0sin(Θ_1 θ/Θ_2)/sin(θ) = Θ_1/Θ_2.) In essence, this transformation is the spherical equivalent of linear dilation in the plane. Linear dilation in the plane stretches a function uniformly to fit into a larger domain while only incurring a constant multiplicative cost to the L^2 norm. Here one sees the same phenomenon and visualizes it suitably adjusted to a spherical cap scenario. One stretches the function along the spherical cap as one might stretch rubber, while scaling the L^2 norm by a constant, with
∫_C(Θ_2) |f̃|^2 dV = Θ_2/Θ_1∫_C(Θ_1) |f|^2 dV.
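For completeness, this identity is just the change of variables t = Θ_1θ/Θ_2 (a short sketch in the notation above; no new quantities are introduced):
∫_C(Θ_2) |f̃|^2 dV = ∫_𝕊^n-1∫_0^Θ_2 |f(Θ_1θ/Θ_2, ξ)|^2 sin^n-1(Θ_1θ/Θ_2) dθ dξ = Θ_2/Θ_1∫_𝕊^n-1∫_0^Θ_1 |f(t, ξ)|^2 sin^n-1(t) dt dξ = Θ_2/Θ_1∫_C(Θ_1) |f|^2 dV,
since the ratio of sines in |f̃|^2 cancels the volume element and dθ = (Θ_2/Θ_1) dt.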
We will state the slightly stronger lemma:
Let 0 < Θ_1 < Θ_2 ≤π/2 and f ∈ H_0^1(C(Θ_1)). Then, as defined above, f̃∈ H_0^1(C(Θ_2)). Further, if 𝒦 = span{f_1, …, f_k} is linearly independent in H_0^1(C(Θ_1)), then 𝒦̃ := span{f̃_̃1̃, …, f̃_̃k̃} is linearly independent in H_0^1(C(Θ_2)), and the map f ↦f̃ is linear.
The calculations done below resulting in Equation (<ref>) show that ‖∇_𝕊^nf̃‖_L^2(C(Θ_2))^2 is bounded above by sin^2(Θ_1)/sin^2(Θ_2)‖f̃‖_L^2(C(Θ_2))^2ℛ_Θ_1(f). One can also clearly see that ‖f̃‖_L^2(C(Θ_2))^2 is finite. The linear independence and linearity are trivial and stated for completeness.
Consider the trial functions on C(Θ_2) defined to be ϕ̃_j for 1 ≤ j ≤ k, where the ϕ_j are Dirichlet eigenfunctions on C(Θ_1). The ϕ̃_j are linearly independent, so let their span be our trial subspace 𝒦̃ in the variational characterization for λ_k(Θ_2). The maximizing function is thus achieved for some linear combination g̃ = ∑ a_j ϕ̃_j so that
λ_k(Θ_2) ≤∫_𝕊^n-1∫_0^Θ_2(|∂_1 g̃|^2 + 1/sin^2θ|∇_𝕊^n-1g̃|^2)sin^n-1(θ) dθ dξ/∫_𝕊^n-1∫_0^Θ_2 |g̃|^2sin^n-1(θ) dθ dξ.
Define g := ∑ a_j ϕ_j ∈ H_0^1(C(Θ_1)) and note that
|∇_𝕊^n-1g̃|^2 = sin^n-1(Θ_1 θ/ Θ_2)/sin^n-1θ |∇_𝕊^n-1g|^2
as the transformation only affects θ and not ξ. After the linear change of variables θ = Θ_2 t /Θ_1, we find an upper bound for λ_k(Θ_2) of
∫_𝕊^n-1∫_0^Θ_1(|(∂_1 g̃)(Θ_2 t/Θ_1, ξ)|^2 sin^n-1(Θ_2 t/Θ_1)/sin^n-1(t)+ 1/sin^2(Θ_2 t/Θ_1)|∇_𝕊^n-1 g(t, ξ)|^2)sin^n-1(t) dt dξ/∫_𝕊^n-1∫_0^Θ_1 |g(t, ξ)|^2sin^n-1(t) dt dξ.
Our goal is to compare this last displayed formula with a Rayleigh quotient over C(Θ_1) involving the function g. A routine calculation shows in the numerator that
|(∂_1 g̃)(Θ_2 t/Θ_1, ξ)|^2 sin^n-1(Θ_2 t/Θ_1)/sin^n-1(t) = 1/(4Θ_2^2)( (n-1)^2( cot(t)Θ_1 - cot(Θ_2 t/Θ_1)Θ_2 )^2 g(t, ξ)^2
+ 4Θ_1^2 ∂_1 g(t, ξ)^2 + 4(n-1)Θ_1 ( cot(t)Θ_1 - cot(Θ_2 t/Θ_1)Θ_2 ) g(t, ξ) ∂_1 g(t, ξ) ).
At this point, we would like to perform a direct comparison of Rayleigh quotients, but g∂_1 g need not be positive. So, we write 2g∂_1 g = ∂_1(g^2) and integrate by parts. The boundary terms vanish by the Dirichlet boundary conditions and the fact that sin^n-2(t)cos(t) = 0 when t = 0 and n ≥ 3.
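To spell the step out, write w(t) := Θ_1cot(t) - Θ_2cot(Θ_2 t/Θ_1) for the cotangent difference appearing in the cross term above (w is shorthand used only here). A sketch of the integration by parts, for fixed ξ and up to the constant in front of the cross term, is
∫_0^Θ_1 w(t) ∂_1(g^2) sin^n-1(t) dt = [w(t) g^2 sin^n-1(t)]_0^Θ_1 - ∫_0^Θ_1 g^2 ( w'(t)sin^n-1(t) + (n-1)w(t)cos(t)sin^n-2(t) ) dt,
where the bracketed term vanishes because g(Θ_1, ·) = 0 and w(t)sin^n-1(t) → 0 as t → 0 when n ≥ 3. Collecting the resulting g^2 terms with those already displayed produces the coefficient R below.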
We thus obtain an upper bound for λ_k(Θ_2)sin^2(Θ_2) (observe here we multiply the Rayleigh quotient by the scaling factor sin^2(Θ_2)) of
λ_k(Θ_2)sin^2(Θ_2)≤∫_𝕊^n-1∫_0^Θ_1( P|∂_1 g|^2 + Q|∇_𝕊^n-1g|^2 + R|g|^2 )sin^n-1(t) dt dξ/∫_𝕊^n-1∫_0^Θ_1 |g|^2 sin^n-1(t) dt dξ
where the coefficient functions are
P := sin^2(Θ_2)Θ_1^2/Θ_2^2,
Q:= sin^2(Θ_2)/sin^2(Θ_2 t / Θ_1),
R := sin^2(Θ_2)(n-1)/(4Θ_2^2)(Θ_2^2((n-1)cot^2(Θ_2 t/Θ_1) - 2csc^2(Θ_2 t/Θ_1)) - Θ_1^2((n-1)cot^2(t) - 2csc^2(t))).
We wish to recover the original form of the Rayleigh quotient by bounding P, Q, and R above by the proper quantities. Namely, we want to show P is bounded by sin^2(Θ_1), Q is bounded by sin^2(Θ_1)/sin^2(t), and R is non-positive.
§.§.§ Bound on P
As sin^2(x)/x^2 is a strictly decreasing function up to π, we readily see that P < sin^2(Θ_1).
§.§.§ Bound on Q
Consider the function sin^2(x)/sin^2(x t/Θ_1) where t < Θ_1. By taking a derivative in x, we see the function is strictly decreasing for x ∈ (0, π/2) if the inequality tan(tx/Θ_1) < t/Θ_1tan(x) holds. As tan(x)/x is a strictly increasing function, it follows that Q < sin^2(Θ_1)/sin^2(t).
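For the record, the derivative computation is just the logarithmic derivative
d/dx log( sin^2(x)/sin^2(x t/Θ_1) ) = 2cot(x) - 2(t/Θ_1)cot(x t/Θ_1),
which is negative exactly when tan(x t/Θ_1) < (t/Θ_1)tan(x).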
§.§.§ Bound on R
We will show that R is negative. Since sin^2(Θ_2)(n-1)/(4Θ_2^2) is positive, we need only show negativity of the quantity within the parentheses. It is enough to show
x^2((n-1)cot^2(x t/Θ_1) - 2csc^2(x t/Θ_1))
is a decreasing function for x ∈ (0, π/2). By differentiating with respect to x, we want to show
(n-1)cot^2(y) - (n-3) y cot(y)csc^2(y) - 2csc^2(y)
is negative for y ∈ (0, π/2). Rewrite the above expression as
(n-3)csc^2(y)(1 - ycot(y)) - (n-1).
Since csc^2(y)(1 - ycot(y)) ≤ 1 on (0, π/2) (this inequality is equivalent to sin(2y) ≤ 2y) and n ≥ 3, the expression is clearly negative and (<ref>) is decreasing.
With these bounds on the coefficient functions, it follows that
λ_k(Θ_2)sin^2(Θ_2) < ℛ_Θ_1(g)sin^2(Θ_1).
Since g ∈span{ϕ_1, …, ϕ_k}, we know ℛ_Θ_1(g) ≤λ_k(Θ_1)
and thus λ_k(Θ)sin^2(Θ) is a strictly decreasing function on (0, π/2).
§.§ Hyperbolic case
The proof in this case is virtually identical. Hyperbolic trig functions replace their Euclidean counterparts.
For Θ < 0, the Rayleigh quotient is
ℛ_Θ(f) = ∫_𝕊^n-1∫_0^|Θ|(|∂_1 f|^2 + 1/sinh^2θ|∇_𝕊^n-1 f|^2)sinh^n-1θ dθ dξ/∫_𝕊^n-1∫_0^|Θ| |f|^2sinh^n-1θ dθ dξ.
Let Θ_1 < Θ_2 < 0. We wish to show that λ_k(Θ_1)sinh^2(Θ_1) > λ_k(Θ_2)sinh^2(Θ_2). We rework Lemma <ref> and define
f̃ (θ, ξ) := f(|Θ_1 |θ/|Θ_2|, ξ) √(sinh^n-1(|Θ_1| θ/|Θ_2|))/√(sinh^n-1(θ)).
Let Θ_1 < Θ_2 < 0 and f ∈ H_0^1(C(Θ_1)). Then, f̃∈ H_0^1(C(Θ_2)) and the mapping f ↦f̃ is linear and preserves linear independence.
The proof of the theorem proceeds in the same way, with the necessary modifications, and we arrive at the bound
λ_k(Θ_2)sinh^2(Θ_2) ≤∫_𝕊^n-1∫_0^|Θ_1|( P|∂_1 g|^2 + Q|∇_𝕊^n-1g|^2 + R|g|^2 )sinh^n-1(t) dt dξ/∫_𝕊^n-1∫_0^|Θ_1| |g|^2 sinh^n-1(t) dt dξ
where the new hyperbolic coefficient functions are
P := sinh^2(|Θ_2|)|Θ_1|^2/|Θ_2|^2,
Q:= sinh^2(|Θ_2|)/sinh^2(|Θ_2| t / |Θ_1|),
R := sinh^2(|Θ_2|)(n-1)/(4|Θ_2|^2)(|Θ_2|^2((n-1)coth^2(|Θ_2| t/|Θ_1|) - 2csch^2(|Θ_2| t/|Θ_1|)) - |Θ_1|^2((n-1)coth^2(t) - 2csch^2(t))).
The bounds on these coefficient functions follow from the fact that all of sinh^2(x)/x^2, sinh^2(x)/sinh^2(xt/|Θ_1|), and y^2((n-1)coth^2(y) - 2csch^2(y)) are increasing functions on (0, ∞), proven similarly as in the spherical case. The rest of the proof is identical, and we conclude that λ_k(Θ)sinh^2(Θ) decreases strictly on (-∞, 0).
§.§ Continuity and limits as Θ→ -∞, 0, π
Continuity away from Θ = 0 follows from the fact that λ_k(Θ), sin^2(Θ), and sinh^2(Θ) are all continuous. Near Θ = 0, sinΘ is well-approximated by Θ, and so the Rayleigh quotient for λ_k(Θ)sin^2(Θ) is uniformly approximated near Θ = 0 by
Θ^2 ∫_𝕊^n-1∫_0^Θ(|∂_1 f|^2 + 1/θ^2|∇_𝕊^n-1 f|^2)θ^n-1 dθ dξ/∫_𝕊^n-1∫_0^Θ |f|^2θ^n-1 dθ dξ.
If λ_k(𝔹^n) is the k-th Dirichlet eigenvalue of the Euclidean Laplacian on the unit ball 𝔹^n, then the above quantity is the Rayleigh quotient for Θ^2 λ_k(Θ𝔹^n) in spherical coordinates. By Euclidean scale invariance, Θ^2 λ_k(Θ𝔹^n) = λ_k(𝔹^n). Apply the corresponding argument to sinh |Θ| and continuity at the origin follows.
When Θ > π/2, we have 0 ≤λ_k(Θ)sin^2(Θ) ≤λ_k(π/2)sin^2(Θ) by Dirichlet domain monotonicity, and hence λ_k(Θ)sin^2(Θ) → 0 as Θ→π.
As Θ→ -∞, it is sufficient to show λ_1(Θ)sinh^2(Θ) →∞. From Borisov and Freitas <cit.>, we have λ_1(Θ) = π^2/Θ^2 + 1 for n=3 and the statement follows easily there. For n ≥ 4, they also provide the lower bound
λ_1(Θ) ≥ j^2_(n-2)/2, 1/Θ^2 + n(n-1)/6
and again the result is clear.
§ ACKNOWLEDGMENTS
The author was supported by National Science Foundation award #2246537 to Richard Laugesen.
§ COORDINATE REPRESENTATIONS
Spherical:
∇_𝕊^n = ∂_θ ê_θ + 1/sinθ ∇_𝕊^n-1, Δ_𝕊^n = 1/sin^n-1θ ∂_θ(sin^n-1θ ∂_θ) + 1/sin^2θ Δ_𝕊^n-1, dV = sin^n-1θ dθ dξ.
Hyperbolic:
∇_ℍ^n = ∂_θ ê_θ + 1/sinhθ ∇_𝕊^n-1, Δ_ℍ^n = 1/sinh^n-1θ ∂_θ(sinh^n-1θ ∂_θ) + 1/sinh^2θ Δ_𝕊^n-1, dV = sinh^n-1θ dθ dξ.
§ NUMERICS
The plots in this section accomplish the following:
* Illustrate Theorem <ref> in Figure <ref>.
* Numerically support Conjectures <ref>, <ref>, and <ref> in Figures <ref>, <ref>, and <ref> respectively.
* Highlight non-monotonicity of certain eigenvalue functionals in Figures <ref> and <ref>.
The plots were generated using Wolfram Mathematica 14.1.
99
AB95
M. S. Ashbaugh and R. D. Benguria.
Sharp upper bound to the first nonzero Neumann eigenvalue
for bounded domains in spaces of constant curvature.
J. London Math. Soc. (2)
52 (1995), 402–416.
AB01
M. S. Ashbaugh and R. D. Benguria.
A sharp bound for the ratio of the first two Dirichlet eigenvalues of a domain in a hemisphere of S^n.
Trans. Amer. Math. Soc. 353 (2001), 1055–1087.
BKN10
C. Bandle, Y. Kabeya, and H. Ninomiya.
Imperfect bifurcations in nonlinear elliptic equations on spherical caps. Commun. Pure Appl. Anal. 9 (2010), no.5, 1189–1208.
BKN19
C. Bandle, Y. Kabeya, and H. Ninomiya.
Bifurcating solutions of a nonlinear elliptic Neumann problem on large spherical caps. Funkcial. Ekvac. 62 (2019), no. 3, 285–317
BL07
R. D. Benguria and H. Linde.
A second eigenvalue bound for the Dirichlet Laplacian in hyperbolic space.
Duke Math. J. 140 (2007), 245–279.
B22
S. Berge. Eigenvalues of the Laplacian on balls with spherically symmetric metrics. Anal. Math. Phys. 13, 14 (2023).
BF17
D. Borisov and P. Freitas.
The spectrum of geodesic balls on spherically symmetric manifolds.
Comm. Anal. Geom. 25, no. 3 (2017), 507–544.
BFK17
D. Bucur, P. Freitas, and J. Kennedy.
The Robin problem.
In: Shape optimization and spectral theory (pp. 78–119).
Warsaw: De Gruyter Open. (2017).
C75
S. Y. Cheng. Eigenfunctions and eigenvalues of Laplacian. In: Differential geometry
(Proc. Sympos. Pure Math., Vol. XXVII, Part 2, Stanford Univ., Stanford, CA, 1973). (1975).
C84
I. Chavel.
Eigenvalues in Riemannian geometry.
Pure Appl. Math., 115
Academic Press, Inc., Orlando, FL, (1984).
E10
L. Evans.
Partial differential equations.
Second edition
Grad. Stud. Math., 19. American Mathematical Society, Providence, RI, (2010).
GP17
A. Girouard and I. Polterovich.
Spectral geometry of the Steklov problem (survey article).
J. Spectr. Theory 7 (2017), no. 2, 321–359.
FK15
P. Freitas and D. Krejčiřík.
The first Robin eigenvalue with negative boundary parameter. Adv. Math. 280 (2015), 322–339.
KL18
D. Krejčiřík and V. Lotoreichik.
Optimisation of the lowest Robin eigenvalue in the exterior of a compact set.
J. Convex Anal. 25, no. 1 (2018), 319–337.
KL20
D. Krejčiřík and V. Lotoreichik.
Optimisation of the lowest Robin eigenvalue in the exterior of a compact set, II: non-convex domains and higher dimensions.
Potential Anal. 52 (2020), 601–614.
KL23
D. Krejčiřík and V. Lotoreichik.
Optimisation and monotonicity of the second Robin eigenvalue on a planar exterior domain.
arXiv:2307.14286 (2023), https://arxiv.org/abs/2307.14286.
LL22
J. J. Langford and R. S. Laugesen.
Scaling inequalities for spherical and hyperbolic eigenvalues.
J. Spectr. Theory 13 (2023), no. 1, 263–296.
L12
R. S. Laugesen. Spectral Theory of Partial Differential Equations — Lecture Notes. arXiv:1203.2344 (2017), https://arxiv.org/abs/1203.2344.
L17
G. Leoni.
A first course in Sobolev spaces.
Second edition
Grad. Stud. Math., 181
American Mathematical Society, Providence, RI, (2017).
S11
J. Segura.
Bounds for ratios of modified Bessel functions and associated Turán-type inequalities.
J. Math. Anal. Appl. 374, no. 2 (2011), 516–528.
DLMF
NIST Digital Library of Mathematical Functions.
F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds.
<https://dlmf.nist.gov/>.
|
http://arxiv.org/abs/2409.03687v1 | 20240905164247 | On moments of the derivative of CUE characteristic polynomials and the Riemann zeta function | [
"Nick Simm",
"Fei Wei"
] | math.PR | [
"math.PR",
"math-ph",
"math.MP",
"math.NT",
"60B20, 11M50, 15B52"
] |
Department of Mathematics, University of Sussex, Brighton, BN1 9RH, United Kingdom
[email protected], [email protected]
http://arxiv.org/abs/2409.02201v1 | 20240903181106 | Impact Evaluations in Data Poor Settings: The Case of Stress-Tolerant Rice Varieties in Bangladesh | [
"Jeffrey D. Michler",
"Dewan Abdullah Al Rafi",
"Jonathan Giezendanner",
"Anna Josephson",
"Valerien O. Pede",
"Elizabeth Tellman"
] | econ.GN | [
"econ.GN",
"q-fin.EC"
] |
Jeffrey D. Michler^1, Dewan Abdullah Al Rafi^2, Jonathan Giezendanner^3, Anna Josephson^1, Valerien O. Pede^4, Elizabeth Tellman^5
[1]Department of Agricultural and Resource Economics, University of Arizona
[2]Department of Agricultural and Applied Economics, University of Georgia
[3]International Rice Research Institute (IRRI)
[4]Earth Intelligence Lab, Massachusetts Institute of Technology (MIT)
[5]School of Geography, Development, and Environment, University of Arizona
Impact Evaluations in Data Poor Settings: The Case of Stress-Tolerant Rice Varieties in BangladeshCorrespondence to mailto:[email protected]@arizona.edu. A pre-analysis plan for this research has been filed with Open Science Framework (OSF): https://doi.org/10.17605/OSF.IO/YE7PVhttps://osf.io/ye7pv. We gratefully acknowledge funding from the Standing Panel on Impact Assessment (SPIA), the Bill and Melinda Gates Foundation (BMGF), and the CGIAR Research Program on Rice. We are especially grateful to Aileen Maunahan, Jorrel Aunario, Pavan Yeggina, and Renaud Mathieu for their work on the early stages of EO data generation, model building, and creating training data sets. We also greatly appreciate the work on Donald Villanueva and Humnath Bhandari in the 2022 data collection effort as well as Donald, Rose San Valentin, and Rowell Dikitanan for initial data cleaning and database construction. This paper has been shaped by conversations with participants at the Center for Environmental Economics and Sustainability Policy (CEESP) seminar at Arizona State University, the 7^th African Conference of Agricultural Economists in Durban, the 6^th International Rice Congress in Manila, and the the 32^nd International Conference of Agricultural Economists in New Delhi. An earlier version of this paper was presented at the AAEA annual meeting in Anaheim.
August 2024
§ ABSTRACT
Impact evaluations (IEs) of new technologies are critical to improving investment in national and international development goals. Yet many technologies are introduced at times or in places that lack the necessary data to conduct a well identified IE. We present a new method that combines Earth observation (EO) data, advances in machine learning, and survey data so as to allow researchers to conduct IEs when traditional economic data is missing. To demonstrate our approach, we study stress tolerant rice varieties (STRVs) introduced more than a decade ago. Using 20 years of EO data on rice production and flooding, we fail to replicate existing evidence of STRV effectiveness. We validate this failure to replicate with household panel data and through Monte Carlo simulations that demonstrate the sensitivity of past evidence to mismeasurement. Our findings speak to the challenges and promises of using EO data to conduct IEs in data poor settings.
JEL Classification: C51, C81, D83, O13, Q12, Q54
Keywords: Remote Sensing, Earth Observation, Machine Learning, Rice, Flooding, Bangladesh
§ INTRODUCTION
Getting one's hands on a sufficient amount of relevant data has always been a challenge in answering economic questions, whether theoretical or empirical. Karl Marx struggled to obtain firm-level production data for use in developing his theory of value <cit.>. John Maynard Keynes lamented the lack of sufficient data to definitively disprove Arthur Pigou's theory of unemployment <cit.>. Simon Kuznets, in part, won his Nobel Prize for creating standardized and regularly collected data on national accounts <cit.>. One reason why the earliest advances in econometrics were made in the sub-field of agricultural economics was that George F. Warren convinced the U.S. Census Bureau to conduct a separate national census of agriculture at more frequent intervals than the decennial census <cit.>. This meant that agricultural economists had much more data for empirical research than those working on other macro or micro questions.[Nearly as important as having any data is having sufficient data. Well-known empirical findings, such as the Phillips Curve <cit.> and the deterrence effect of capital punishment <cit.>, have proven much less robust than initially thought once additional data were added to the analysis <cit.>.]
The last quarter century has seen an explosion in the availability of data, economic and otherwise. Data, in combination with the personal computer, the internet, mobile information, and communication technology, has created a data rich world. This plethora of data has allowed economists to answer a host of new questions on anything from the historic effects of colonial institutions <cit.> to the current effects of misinformation <cit.> to the future effects of climate change <cit.>. Economists have also become adept at integrating traditional socioeconomic data with new types of data from a variety of sources. In particular, remotely sensed Earth observation (EO) data has become a favorite for use in economic research <cit.>. Economists have used EO data to examine questions about economic growth, agricultural productivity, land use, population growth, poverty, child mortality, and governance <cit.>.[A cottage industry exists of researchers using EO weather data to predict everything from human capital formation <cit.> to labor markets <cit.> to conflict and institutions <cit.> to agricultural production <cit.> to intra-household bargaining power <cit.> to technology adoption <cit.>.]
Yet the recency of so much of this data means that there is at least one place that remains relatively data poor: the past. As <cit.> show, the increase in the spatial and temporal resolution of EO data has primarily occurred since 2010, with the most substantial improvements (<10m, daily) occurring in the last five years. While EO data has been used to answer many research questions about recent or contemporaneous events, its usefulness in answering economic questions about the (relatively recent) past has, thus far, been limited. The requirements for ground truth or training data to improve the accuracy of EO derived outcomes also place perceived limits on where and when EO data can be used to answer a research question.
In this paper, we develop a methodology to overcome the recency bias in high quality EO data so that it can be used to answer economic research questions in environments that remain - and are likely to remain - data poor. We do this in the context of a long-term, large-scale impact evaluation on the effectiveness of stress tolerant rice varieties (STRVs) in mitigating yield loss to flooding in Bangladesh. The flood-tolerant rice variety, Swarna-Sub1, was first released in Bangladesh in 2010 and other varieties quickly followed <cit.>. While a randomized control trial (RCT) was conducted in 128 villages in Odisha, India in 2011 <cit.> and household panel data on varietal-specific rice production exists in South Asia starting in 2014 <cit.>, no data exist that would allow for a before/after comparison of the impacts of adoption on a large-scale. The relatively long time since the introduction of varieties with the Sub1 gene means that the high spatial and temporal resolution EO data now being used in impact evaluations <cit.> also does not exist for use as a baseline.
In order to conduct our impact evaluation in this data poor environment we combine data from a variety of sources (EO, administrative, household survey), develop new approaches to generating ground truth data, and leverage recent advances in machine learning (ML). Our first task is to build maps for where rice is grown in Bangladesh. The standard approach is to collect GPS data on rice and non-rice plots and use that as training data for an ML algorithm to predict rice area at a larger scale <cit.>. While we have this type of ground truth data for 2021 and 2022, it does not exist for previous years. To overcome this lack of ground truth data, we hand-annotate thousands of Google Earth images to build a training data set going as far back as 2002. Having identified rice area, we next build maps of where rice was flooded. The traditional method for identifying floods is to use an existing databases of large-scale floods based on optical sensors that might be blocked by clouds <cit.> or to handcraft flood measures using EO data <cit.>. Both methods miss many small flood events or have cloud blockage and thus under-measure the extent of flooding <cit.>. Substantial improvements in accuracy can be achieved using ML methods similar to those used in rice mapping but here again the problem is a lack of ground truth data. To overcome this, we use radar data from Sentinel-1, a satellite with high spatial and temporal resolution data not blocked by clouds, launched in 2014, as training data for a Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM) model that then predicts past flooding in lower resolution historical data <cit.>.
Our first approach to estimating the impacts of STRVs combines these rice and flood maps with the EO derived enhanced vegetative index (EVI), a common proxy for yields, and administrative data on STRV seed release. This produces a panel data set that covers the entire country for the 20-year period 2002-2021. Using a variety of econometric methods (event study, difference-in-difference, two-way fixed effects), each with its own strengths and weaknesses regarding identifying assumptions, we find little evidence that, after the introduction of STRVs, EVI in flooded areas is higher than it was in those same areas before the introduction of STRVs.
Our failure to replicate the positive impacts of STRVs on yields during floods, which have been documented in numerous agronomic trials and in an RCT, sheds doubt on the effectiveness of our EO-driven approach. To assuage this doubt, we validate our approach using household panel survey data that covers the years 2014, 2017, and 2022 and is representative of rice growing regions in Bangladesh.[Prior to the collection of the new round of survey data, we pre-specified our analysis and archived a pre-analysis plan (PAP) publicly on Open Science Foundation (OSF) <cit.>.] We first use a two-way fixed effects (TWFE) model to estimate the impacts of household adoption of STRVs on household rice yields. That the household data all comes post-introduction of STRV means we have no baseline data in which adoption was completely exogenous (as the seed did not yet exist). Recognizing this, we implement a TWFE model with instrumental variables (IV) in which past flood events derived from our flood maps are used to predict STRV adoption. Results from our household-level analysis comport with the evidence from our EO-driven approach: there is little evidence that STRV adoption results in higher yields for the household during flooding relative to similarly severe floods experienced by the same household. Analysis using cross-sectional plot-level data corroborates the household and EO results.
Why then, using three different levels of analysis and a variety of econometric specifications, do we fail to find the large, positive impacts of STRVs on yield that exist in the agronomic and RCT data? One potential reason is that, unlike high yielding seed varieties which create unconditional yield effects, STRVs are a stochastic technology that generates higher order treatment effects. Based on agronomic field trials, the specific technology we study generates measurable treatment effects only when flooding is greater than seven days but less than 17 days. Outside that window, STRVs have yields statistically indistinguishable from non-STRVs. To test this conjecture, we conduct exploratory (not pre-specified) analysis using EO flood data on the exact number of days a region experienced flooding. We also conduct Monte Carlo simulations using the RCT data from <cit.> and <cit.> to examine the sensitivity of those results to injecting noise into the data. This exploratory analysis demonstrates that measuring a statistically significant treatment effect for STRVs is highly sensitive to mismeasurement or mis-classification, be it in the EO or RCT data.
We make three contributions to the literature. First, we contribute to the growing set of geospatial impact evaluations. A geospatial impact evaluation uses georeferenced EO data to measure either the intervention, the outcome, or both <cit.>. This geospatial data can be used as part of an RCT, as in <cit.>, <cit.>, and <cit.>, but is more frequently used as part of a quasi-experimental approach to establishing the causal effects of an intervention. Geospatial impact evaluations have been used to study the impacts of governance, anti-poverty programs, and conditional cash transfers on deforestation <cit.>, and the impact of insecticide treated bednets on child mortality <cit.>, to name a few. Many of these geospatial impact evaluations combine EO data with survey data on specific individual outcomes in specific areas. However, a growing proportion of studies, like <cit.> and <cit.>, rely solely on EO and administrative data. In this paper, we combine both of these approaches, relying solely on EO and administrative data in our first set of analyses and relying on a combination of EO and household and plot survey data in the second set. We believe this multi-pronged approach provides increased reliability in our estimates by providing evidence from a variety of data sources at a variety of scales.
Second, we make a methodological contribution to the broader literature that looks to combine EO data with socioeconomic data. Both the availability and quality of EO data are changing rapidly. So too is the sophistication of model building to use both optical and radar EO data to predict specific outcomes of interest. Early economic studies that integrated EO data typically relied on what <cit.> term “handcrafted features”, such as calculating ratios of reflectances at different wavelengths, for use in shallow predictive models. One commonly used example of a handcrafted metric is the Modified Normalized Difference Water Index (MNDWI) <cit.>. The MNDWI is simply a ratio of the differences in surface reflectance such that positive values are inferred to be water and negative values are inferred to be land <cit.>. An example of a shallow predictive model is using a simple linear regression to predict values of pixels that might be missing or obscured by clouds <cit.>. We leverage recently developed techniques from ML to fuse optical and radar data, including models that use the spatial and temporal structure of the data, to improve predictive ability and reduce noise in our estimates of where rice is grown and where flooding occurs.
Finally, we add to the literature on the impact of agricultural technology by studying a stochastic technology that generates higher order treatment effects. The aim of the first generation of Green Revolution technologies was to improve on the low yields of landrace varieties <cit.>. Newer generations of high yielding varieties now also embody genetics to allow the plant to tolerate biotic and abiotic stresses <cit.>. This creates a challenge for the researcher trying to measure impact, as the effectiveness of STRVs manifests only within a very specific window and whether an observation fits within that window is often a function of a stochastic event. As our results show, this “Goldilocks Problem” places a limit on the effectiveness of geospatial impact evaluations, particularly in data poor settings. However, as our exploratory analysis and Monte Carlo simulations show, this sensitivity to misallocation, mismeasurement, or mis-classification is not limited to EO data but applies to survey and experimental data as well - a point made by <cit.>, <cit.>, and <cit.> that our paper reinforces.
§ STUDY CONTEXT
Green Revolution technologies played a critical role in improving food security and reducing poverty in many developing countries <cit.>. Early in the Green Revolution, many technologies were developed for and disseminated in agronomically favorable environments, bypassing environments where abiotic stresses resulted in low or uncertain yields <cit.>. These earlier technologies often embodied unconditional yield effects. Simply planting them resulted in high yields under existing growing conditions and farming practices, though even greater yields could be achieved when high yielding seeds were bundled with other modern inputs.
Stress tolerant seed varieties, including stress tolerant rice varieties (STRVs), were developed to address the rising challenges of climate change-induced stresses, such as salinity, extreme temperature, drought, or submergence <cit.>. In South and South-East Asia, where seasonal flooding is critical to rice cultivation, farmers face a substantial risk of crop loss due to early and sustained flooding <cit.>. To address this, researchers isolated submergence tolerant traits (the Sub1 gene) in wild rice <cit.> to create Swarna-Sub1 <cit.>.
Swarna-Sub1 has no yield penalty in the absence of flooding. If and when flooding occurs, Swarna-Sub1 can survive complete submergence of between seven and 17 days, in up to 25 cm of water, with little yield penalty. But, while Swarna and other Sub1 varieties outperform non-STRVs, the yield effect is not unconditional. Figure <ref> traces the yield effect in relation to the duration of flooding. There is a clear “sweet spot” in which Sub1 and other STRVs produce large yield gains. The effectiveness of STRVs is a function of an essentially random event (the length and intensity of flooding), making STRVs a stochastic technology and meaning that attempts to quantify their impact require measuring a higher order treatment effect. If flooding is outside the 10 day window (less than seven or more than 17 days), then we would expect STRVs to have a treatment effect equal to zero.[Additional details on Sub1 varieties are available in Online Appendix <ref>.]
Recent data collection efforts, including an RCT <cit.> and the Rice Monitoring Survey (RMS) <cit.>, have provided useful information on STRVs, but they each provide a circumscribed picture of STRVs that is limited by the purpose and methodology of each program. While the RCT has strong internal validity for measuring the impact of STRVs, it has weak external validity as it focused on a small set of villages in a single Indian state and as treated farmers were given the seeds. And, while the RMS provides strong external validity for the breadth of adoption, it has weak internal validity as it started collecting observational data four years after the initial introduction of STRVs. Neither source of data can - nor were they designed to - provide evidence on the long-term, large-scale impacts of IRRI's (and other's) investment in developing STRVs nor BMGF's investment in producing and disseminating STRVs.
As such, we are operating in a data poor environment. No historical data exist that provide the necessary information: 1) a panel, 2) observations before 2010 and well after 2010 (to allow time for dissemination and adoption), 3) representative at the national level or at least of rice cultivation, and 4) have data on rice varieties grown and yields.[There are a number of nation-wide panel surveys of households in Bangladesh that might appear to be good candidates for filling this data gap. In Online Appendix <ref> we discuss a variety of available data sets and why they are insufficient to answer our research question.] And current or future data collection efforts will not be useful as STRVs were introduced over a decade ago, meaning no relevant baseline can be constructed.
As an alternative, we use EO data to reconstruct the past. EO data provide a repeated time series for a given location (panel data), provide observations long before and after 2010, and provide coverage of an entire country. What EO data lack is information on varieties grown, which we can proxy for with administrative data on dissemination efforts, and yields, which we can proxy for with EVI. We can also enrich and validate the EO-based analysis using existing - if incomplete - survey data, such as the RMS, which has data on varieties and yields, the very items that the EO data is missing. Although there is no way to generate the data one would want if one were planning a prospective impact evaluation of a new agricultural technology, our method provides a way for researchers to answer important questions about the impact of technology adoption in data poor settings.
As a test case for our method, we focus on the adoption of Swarna-Sub1 and subsequent submergence tolerant varieties (which we collectively term STRVs) in flood-prone rice-growing regions of Bangladesh during the Aman season. In the Aman season, farmers face more crop loss due to natural disasters like flood <cit.>. The intuition behind our identification strategy is simple: for a given rice-growing location, if no flooding occurs, EVI provides a consistent signal of crop growth up until harvest. For that location, prior to 2010, flooding of x amount would cause a consistent signal response in which EVI values abruptly drop. Post introduction of STRVs, if there is no flooding, there would be no change in the EVI signal relative to before 2010. And, if there is flooding of x amount and no adoption of STRVs, there would again be no change in the signal response relative to before 2010. But, if there is a change in the signal response to flooding, the only explanation would be the adoption of STRVs, as no other technology to reduce the negative impact of flooding on rice was concurrently introduced.
§ DATA
Data availability is the key challenge to establishing the long-term, large-scale impacts of agricultural technology in international development. As <cit.> point out, the relative infrequency of in-person surveys in many developing countries can potentially be overcome using remotely sensed EO. In the case of STRVs in South Asia, where no comprehensive baseline data was collected, EO data allows us to look back into the past and reconstruct outcomes before and after the technology was disseminated.
In this study we combine data from various sources to provide evidence on the impact of STRV adoption from a variety of angles. For our national-level analysis, we combine data from MODIS with two types of ground truth data to build rice area maps. We also use MODIS to extract the enhanced vegetation index (EVI), which is our primary proxy for rice yields. To this we layer on flood maps built through a fusion model developed by <cit.> that combines data from Sentinel-1 and MODIS. Finally, we combine the EO data with administrative data at the district-level on the production and dissemination of STRV seeds by both public and private entities. For our household-level analysis, we combine three rounds of panel data collected as part of the Rice Monitoring Survey (RMS) with historic flood data from the fusion model. We supplement this with cross-sectional, plot-level data also drawn from the RMS.
The combination of these various sources of data provide us with a picture of the impacts of STRV adoption at a variety of spatial and temporal resolutions. While we view no single source of data or econometric approach to analysis as definitive in establishing the causal impacts of adoption, the sum of these various sources and methods provides a clear picture and a robust understanding of the impacts of this technology.
§.§ Remote Sensing EO Data
Our construction of the EO data set starts by identifying where rice was cultivated in Bangladesh. From the time series of rice maps, we extract EVI values. We then construct flood maps for areas where rice was grown. All of this pixel-level data is aggregated to the district-level (there are 64 districts in Bangladesh) and then combined with administrative data on seed dissemination. In this section, we provide a non-technical, intuitive summary of the methods used to generate the EO data. Online Appendix <ref> provides the technical details sufficient to implement our procedures in alternative contexts.
§.§.§ Rice Area Mapping
As early as <cit.>, rice pixels have been identified by detecting surface water followed by a sudden growth of vegetation. The detection of surface water is critical for identification as it represents agronomic flooding, which is unique in rice cultivation and the most common practice for cultivating rice in Bangladesh.[In recent years there has been a push by IRRI and other development organizations to promote alternate wetting and drying (AWD) and/or direct seeded rice (DSR), both of which would substantially reduce or completely eliminate the need for agronomic flooding. However, promotion of both methods has mainly focused on use during the Boro (dry) season when groundwater irrigation is necessary for cultivation. To date, there is a dearth of reliable data to measure adoption of either technology in Bangladesh. See <cit.> for a recent overview of AWD in Bangladesh.] In early applications of remote sensing, which relied on handcrafted, index-based features, agronomic flooding is signaled when the value of a vegetation index (e.g., the Enhanced Vegetation Index [EVI]) is lower than the value of a water index (e.g., the Land Surface Water Index [LSWI]). The sudden growth of vegetation is determined if the vegetation index reaches half of its maximum value within 40 days immediately after agronomic flooding.
If our goal was to simply construct rice area maps that were contemporaneous with the RMS household data collection efforts (2014, 2017, and 2022), then we could follow a process similar to <cit.> in validating the maps. This would involve taking GPS measures of the area of a number of rice and non-rice plots and using them as ground truth data for training a machine learning classification algorithm. The algorithm could then use the ground truth training data to predict rice and non-rice areas for the rest of the country for that year. However, our goal is to build rice area maps for the 20 year period of 2002-2021, providing us with a large t in both the pre- and post-STRV release period. No traditional ground truth data exists for predicting where rice is grown during years when no RMS data was collected.
To address this data deficit, we take two approaches designed to complement one other. Our first approach involves collecting traditional ground truth data. In 2021, we conducted a small survey of RMS households to collect plot GPS coordinates, crop varieties, and planting dates (n = 787). There was no socioeconomic component of this survey as it was solely focused on building a ground truth data set for training. This effort was augmented in 2022 by the third round of RMS data collection, which included household socioeconomic information plus plot production information and GPS locations (n = 3,219). All plot location data was collected following guidance in <cit.> and allows for creating rice area maps in 2021 and 2022 following standard procedures as in <cit.>.
Our second approach reconstructs “ground truth” through visual inspection of high resolution Google Earth images. This visual reconstruction using optical imagery is commonly used in the remote sensing literature and is particularly useful for capturing difficult-to-measure activities, like illegal fishing <cit.>. In our case, the difficult-to-measure activity is rice cultivation spanning back two decades. To build our historical ground truth data set, we selected three districts in Bangladesh (Barisal, Kurigram, and Rajshahi) that experience different hydrological characteristics and thus different distributions of riceland. We derived grids from the MODIS pixels and selected a stratified random sample of 150 points (75 points each for rice and non-rice areas) in each district for a total of 450 points (see Figure <ref>). This was completed for nine years for which Google Earth imagery exists: 2002, 04, 06, 09, 15, 16, 18, 19 and 21. The MODIS grids were overlaid on Google Earth imagery as the basis for visually determining the land cover of each grid for the particular season being validated. MODIS pixels were categorized as rice if, in the Google Earth imagery, more than 70% of the grid was visually determined to be rice. Pixels were categorized as non-rice otherwise.
We then classify rice areas for all years for the entire country by training a random forest (RF) algorithm on the ground truth data using a leave-one-out cross validation scheme. The input data to the model is a combination of surface reflectance and vegetation indices (specifically EVI) extracted from the MODIS eight- and sixteen-day composites at 250m resolution, as well as topographical indicators (elevation and slope, derived from the Digital Elevation Model (DEM) product FABDEM <cit.>). The RF model consists of 1,000 trees each with five leaves. We use a standard leave-one-out cross validation scheme in which we rotate the districts used as training, leaving one district out each time. This ensures generalizability of the model over space. To ensure generalizability over time, we reserve 2020 as an out-of-sample test set. Our model is accurate at predicting rice and non-rice pixels in 2020 82% of the time.[Online Appendix <ref> contains more details on the methods, including results of accuracy tests for the model.] The end results is a time series of rice and non-rice pixels for the entire country for all years spanning 2002 to 2022.
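To make this step concrete, a minimal sketch of the leave-one-district-out training loop is below, written with scikit-learn. The file name, column names, and the mapping of “1,000 trees each with five leaves” to the max_leaf_nodes argument are illustrative assumptions rather than a reproduction of our exact pipeline.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Illustrative layout: one row per annotated MODIS pixel-year with surface
# reflectance, EVI, and topographic features plus a label (1 = rice, 0 = non-rice).
df = pd.read_csv("rice_ground_truth.csv")  # hypothetical file
features = [c for c in df.columns if c not in ("district", "year", "is_rice")]

# Leave-one-district-out cross validation over the annotated districts.
for held_out in df["district"].unique():
    train, test = df[df["district"] != held_out], df[df["district"] == held_out]
    rf = RandomForestClassifier(n_estimators=1000, max_leaf_nodes=5, n_jobs=-1)
    rf.fit(train[features], train["is_rice"])
    acc = accuracy_score(test["is_rice"], rf.predict(test[features]))
    print(f"held-out district {held_out}: accuracy = {acc:.2f}")

# Temporal hold-out: train on all labeled years except 2020, test on 2020.
rf = RandomForestClassifier(n_estimators=1000, max_leaf_nodes=5, n_jobs=-1)
rf.fit(df[df["year"] != 2020][features], df[df["year"] != 2020]["is_rice"])
acc_2020 = accuracy_score(df[df["year"] == 2020]["is_rice"],
                          rf.predict(df[df["year"] == 2020][features]))
print(f"2020 hold-out accuracy = {acc_2020:.2f}")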
§.§.§ Flood Area Mapping
As with the rice maps, building reliable flood maps that look into the past raises data availability challenges. Until recently, two common approaches to using EO data for generating flood maps were either to rely on the historic flood database curated by the Dartmouth Flood Observatory <cit.> or to use MODIS to construct the Modified Normalized Difference Water Index (MNDWI) <cit.>.
The archive from the Dartmouth Flood Observatory, which extends back to 1985, has been highly cited but suffers from several well-known data deficiencies. Most importantly, it only includes large flood events that members of the observatory were able to identify in news stories and governmental sources. Many small floods that might destroy crops for some farmers but result in no deaths and generally do not make the news are not covered in the archive. Recently, <cit.> develop the Global Flood Database (GFD), which uses 12,719 scenes from Terra and Aqua MODIS sensors to produce daily images of flooding at 250m resolution from 2000 to 2018. The GFD builds on the flood events reported in the Dartmouth data and is validated on 30m resolution Landsat data. While the GFD is based on MODIS 250m, it provides only a binary flood/not-flood indicator at that resolution.
In terms of using MNDWI and MODIS, the MNDWI is a threshold method for identifying flooding developed by <cit.>. Water and non-water are identified based on surface reflectance and a pixel is defined as water if MNDWI is above zero. This handcrafted feature has been popular because it requires no ground truth data and is simple to construct. However, as <cit.> point out, methods that combine EO data with ground truth data and methods from deep learning can make substantial improvements to accuracy relative to previous approaches, like EO-only determined thresholds. Furthermore, using MODIS or any optical sensor only to map floods can cause large underestimations in inundation due to cloud cover.
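For comparison, the handcrafted MNDWI rule amounts to only a few lines. The sketch below assumes the green and shortwave-infrared surface reflectance bands have already been read into arrays; the band choice and array names are illustrative.

import numpy as np

def mndwi_water_mask(green: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Flag pixels as water where MNDWI = (green - swir) / (green + swir) > 0."""
    green = green.astype("float64")
    swir = swir.astype("float64")
    denom = green + swir
    mndwi = np.divide(green - swir, denom, out=np.zeros_like(denom), where=denom != 0)
    return mndwi > 0.0  # True = water, False = land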
Sentinel-1 provides a number of improvements over MODIS and the Dartmouth data that address these two issues (low resolution and underestimation). Specifically, Sentinel-1 comes at a spatial resolution of 10m and uses a synthetic aperture radar (SAR) sensor, which allows it to penetrate cloud cover. The limitation to Sentinel-1, for our purposes, is that data exists only back to 2017. To overcome the lack of historic Sentinel-1 data, we train a Convolutional Neural Network – Long Short-Term Memory (CNN–LSTM) model on Sentinel-1 fractional flood data to predict historic MODIS satellite data and then project back into the past, prior to the launch of Sentinel-1. The method is described in detail in <cit.> and the workflow is summarized in Figure <ref>.
To detect water in Sentinel-1 data we use a dynamic thresholding algorithm tailored to Bangladesh by <cit.>, which produces a binary flood indicator for the 10m pixel. This is upscaled to create a nearly continuous fractional index at 500m resolution, which matches the resolution in MODIS's eight-day composite images. We then feed this Sentinel-1 derived fractional index, along with hydrologically relevant indices (elevation, slope, and height above nearest drainage), and MODIS 500m eight-day composite data into a CNN-LSTM to predict fractional flood area back to 2002. Relative to other deep learning frameworks, the CNN-LSTM is able to exploit the spatial and temporal structure of the data. The data (features) from a single time step are fed into a first CNN, which produces a single pixel of spatially contextualized data. This is repeated for each time step (t) to produce a time series of length T for that pixel. We then feed a time series of length T-1 into the LSTM to produce a single temporal value. The output of the LSTM is then combined with the output of the CNN for time T and fed into a second CNN to produce a fractional value for the extent of flooding in a given pixel for a given time step. As with the RF model for rice area, the CNN-LSTM is trained using leave-one-out cross validation, where a time step is withheld for validation one at a time. <cit.> shows that this method produces accuracy levels comparable to existing flood detection methods using MODIS data and previous methods like MNDWI. And, in our case, our model is accurate 90% of the time.
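For readers who want a sense of the architecture, the sketch below (in PyTorch) mirrors the structure described in the text: a shared per-time-step CNN that spatially contextualizes a patch around each pixel, an LSTM over the first T-1 encoded steps, and a second stage that fuses the temporal summary with the encoding at time T to output a fractional flood value. Channel counts, kernel sizes, patch size, and the use of a small dense head in place of the second-stage CNN are illustrative assumptions, not the specification used in the paper.

import torch
import torch.nn as nn

class CNNLSTMFlood(nn.Module):
    """Sketch of a CNN-LSTM mapping a patch time series to a fractional flood
    value for the centre pixel. Input shape: (batch, T, C, H, W)."""

    def __init__(self, in_channels: int = 8, cnn_features: int = 32, lstm_hidden: int = 64):
        super().__init__()
        # Per-time-step CNN: spatially contextualize the patch into one vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, cnn_features, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(cnn_features, cnn_features, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch*T, cnn_features, 1, 1)
            nn.Flatten(),             # -> (batch*T, cnn_features)
        )
        # LSTM over the encoded features of the first T-1 time steps.
        self.lstm = nn.LSTM(input_size=cnn_features, hidden_size=lstm_hidden, batch_first=True)
        # Fuse the temporal summary with the encoding at time T.
        self.head = nn.Sequential(
            nn.Linear(lstm_hidden + cnn_features, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),  # fractional flood value in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, T, C, H, W = x.shape
        # Encode every time step with the shared CNN.
        feats = self.encoder(x.reshape(batch * T, C, H, W)).reshape(batch, T, -1)
        # Temporal summary of steps 1..T-1.
        _, (h_n, _) = self.lstm(feats[:, :-1, :])
        fused = torch.cat([h_n[-1], feats[:, -1, :]], dim=1)
        return self.head(fused).squeeze(1)

# Example: a batch of 4 patch time series, 12 time steps, 8 input bands, 9x9 pixels.
model = CNNLSTMFlood()
fraction = model(torch.rand(4, 12, 8, 9, 9))  # -> tensor of shape (4,)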
§.§.§ Creation of District-Level EO Data Set
The creation of the district-level panel data follows the workflow summarized in Figure <ref>. We start by using the rice area maps for Aman (wet) season for each year to mask the flood maps, so that we only consider inundation levels for rice area. We also use the rice maps to mask EVI data from MODIS so that we only consider EVI values for rice area.
Given the resolution of the data it is not possible to measure EVI on a rice field for use in predicting yields as in <cit.>. The rice maps exist at a spatial resolution of 250m while the mean size of a rice plot in the RMS is 0.68 ha. Nor is it possible to measure the exact, agronomically relevant inundation of a rice field, given that the flood maps exist at 500m resolution. To overcome this data limitation, we calculate a variety of EVI and flood metrics to ensure our proxies for relevant yield and flooding are robust. For EVI, we calculate the maximum value of EVI for the season, the minimum, and the mean and median values. All of these metrics turn out to be highly correlated with each other, with correlation coefficients ranging from 0.75 up to 0.99, depending on the specific comparison. Because of this, all of the analysis in the paper is based on our preferred metric, the maximum value of EVI. Results are strongly robust to the use of any of the other metrics.
For flood, we calculate four metrics: 1) cumulative flooding, measured as the sum of the weekly fractional pixel values for the growing season (June to December) in a given year; 2) the maximum fractional value for the season; as well as the 3) mean and 4) median values. Unlike with the EVI metrics, there is more variation in the flood data, meaning the flood metrics are not as highly correlated (though the mean and median are highly correlated). As one will see, our results are generally robust to the choice of flood measure, though this is not always the case. Because of this, throughout the paper we present results using all four flood metrics.
Having derived our measures of EVI and flooding from the remote sensing time series, we then take the average value of each metric for each pixel within a district. This provides us with district-level values for each of our EO-based metrics. We then combine this with the administrative data on seed dissemination. From IRRI and the Bangladesh Rice Research Institute (BRRI), we developed an exhaustive list of all seed production organizations, both private and public. While both IRRI and BRRI have been instrumental in developing submergence tolerant seed varieties, neither is engaged in seed multiplication, which is conducted by a number of public and private entities. Investment in the seed system helped promote and disseminate STRVs from 2010 through 2019. We visited each seed producer/distributor and obtained disaggregate (district) level data on 1) the STRV variety names being multiplied or sold/distributed in the district, 2) the amount (tons) of each variety being multiplied in the district, and 3) the amount (tons) of each variety being sold/distributed in the district.[Private organizations sell seed while public organizations may sell the seed or distribute it for free.] The administrative data provides a proxy for the availability of STRV seeds in each district in each year starting in 2010. A limitation of this data is that farmers may recycle seed or obtain seed from outside their district. We use the cumulative amount of seed available in each district in each year to account for the growing availability of STRVs.
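A stylized version of this aggregation step, assuming the masked pixel-week values have already been exported to a long table (the file and column names are hypothetical), is:

import pandas as pd

# One row per (pixel, year, week): fractional flood value and EVI for
# Aman-season rice pixels, with the district each pixel falls in.
px = pd.read_parquet("rice_pixels_weekly.parquet")  # hypothetical file
season = px[(px["week"] >= 23) & (px["week"] <= 52)]  # roughly June-December

pixel_metrics = season.groupby(["district", "year", "pixel_id"]).agg(
    flood_cum=("flood_frac", "sum"),
    flood_max=("flood_frac", "max"),
    flood_mean=("flood_frac", "mean"),
    flood_median=("flood_frac", "median"),
    evi_max=("evi", "max"),
)

# District-year panel: average the pixel-level metrics within each district,
# then attach the administrative data on cumulative STRV seed availability.
district_panel = pixel_metrics.groupby(["district", "year"]).mean().reset_index()
seed = pd.read_csv("strv_seed_by_district_year.csv")  # hypothetical file
panel = district_panel.merge(seed, on=["district", "year"], how="left")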
§.§ Household Survey Data
To investigate the impact of adoption of STRVs on household welfare, we use data from the Rice Monitoring Survey (RMS). The RMS is a BMGF project designed to capture varietal turnover over time in Bangladesh, India, and Nepal. The data was originally collected as two waves of a panel in 2014 and 2017 <cit.>. Households were selected following a clustered random sampling procedure to ensure the overall survey was representative of rice growing regions of each country. The plan was to collect a third wave in 2020, but data collection efforts were delayed due to COVID until 2022 and only Bangladesh was revisited. In total, 1,500 households were part of the initial sampling frame in Bangladesh. The RMS was able to follow up with 1,490 households in 2017 and 1,484 households in 2022. This gives us an attrition rate of 1.7% over a span of eight years.
As the RMS was designed to monitor varietal turnover, the survey instrument is heavily focused on collecting information on rice production at the crop and plot level. While the data is a panel of households, limited information was collected about household well-being. These include demographics, asset ownership, membership with different farming organizations, farmer opinion on the prevalence of different stresses, such as flood, drought, and salinity, and the availability of different rice varieties. No information was collected on consumption, income, or other measures of welfare, such as food security or women's empowerment.
In terms of production data, the survey collected information by both plot and crop so as to be able to distinguish between different varieties of rice grown on the same plot. Plot data includes information on plot size, ownership, and land type. Crop data, within a plot, includes which rice variety was planted in which season, methods of planting, damage of crops by different abiotic shocks, input use, total production, and disposition of the harvest.
We construct an unbalanced panel of households that cultivated rice in Aman season. This gives us a total of 3,865 observations from 1,488 households. Our primary outcome of interest is rice yield at the household level, which we construct by summing up rice harvest (kg) on all plots and dividing by the sum of area (ha) for all rice plots. We consider a household as having adopted STRVs if they report planting any flood tolerant rice variety on at least one plot. To measure the incidence of flooding faced by the household, we combine household GPS location with the EO flood maps we generate.[While we would ideally use plot locations to match with flood data, only household GPS locations were collected in 2014 and 2017. Thus, we only know the precise location of plots in 2022.] We then calculate five measures of flooding for each survey year: cumulative, maximum, mean, median, and a measure of the flood experienced by the village. Our village measure includes cumulative flooding for all households in the village, minus the household in question. We also construct flood time series for each household going back to 2002. In some specifications we use the household's historic experience with flooding to calculate the probability of a household experiencing a flood in a given year as an instrument for STRV adoption.
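The household-level flood measures, including the leave-one-out village measure, can be constructed along the following lines; the sketch assumes weekly flood fractions have already been extracted at each household's GPS location, and all names are illustrative.

import pandas as pd

# One row per (household, year, week): flood fraction at the household's
# GPS location, plus village and household identifiers.
hh = pd.read_parquet("household_weekly_flood.parquet")  # hypothetical file

flood = hh.groupby(["village", "hhid", "year"]).agg(
    flood_cum=("flood_frac", "sum"),
    flood_max=("flood_frac", "max"),
    flood_mean=("flood_frac", "mean"),
    flood_median=("flood_frac", "median"),
).reset_index()

# Leave-one-out village measure: cumulative flooding of all other households
# in the same village and year.
village_total = flood.groupby(["village", "year"])["flood_cum"].transform("sum")
flood["flood_village"] = village_total - flood["flood_cum"]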
§.§ Data Limitations
Before moving on to our empirical method, it is important to note and discuss the data limitations in the study. In an ideal situation, researchers would have collected baseline data from rice growing households prior to the release of STRV seeds and would have followed up with these households periodically over the years. Unfortunately, no such panel data with a pre-release baseline exists. This motivates our use of EO data to reconstruct how the EVI signal responds to flooding pre-STRV release and then to examine how that signal changes post release.
In terms of ideal EO data, one would like data from a sensor that is at both a high spatial and temporal resolution and that can penetrate clouds. Additionally, one would like to have ground truth data in terms of both flooding and rice area/yields across both space and time. As highlighted in Section <ref>, such data does not exist, mainly because high resolution, cloud penetrating EO sensors did not exist in the public prior to the introduction of STRVs. Thus, there is no way to construct a baseline at both a high spatial and temporal resolution. Additionally, baseline ground truth data do not exist because no baseline survey was conducted. This poverty of ideal data motivates our approach to constructing rice area maps using Google Earth images and flood maps using a fusion model.
In developing our approach, we prioritized products with a high temporal resolution (MODIS) at the expense of products with a high spatial resolution (Landsat) because of the importance of capturing transitory flood events and EVI values immediately prior to harvest. We are aware of this trade-off and the limitation it creates: our EO data set is constructed at 500m, which corresponds to an area of 25 ha. In a cropping system like rice in Bangladesh, where the average rice area for a household is less than one ha, this spatial resolution is not ideal. However, considering the types of measurement error introduced by the trade-off leads us to believe that prioritizing temporal over spatial resolution is the preferred approach. We have explored conducting the analysis using Landsat 30m data. However, Landsat's temporal resolution is 16 days, meaning it frequently misses both known flood events and EVI readings at harvest. Given this temporal resolution, it is possible to undercount by missing a flood or harvest but it is not possible to overcount. This means that the measurement error introduced is non-classical and while the direction of the bias is known, the size of the bias is unknown. By contrast, the relatively coarse spatial resolution of MODIS, combined with the ground truth data and our machine learning classification approach, means that we may misallocate a pixel to rice or no-rice and to flood or no-flood, but that misallocation is likely random. This means the measurement error introduced should be classical - increasing the variance or noise in the data but not introducing any systematic bias. Essentially, the spatial fuzziness of MODIS allows us to make guided inference on whether or not a pixel is rice/non-rice or flooded/not-flooded, while the missing temporal images of Landsat do not allow for inference about what happened in those missing pixels. This spatial fuzziness can be measured in terms of accuracy metrics for our classification algorithms (RF and CNN-LSTM), which are in the range of 80-90% (see Online Appendix <ref> for more details).
In terms of data limitations in the RMS, as adoption of STRVs happens on a plot-by-plot and crop-by-crop basis, the ideal data would be plot-level panel data. Additionally, one would want historic and contemporaneous flood data at a high enough spatial and temporal resolution to measure flooding on that specific plot and not a neighboring plot. Unfortunately, in the RMS, it is not possible to track plots over time. Plot dimensions shift from year to year and farmers buy, sell, and rent plots from season to season. Additionally, GPS coordinates for plots were not collected as part of the RMS in 2014 and 2017, so we do not know the exact locations of plots in those years. Even if we did, as we do in 2022, the EO flood data are not at a high enough spatial resolution to measure flooding on one specific plot. We do, however, as a robustness check, conduct our analysis on the pooled plot data and compare these results to those from the household panel.
Finally, we also lack objectively measured harvest information in the EO data and both harvest and rice area in the RMS data. In the EO data, we use EVI as a proxy for harvest, though this approach is not without its critics. Much of the research demonstrates a strong correlation between EVI, and particularly peak EVI, and yields <cit.>. However, the accuracy of using EVI as a proxy for yields varies substantially by location <cit.> and crop, with EVI performing particularly poorly for rice in the U.S. <cit.>. For rice in Bangladesh, the correlation between EO-derived metrics and yield can vary from 55-91% <cit.>. However, <cit.> relied on government-reported area yield statistics, a notoriously unreliable source of accurate and objective measures of yield in low-income countries <cit.>, where statistical departments are often under-resourced and politicized. We are unaware of tests of the accuracy of EVI against rice yields in Bangladesh that rely on objective measures of yield.
In the household survey data, we are forced to rely on self-reported measures of rice area and harvest. Since <cit.>, it has become standard to use handheld GPS devices to calculate land area for at least some plots. And since <cit.>, it has become common to conduct crop-cuts on at least some plots. While we did conduct GPS measures of plots in the 2022 round, these methods were not common when the first two rounds of RMS data were collected. As <cit.> shows, correcting for biased self-reporting in one measure (land area or harvest weight) but not the other can actually aggravate the bias; because we have objectively measured land area for only one year and lack objectively measured harvest in all years, we rely solely on the self-reported values in all years.
While these data limitations are not unique to our study, and in fact would be present in any attempt to measure impact of an agricultural technology introduced over a decade ago, they should be kept in mind when interpreting our results.
§ RESEARCH DESIGN
§.§ Pre-Analysis Plan
In developing the following empirical approach, we pre-specified our analysis and registered it with the Open Science Framework (OSF) <cit.>.[Note that the PAP was registered on 9 February 2023. While in this paper we refer to the third wave of panel data as 2022, this is because we refer to all cropping season data by the year of planting, not the year of harvest. Thus data from 2014, 2017, and 2022 all come from the Aman season in which planting occurred in 2014, 2017, or 2022 and harvest occurred at the end of that calendar year, extending into the subsequent calendar year. Data collection for 2022 was not completed until 12 March 2023. Timestamps on the data are available for verification.] While pre-analysis plans (PAPs) are common in lab and field experiments, they remain relatively uncommon for observational studies. However, <cit.> argue that any study collecting new data, regardless of the type of data, or trying to establish causal effects should develop and register a PAP. They point out that the very first PAP in economics was observational in nature and was created by <cit.>, who was studying the impacts of changes to the minimum wage on employment - a particularly contentious issue - and one in which a PAP could be developed ahead of an announced wage increase.
Our study, though observational, is well suited to a PAP given that the research relies on a newly collected round of data and uses DID and IV methods to establish causal effects of STRV adoption. In addition to preventing the p-hacking that <cit.> shows can be common to DID and IV methods, the plan also helps limit researcher degrees of freedom - the ability of researchers to make many reasonable or justifiable decisions during the data cleaning and analysis process. In our context, it is unclear what the best EO proxy is for agronomically relevant flooding or for yield. It is also unclear what the best IV for adoption might be. There are also a large number of reasonable criteria for deciding how to deal with outliers, how to convert, scale, and aggregate data, and how to deal with missing values. In the absence of a PAP, a researcher might be tempted to try many alternative options in these domains and then only present the choices that result in strongly significant results. Our PAP ties our hands so that we can avoid the temptation to selectively report results favorable to our hypothesis.
Throughout the results section, where data definitions, approaches, or inference criteria differ from our plan, we highlight these differences. The most substantial deviation from the PAP is that we initially planned to measure flooding as the total number of distinct flood events. Since we filed our PAP, the research team developed the fusion model described in <cit.> that provides a fractional index of inundation in a pixel. This new approach provides much greater opportunity to develop more precise measures of flooding than the original binary indicator of flooding on a given day. Following advice in <cit.> and <cit.> to not stick to an inferior pre-specified method when a new, superior method becomes available, we adopted the fractional flood index data instead of the binary data. To help alleviate concerns that we selectively report a new flood measure that is particularly favorable to our hypotheses, we report results using four different flood measures based on the new data.
§.§ Empirical Method
Our empirical method follows that laid out in <cit.> to estimate the impact of Green Revolution technologies on yields. We first estimate the effects of the availability of STRV seeds at the district level on relative rice yields, as proxied by EVI, with and without flooding. Our empirical strategy relies on the release dates of various varieties and the staggered accumulation of seed in each district, which we argue is plausibly exogenous to rice yields. We then estimate the effects of adoption of STRV seed by a household on relative rice yields with and without flooding. We take a variety of approaches to address the endogeneity of the adoption decision. These include a fixed effects within estimator as well as instrumental variables based on the probability that a household would experience a flood. We enrich these results by also estimating the impact of adoption using the RMS cross-sectional plot-level data.
§.§.§ EO District-Level Framework
We estimate the effects of STRVs on EVI, as a proxy for yields, using annual data at the district level. We start by estimating a simple event study model:
EVI_dt = ∑_j ∈ Tα_j·1_t^t = τ + j + δrice + ϕflood + μ_d + μ_t + ϵ_dt
where d indexes districts and t indexes time. The two terms μ_d and μ_t denote district and time fixed effects, meaning that only within-district time variation in relative EVI remains. The district fixed effects control for all district-specific time-invariant variation and the time fixed effects control for all time-specific district-invariant variation. We also include controls for the percentage of a district that is under rice cultivation and the cumulative flooding in the district, both of which vary by district and time.
We expect the introduction of STRVs in a district to increase EVI relative to that district's EVI before the introduction. To capture this effect in the regression, we include an indicator function 1_t^t = τ + j that takes a value of one j years after the introduction of the first STRV, which we denote as τ. As ϵ_dt captures district-specific trends in relative yields, α_j measures by how much the relative yield in the average district has changed j years after the introduction of STRVs relative to the benchmark year. We set the benchmark year in each district as the year immediately prior to the introduction of STRVs in that district. Our hypothesis is that α_j > 0 after the introduction of STRVs and that α_j = 0 before the introduction of STRVs.
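As an illustration of how such an event-study specification could be estimated, the sketch below uses statsmodels on a hypothetical district-year data frame `panel` with columns evi, rice, flood, district, year, and event_time (years relative to first STRV availability, with -1 as the omitted benchmark); the variable names and software choice are assumptions rather than a description of our actual estimation code.

```python
import statsmodels.formula.api as smf

# Hypothetical district-year panel with columns: evi, rice, flood, district,
# year, and event_time (years relative to first STRV availability).
model = smf.ols(
    "evi ~ C(event_time, Treatment(reference=-1)) + rice + flood"
    " + C(district) + C(year)",
    data=panel,
)
# Cluster standard errors at the district level, the unit of observation.
result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["district"]})
print(result.summary())
```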
There are two shortcomings to the event study approach. One is that STRVs reach different districts in different years. It is plausible that STRV seeds reached more flood-prone districts, or districts with more advanced farming techniques, earlier than other districts. If these characteristics are time-invariant, our district fixed effects will pick them up. However, one can imagine situations where time-variant, district-specific events accelerated the arrival of STRV seed. One such situation would be if public or private seed multipliers rushed STRV seed to a district in the year immediately following an unexpected flood event. While we explicitly control for contemporaneous flooding, flood events in the recent past might make the year a district got STRVs no longer exogenous. A second shortcoming is that STRVs are likely to increase EVI only very slightly over time, as a change in EVI would only occur when there was a flood lasting more than seven but no more than 17 days. If a district did not experience substantial flooding in a year after STRVs were released, we would not expect any change in EVI. Thus, the event study will underestimate the effects of STRV release dates on EVI, as STRVs will only affect EVI if there was a flood.
To address these two issues, we next estimate a simple DID model:
EVI_dt = β( 1_dt^2010· 1_dt^flood)+ ρ 1_dt^2010 + γ 1_dt^flood + δrice + μ_d + μ_t + ϵ_dt
where 1_dt^2010 is an indicator equal to one in the years after the first introduction of STRVs in 2010 and zero otherwise. Similarly, 1_dt^flood equals one in districts that are prone to flooding and zero in districts that are not prone to flooding. To define a district as flood-prone or not, we first calculate the median amount of flooding across all districts and all years. We then calculate the median flood value in each district across all years. We categorize a district as flood-prone if its median value is above the median value in all the data. The remaining terms are as defined in Equation (<ref>).[Note that because we have converted both adoption and flood to binary indicators that do not vary over time, our DID estimator is not subject to the recently exposed potential biases of DID in complicated settings. For a summary of what settings complicate DID estimation and the numerous ways to correct for bias, see <cit.>.]
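The flood-prone classification just described can be illustrated with a short pandas sketch; the data frame `panel` and its column names are hypothetical stand-ins for our district-year data.

```python
import pandas as pd

# `panel` is a hypothetical district-year data frame with a flood column.
overall_median = panel["flood"].median()                       # all districts, all years
district_median = panel.groupby("district")["flood"].median()  # one value per district

# Flood-prone = the district's across-year median exceeds the overall median.
panel["flood_prone"] = (panel["district"].map(district_median) > overall_median).astype(int)
panel["post2010"] = (panel["year"] >= 2010).astype(int)
panel["did"] = panel["flood_prone"] * panel["post2010"]        # DID interaction term
```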
We expect EVI in flood-prone districts to increase after 2010, which was when STRVs first became available in Bangladesh. In this set-up, there are no pure control districts. Rather, we compare the difference in the differences that exist between flood-prone and not and pre/post 2010 to establish a counterfactual. Our hypothesis is that β > 0 while ρ = 0 and γ < 0. The main identifying assumption is that if STRVs had not been released, yields in flood-prone districts would have followed the same trend as yields in non-flood-prone districts. Obviously, flood-prone districts are likely to have lower yields than non-flood-prone districts. Thus we assume a mean shift in yields, as without STRVs mean yields are likely to be lower in flood-prone districts than in districts that are not prone to flooding, though the existence of parallel trends remains unobservable.
As with the event study set-up, the DID model is not without its shortcomings. In particular, in the DID model we assume that farmers in all districts have access to STRVs starting in 2010. This assumption avoids endogeneity concerns about the targeting of certain districts for seed but, based on the data, we know that STRVs were not universally available in 2010. In fact, in our administrative data, only one district had seed locally produced and distributed that early. And in the RMS data, adoption in 2014 was at eight percent. This suggests that, like any agricultural technology, roll out, dissemination, and adoption were gradual and took time. The end result is that our use of a dummy for seed release in 2010, while exogenous, likely overestimates the number of districts “treated” with STRV seed. A second shortcoming is that while a relative measure like flood-prone might drive adoption of STRVs and impact EVI, it is a coarse measure. It is also subjective and may not reflect agronomically important flood events.
Our third approach to modeling the district-level effects of STRV availability is a two-way fixed effects (TWFE) approach that seeks to address the shortcomings of the event study and DID approaches. It explicitly accounts for seed availability and actual flooding in the district. The TWFE equation is essentially the same as the DID:
EVI_dt = β(seed_dt·flood_dt) + ρseed_dt + γflood_dt + δrice + μ_d + μ_t + ϵ_dt.
One main difference is that the TWFE approach uses administrative data on the cumulative seed available in each district in each year instead of a binary indicator for whether the observation comes before or after 2010. The other main difference is that we use EO data on actual flooding in a given year instead of a categorical variable for whether the district is flood-prone. All other variables are the same as in the DID model.
The benefit of the TWFE approach is that it uses available data on seed availability and flooding. In the event study, we controlled for flooding, but our variable of interest (α_j) measured changes in EVI in each year after seeds were introduced, not changes in EVI when seeds were introduced and flooding occurred. This likely underestimates the effect of STRVs. In the DID model, we interact indicators for flooding and the introduction of STRVs, but our variable of interest (β) treats all districts as having equal access to seeds in every year since 2010. This also likely underestimates the effect of STRVs. Our TWFE model addresses these issues, but it comes with the caveat that if seed availability is a function of time-varying, district-level events, it is endogenous. Recall that we control for time-invariant district characteristics, like soil quality and climate, as well as district-invariant time events, like changes to national policy. So, the administrative seed data will only be endogenous if dissemination changed based on some time-varying, district-level event, such as targeting a district for increased dissemination in the year after an unexpectedly severe flood.
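As a sketch of how the TWFE specification with the seed-by-flood interaction could be estimated, the code below uses the linearmodels package; the data frame and variable names are assumptions made for illustration, not our production code.

```python
from linearmodels.panel import PanelOLS

# Hypothetical district-year panel; `seed` is cumulative STRV seed available
# and `flood` is the EO flood measure for that district-year.
df = panel.set_index(["district", "year"])
twfe = PanelOLS.from_formula(
    "evi ~ seed * flood + rice + EntityEffects + TimeEffects", data=df
)
res = twfe.fit(cov_type="clustered", cluster_entity=True)
print(res.summary)
```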
Our overall approach to estimating the long-term, large-scale effects of STRVs in Bangladesh is to balance the richness of the EO and administrative data with concerns about endogeneity. The event study and DID approaches minimize concerns about endogeneity by using coarser indicators for the year STRVs were approved in Bangladesh, a clearly exogenous event. The TWFE approach uses data on actual flooding and seed availability, the latter of which may be endogenous. By presenting all three approaches we attempt to provide a robust picture of the causal effects of STRVs in Bangladesh.[Only the DID and TWFE approaches were pre-specified. The event study approach was suggested by participants at a seminar presentation.]
§.§.§ RMS Household-Level Framework
Similar to our district-level framework, we estimate the effects of STRV adoption by a household on rice yields using a TWFE model:
lnyield_it = β(STRV_it·flood_it) + ρSTRV_it + γflood_it + δland + μ_i + μ_t + ϵ_it
where i indexes households and t indexes time. The two terms μ_i and μ_t denote household and time fixed effects, meaning that only within-household time variation in relative yields remains. The household fixed effects control for all household-specific time-invariant variation and the time fixed effects control for all time-specific household-invariant variation. We also include a control for the amount of land farmed by a household in a given year (land), which, analogous to our district control for rice area, varies by household and time.
In terms of identification, model (<ref>), like model (<ref>), relies on household and time fixed effects to identify changes in a household's yields resulting from a change in the household's adoption status. The model is identified only if the adoption decision is not driven by household-specific, time-varying events. We believe that adoption is plausibly exogenous and primarily driven either by time-invariant household characteristics like risk preferences and farming skill or by the availability of seed, which changes over time but is not unique to any one household. That said, there may be household-specific, time-varying events, like recent flood experience, that make adoption endogenous in the current model.
To address this endogeneity concern, we estimate a TWFE model with instrumental variables (TWFE-IV). Note that the potentially endogenous variable, STRV, shows up twice in Equation (<ref>): once as a linear term and once interacted with the exogenous flood variable. For instruments, we use lagged values of flooding, and their interactions, for each household in order to calculate the probability that a household experiences flooding in each year of the panel. To instrument for the interaction between STRV and flooding, we use the interaction of lagged flood experience with the contemporaneous flood variable, which being exogenous, serves as an instrument for itself.
The first-stage equation, for either STRV or its interaction, is:
STRV_it = ∑_j ∈ T ψ_j flood_it^t = τ - j + ∑_j ∈ T ∑_k ∈ T ξ_jk ( flood_it^t = τ - j × flood_it^t = τ - k )
+ ∑_j ∈ T ζ_j ( flood_it^t = τ - j · flood_it ) + γ flood_it + δ land + μ_i + μ_t + ν_it.
The first summation captures the lagged values of household flooding, with j representing the number of years of the lag from the panel year, which we denote τ. We use 12 lagged years, so that the lagged flood data for adoption in 2014 cover flooding experience from 2002 to 2013, while the lagged flood data for adoption in 2022 cover flooding experience from 2010 to 2021. We choose 12 years because 2002 is the earliest year for which we have EO flood data. The next summation captures all interaction effects between the different lags. As we do not know exactly what type of flooding is most relevant for a household's decision to adopt, nor do we know how long a past flood may remain relevant for the adoption decision, our goal with the interaction terms is to saturate the model by accounting for as much of the mean and covariance of past flood events as possible. Note that ξ_jk captures both the interaction effects from different years (j ≠ k) and the squared value of a flood (j = k), so as to provide greater weight to extreme flood events in influencing the adoption decision. The final summation term instruments for the interaction between adoption and contemporaneous flooding by using the lagged values of flood as the instrument for adoption and interacting them with contemporaneous flooding, which serves as an instrument for itself. All other terms in the equation are exogenous and serve as their own instruments.
The first-stage equation attempts to account for all relevant past data on flooding, based on the meteorological assumption that these past events predict the probability of flooding in the current year. The behavioral assumption underlying our instruments is that households that are more likely to experience flooding in a given year are more likely to adopt STRVs. Beyond the assumption about the relevance of the IVs, these instruments also need to satisfy the exclusion restriction. We argue that past flooding will not affect current yields, conditional on the other controls, except through the choice of a household to adopt STRVs.[Instruments must also satisfy the stable unit treatment value assumption (SUTVA). While flooding is clearly covariate across space, SUTVA only requires that the effects of floods be stable across units. So, if one household experiences a certain level of flooding, its neighbor, who experienced the same level of flooding, will make the same decision and experience the same outcome as the first household.]
After estimating Equation (<ref>), we then estimate the second stage of the TWFE-IV equation:
lnyield_it = β(STRV_it·flood_it) + ρSTRV_it + γflood_it + δland + μ_i + μ_t + ϵ_it,
where STRV_it is the predicted value from the first-stage regression. All other terms are as previously defined. The TWFE-IV model measures the causal effects of a change in yield due to the adoption of STRVs when a household experiences flooding. It controls for time- and household-invariant factors and the instruments control for potential endogeneity in the adoption decision by controlling for household-specific, time-variant unobservables.
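To make the instrument construction concrete, the sketch below builds the 12 lagged flood variables' pairwise interactions and their interactions with contemporaneous flooding, and passes them to a 2SLS estimator. The household data frame `hh`, its column names, and the omission of household fixed effects (which would be removed by within-demeaning in practice) are simplifying assumptions for illustration only.

```python
from itertools import combinations_with_replacement
from linearmodels.iv import IV2SLS

# Hypothetical household-year panel `hh` with ln_yield, strv, flood, land,
# and flood_lag1 ... flood_lag12 built from the 2002-2021 EO series.
lags = [f"flood_lag{j}" for j in range(1, 13)]

# Pairwise (and squared) interactions of the lagged floods.
for a, b in combinations_with_replacement(lags, 2):
    hh[f"{a}_x_{b}"] = hh[a] * hh[b]
# Lagged floods interacted with contemporaneous flood: instruments for STRV x flood.
for a in lags:
    hh[f"{a}_x_flood"] = hh[a] * hh["flood"]

hh["strv_x_flood"] = hh["strv"] * hh["flood"]
instruments = lags + [c for c in hh.columns if "_x_" in c and not c.startswith("strv")]

# Household fixed effects would be removed by within-demeaning in practice;
# for brevity this sketch only includes year dummies.
formula = (
    "ln_yield ~ 1 + flood + land + C(year) + [strv + strv_x_flood ~ "
    + " + ".join(instruments) + "]"
)
iv = IV2SLS.from_formula(formula, data=hh)
res = iv.fit(cov_type="clustered", clusters=hh["hh_id"])
print(res.summary)
```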
In all models we cluster standard errors at the level of the unit of observation. In our results, we present point estimates on the coefficients of interest as well as 95% confidence intervals calculated from the clustered standard errors.
§ RESULTS
In the following sections we discuss the results of our two levels of analysis. We begin by summarizing descriptive evidence of time trends in the EO data before moving on to discussing results from the event study, DID, and TWFE models. We then shift focus to the household survey data to see if it confirms results from the long-term, large-scale EO analysis. Again we begin by summarizing descriptive evidence and then move to discussing the econometric results. We then briefly summarize a set of robustness checks, with detailed discussion relegated to Online Appendix <ref> and <ref>. We conclude with a discussion of why our results fail to replicate existing evidence on the impact of STRVs and try to reconcile the differences with a set of exploratory analysis.
§.§ District-Level Outcomes
§.§.§ Descriptive Evidence
We begin by examining the increase in seed availability over time. Figure <ref> aggregates data from the 64 districts up to the division level. We see that until 2013 there was almost no STRV seed available in any district. Starting in 2014 availability in all divisions began to grow but especially in Rangpur. Availability continued to grow across the country until 2019, when growth began to slow or plateau, except in Khulna. This window of rapid expansion in the availability of STRV coincides with international investment in the seed system to expand multiplication and dissemination of new seed varieties.
Overall, the availability of seeds is clearly heterogeneous across space, with flood-prone regions of the country having greater need for and thus greater access to the seeds. That dissemination is not random could create concern regarding the exogeneity of seed availability. However, if seed availability differs across districts based on how flood-prone a district is, this will be differenced out by our within estimators. Only if dissemination responded to year-to-year changes in districts should we be concerned about endogeneity. Luckily, the growth trajectories in each division are fairly stable, though we do see some shifts in slopes that could be evidence of reallocation of seed in response to a district-specific event. Our inability to conclusively rule out endogeneity in the seed data is why we also present estimators that do not rely on these data.
Next we turn to trends in rice area per district and EVI values (see Figure <ref>). In Panel A we plot the share of each district that was planted to rice based on our EO generated rice maps. Along the horizontal axis, we plot these values relative to the year that STRVs were first available in the district, with event year -1 being the year immediately prior to introduction. To this data we fit non-parametric regressions both before and after the date of introduction and include 95% confidence intervals. For rice area, our concern is that households would start to cultivate more flood-prone land once they had STRVs. If more flood-prone land was brought into cultivation after STRVs, that could create a downward bias in estimates of the effect of STRVs as this land would have a greater probability of experiencing prolonged floods that even STRVs could not withstand. Examining the data, there is a slight upward trend in the share of rice area in Bangladesh, though there is no substantial change in that trend before and after STRVs became available in that district. This suggests STRVs have not increased the area planted to rice nor induced expansion of rice area into previously unused flood-prone land.
In Panel B, we also see a steady increase in EVI values over time. There does appear to be a change in the growth trend after STRVs became available, with the growth rate slowing or flattening. This is surprising because our hypothesis is that STRVs should reduce flood damage to rice, which would manifest as an increase in EVI after introduction. One possible explanation is that the slope of the trend line is flattened by early “adopter” districts (those that have had STRV seed for more than five years) and these districts are the most flood-prone. These districts may have below-average EVI values regardless of STRV status because of their propensity to experience flooding. So, while the seeds impact EVI for a given district, the average effect on EVI is muted because the most flood-prone districts get “treated” first. Regardless, the descriptive evidence does not show the increase in EVI following the introduction of STRVs that we would have expected.
Figure <ref> provides evidence on the change in our flood metrics before and after STRV seed became available in the districts. Again, we plot district values and then fit a non-parametric regression to both the before and after data. While each metric has a different level value or intercept, they all have a similar downward slope. There does not appear to be a substantial change in the slope from before introduction to after. It might be surprising that the extent of flooding is decreasing over time, given the increased prevalence of catastrophic weather events due to climate change. However, event years do not correspond to calendar years, so in a year with substantial flooding, like 2017, the flood values of districts from that year might end up as event year -1 or event year 5. A second reason is that, since the early 2000s, Bangladesh has made substantial improvements to an embankment system that has helped reduce the area of flooding even when streamflows are extremely high <cit.>.
Summarizing the descriptive evidence, STRVs only became widely available in most districts in 2015, five years after their introduction. Since then, availability has increased substantially in each division, though there is spatial heterogeneity in which regions have greater availability. Beyond that, rice area, EVI, and flood trends in the EO district data are mild. There has been little change in the area planted to rice across the 20-year study period. There has been a slight increase in the value of EVI and a slight decrease in flooding, but neither shows a dramatic change in trend after the introduction of STRVs. We conclude that there is little descriptive evidence for the impact of STRVs at a national scale, but also that there do not appear to be confounding factors that might bias our estimates of STRV impacts upward or downward.
§.§.§ Econometric Evidence
We implement three econometric approaches to establishing the long-term, large-scale impacts of the introduction of STRVs on rice production. These are an event study regression, a DID specification, and a two way fixed effect estimator. Each approach has its strengths and weaknesses, so we rely on the overall preponderance of evidence from the three methods in drawing conclusions.
Figure <ref> reports the results of the event study regression.[The event study regression was not pre-specified and was suggested to us by seminar participants.] Using administrative data on the date that a district first had access to STRV seed, we estimate the change in that district's EVI value before and after the introduction date. As we are looking at changes in district-wide EVI values, instead of changes in EVI values where flooding occurred, our event study is likely an underestimate of the effectiveness of STRVs in preventing flood damage. The event study supports our stated hypothesis. Prior to the introduction, EVI values were flat, fluctuating around zero change relative to the year before STRV availability (indicated as -1). In the years after STRVs were available in the district, EVI increased relative to its value immediately before introduction. The longer the time period since introduction, the greater the increase in EVI. This is consistent with the slow but gradual increase in seed availability presented in Figure <ref>. As the availability and amount of STRVs in the seed system have grown, so too have EVI values. This suggests that STRVs are effective in reducing yield loss due to flooding, as before their introduction flooded rice would have died, resulting in a lower EVI value than after STRV introduction.
Panel A of Table <ref> presents results from the DID specification. We include an indicator for whether the EVI observation is from after 2009, meaning all districts are included in the “treated” group starting in 2010, even if STRV seeds were not available until several years later. As can be seen in Figure <ref>, this is the case for most districts, meaning our DID estimate will also likely underestimate the true effect. Each column in the table represents a regression with one of our four measures of flooding (cumulative, maximum, mean, or median). Our variable of interest is the interaction between the indicator for post-STRV introduction and the indicator for whether the district is considered flood-prone using that flood measure. The DID specification provides weak support for our stated hypotheses. Point estimates across the four flood measures are consistently positive, though confidence intervals extend below zero. The point estimates on flooding are consistently negative and significantly different from zero. The only hypothesis that is not supported is that the coefficient on the post-STRV indicator would be zero. Absent flooding, growing STRVs should have no impact on yields, though in our results this coefficient is positive and significantly different from zero.
Panel B of Table <ref> presents results from the TWFE estimator. This is our preferred specification, as it incorporates both flood data and seed data from each district in each year. As with the DID results, each column in the table represents a regression with one of our four measures of flooding (cumulative, maximum, mean, or median). Results are similar in terms of sign and significance to the DID results. Flooding significantly decreases EVI while the availability of STRV seed has a very small but non-zero effect on EVI. The interaction between flooding and STRVs is also very close to zero, with 95% confidence intervals centered around zero.
Summarizing the results, we find little evidence that the distribution of STRVs has had a significant impact on EVI, our proxy for yields, at a national-level. Only in the event study regression do we find statistically significant evidence for the positive impact of STRVs on EVI values in the years since STRVs became available in that district. We see marginally positive evidence in the DID specification, where flood-prone districts have higher EVI values after 2010 than before, though these estimates are not significant at standard levels of confidence. Finally, in the TWFE estimation, we see no significant evidence that EVI values are larger when it floods in a district with STRVs than EVI values when it floods in the same district before it had access to STRVs. Considering the preponderance of evidence, we conclude that the increased availability of STRVs has not reduced the loss of rice production due to flooding. This result stands in contrast to existing evidence from experimental data.
§.§ Household-Level Outcomes
§.§.§ Descriptive Evidence
We now turn to the household panel data to see if this more traditional data source supports or contradicts our EO-only approach for measuring long-term, large-scale impacts. First, we provide summary statistics for key variables across the three years of the panel (see Table <ref>).
The first three columns of the table report means and standard deviations for each variable in each year. The final three columns report p-values from year-to-year comparisons of means using a t-test that allows for unequal variance. Statistically significant differences exist for almost every variable in almost every year-to-year comparison. Yields in 2014 and 2017 were around 3.7 tons per hectare and increased to 4.1 tons per hectare in 2022. Adoption was flat at eight percent of households for the first two years of the panel (111 households adopted STRVs in 2014 and 112 in 2017). Adoption grew substantially in the intervening five years, with 21% (234 households) adopting STRVs in 2022. Rice area has steadily declined over time, and while the decline is statistically significant, its magnitude is not very large. In 2014, households had 0.72 hectares in rice, which fell by a tenth of a hectare to 0.62 in 2022. In terms of flooding, by most metrics households experienced the most flooding in 2014, followed by 2017, with the least flooding in 2022. However, in terms of the maximum intensity of flooding, 2022 was the worst. These differences highlight the importance of testing different flood metrics, as the duration (cumulative, mean, median), the extent (village), and the intensity (maximum) are all important considerations for the effectiveness of STRVs.
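For readers who wish to see the comparison mechanics, a minimal sketch of the unequal-variance (Welch) t-test is below; the data frame and column names are hypothetical.

```python
from scipy import stats

# Welch's t-test (unequal variances) comparing household yields across years;
# `hh` and its column names are hypothetical placeholders.
y14 = hh.loc[hh["year"] == 2014, "yield_tha"]
y17 = hh.loc[hh["year"] == 2017, "yield_tha"]
t_stat, p_value = stats.ttest_ind(y14, y17, equal_var=False, nan_policy="omit")
```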
Table <ref> presents summary statistics by adoption status and for the data set overall. Households that adopt STRVs report higher yields in the years that they adopt compared to households not growing STRVs in a given year.[Households can and do dis-adopt STRVs, though given how low adoption rates were in 2014 and 2017, dis-adoption is much less common than adoption. Recall also that we classify a household as adopting if it uses STRVs on at least one of its rice plots. Thus, yields for adopters include yields from both STRV and non-STRV seeds. See Online Appendix <ref> for plot-level analysis.] The difference in yields between adopters and non-adopters suggests that the adoption decision is not random and may be correlated with household or farm unobservables, such as farmer ability or the quality and location of plots. That said, we do not see significant differences in rice area between adopters and non-adopters. Surprisingly, we find that non-adopters experience greater duration, extent, and intensity of flooding compared to adopters. One would assume households adopt because their plots are more prone to flooding. However, as adoption is not static or permanent, the difference in flood experience by adoption status may be the result of a non-adopting household experiencing an above-average flood and then adopting STRVs the next year, when flood values are likely to regress to the mean. This potential time-varying adoption decision in response to flooding means that adoption is likely endogenous and also justifies our use of past flood experience as an IV for adoption.
§.§.§ Econometric Evidence
To understand the household-level impact of adoption, and to verify if they support our long-term, large-scale EO results, we implement two econometric approaches. First is a TWFE model that is the household equivalent to the district-level TWFE model. Second, given endogeneity concerns regarding the adoption decision, we estimate a TWFE-IV model in which the adoption decision is instrumented using lagged values of flooding, and their interactions, to calculate the probability that a household experiences flooding in each year of the panel data.
Panel A of Table <ref> presents results from the TWFE estimator, which incorporates both contemporaneous household flood data and an indicator for whether the household was using STRVs on at least one plot. As with the district-level analysis, each column in the table represents a regression with a different flood metric (cumulative, maximum, mean, or median). To these, we also add a measure of flooding in the village, calculated as the village average of each household's maximum flood value, excluding the household in question. The TWFE results confirm the district-level EO analysis. Flooding significantly reduces yields and the use of STRVs during flooding has no significant impact on yields. The one result from the household data that does not match the EO data is the use of STRVs when it does not flood. We found a small positive impact in the EO data, while we find no impact in the household data. This household result conforms to our hypothesis that STRVs should have no effect on yields absent floods.
Our final set of regressions (Panel B of Table <ref>) uses historic flood data as an instrument for the adoption decision. The use of the IV addresses concerns that there may be unobserved transitory and idiosyncratic events that induce households to adopt but are not controlled for by the household and year fixed effects. Comparing the IV results to the TWFE results that do not employ the instruments, we observe more uncertainty in all of our estimates (wider confidence intervals), save for the estimates on the exogenous flood variable. This is consistent with the loss of efficiency from employing instrumental variables. With this loss of efficiency, it is unsurprising that we continue to find STRVs, absent flooding, have no significant impact on yields. For STRVs' impact on yields during flooding, coefficients are sometimes positive and significant, sometimes negative and significant, and sometimes not significant. Given this mixed evidence, we do not place a great degree of confidence in any one regression result, preferring to attribute the mixed results to a loss of efficiency in estimation, which can increase both Type I and Type II errors.
The preponderance of evidence from the household-level analysis supports our conclusions from the district-level EO analysis. STRVs have a small or zero effect on yields when there is no flooding. Flooding, absent adoption, significantly decreases yields. Adopting STRVs and experiencing a flood has little to no effect on yields, relative to when households do not adopt and do not experience a flood. These results are all consistent with our EO analysis but are at odds with the existing body of evidence.
§ FAILURE TO REPLICATE
The previous results lead to one obvious question: why do we fail to replicate the evidence in RCTs <cit.> and field trials <cit.> on the efficacy of STRVs? Two potential answers exist. First: there is no signal of STRV efficacy in the data because STRVs are not effective outside of a controlled experimental setting. We find this answer unlikely given the large number of observational and correlational studies that also find a positive association <cit.> and the anecdotal reports in the press on the effectiveness of STRVs. That said, we cannot completely discount this answer, as there are no other long-term, large-scale causal studies in other countries to compare against. A second, and to our minds more plausible, answer is that measurement error in our data generates sufficient noise to obscure the true signal. We find this answer particularly compelling given that STRVs are a stochastic technology, dependent on floods occurring within a narrow window. As we are attempting to capture a higher order treatment effect, our analysis may be highly sensitive to mismeasurement and/or mis-classification.
We follow two avenues to explore the possibility that the true signal of STRV effectiveness has been lost in the noise inherent to conducting an impact evaluation in a data poor setting. First, we return to the flood data and work to make a number of improvements to capturing the length and depth of floods. Second, we use RCT data from <cit.> and <cit.> to run Monte Carlo simulations on the sensitivity of their results to mismeasurement or mis-classification. In addition to the following exploratory analysis, we also conduct a number of pre-specified robustness checks. Summaries of those results, along with accompanying tables and figures, are in Online Appendix <ref> and <ref>. These robustness checks confirm the lack of a statistically significant impact of STRVs on EVI or yields under a variety of variations to the data and econometric specifications.
§.§ Improved Flood Metrics
Two key challenges exist in measuring the impact of STRVs. The first is a paucity of data, which we address through our approach to reconstructing the past using EO data. The second is the “Goldilocks Problem” or attenuation problem inherent in stress-tolerant seed varieties. For Sub1, any flooding outside the seven to 17 day window drives any impact on yields to zero. Additionally, inundation needs to be sufficiently deep to submerge the rice but not too deep (no greater than 25 cm). So, if our EO approach is to be successful, we need to ensure our flood maps capture the relevant flood event. Our main results are based on our pre-analysis plan, which in turn is based on what we knew was possible for capturing floods in the EO data. Subsequent to our initial analysis, we returned to the data and our fusion model in an attempt to refine the flood metrics. We generate new measures of flooding to more precisely estimate the depth and duration of the flood.
First, we create new pixel-level data where the value of the fractional index varies based on how we define the baseflow of water. To do this, we draw the distribution of flooding across the entire country for the entire year. We then define a pixel as being flooded if its fractional value is in the top five percent of values and zero otherwise. We repeat this process for every five percentile increment up to the 95^th percentile. The upper panel of Figure <ref> shows the results for all of Bangladesh. When the quantile is set at the top five percent of values, almost nowhere in Bangladesh would be considered flooded. When the quantile is set at the top 95 percent of values, almost everywhere in Bangladesh would be considered flooded. As we have no direct measure of the depth of water, varying the level of water that we consider baseflow allows us to indirectly capture flood depth. Lower quantiles require a higher threshold above baseflow (deeper water) to be considered a flood. Higher quantiles require a lower threshold above baseflow (shallower water) to be considered a flood.
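A compact sketch of this thresholding step is below; the array name, shape, and in-memory processing are assumptions for illustration, since the actual pipeline operates on large raster stacks.

```python
import numpy as np

# `frac` is a hypothetical (day, row, col) array of the fractional inundation
# index for one season over Bangladesh.
quantiles = np.arange(5, 100, 5)                   # top 5%, 10%, ..., 95% of values
thresholds = np.percentile(frac, 100 - quantiles)  # one cutoff per quantile

# One binary flood layer per quantile: a pixel-day is "flooded" if its value
# exceeds the cutoff (low quantiles ~ deeper water, high quantiles ~ shallower).
flooded = {q: (frac >= thr).astype(np.uint8)
           for q, thr in zip(quantiles, thresholds)}
```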
Second, we create three new flood metrics based on the new data that attempt to more directly capture floods that occur inside or outside the necessary “sweet spot” for STRVs to have a measurable impact. These metrics are 1) the maximum number of consecutive flood days, 2) the number of days within the seven to 17 day window, and 3) a binary equal to one if the number of consecutive flood days fell within the window. We calculate each of these metrics at each quantile threshold. The lower panel of Figure <ref> graphs the distribution of the number of consecutive days of flooding for each quantile. At lower quantiles, where inundation needs to be extreme to be considered a flood, floods last very few days. At higher quantiles, where inundation needs to be just slightly above baseflow to be considered a flood, flooding lasts almost the entire season.
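One way to compute these spell-based metrics from a daily binary flood series is sketched below; treating “days within the window” as flooded days belonging to spells of qualifying length is our interpretation for illustration, and the function names are hypothetical.

```python
import numpy as np

def flood_spells(is_flooded):
    """Lengths of consecutive flood spells in a daily 0/1 series."""
    spells, run = [], 0
    for day in np.append(np.asarray(is_flooded), 0):  # pad so the final spell closes
        if day:
            run += 1
        elif run:
            spells.append(run)
            run = 0
    return spells

def flood_window_metrics(is_flooded, lo=7, hi=17):
    spells = flood_spells(is_flooded)
    max_consec = max(spells, default=0)
    # "Days within the window" interpreted here as flooded days belonging to
    # spells whose length falls inside the 7-17 day range.
    days_in_window = sum(s for s in spells if lo <= s <= hi)
    in_window = int(lo <= max_consec <= hi)
    return max_consec, days_in_window, in_window
```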
Using the three new flood metrics measured at each of the 18 quantiles, we estimate TWFE models to determine the effect of STRVs on our four EVI measures during periods of flooding. This produces 216 different combinations of the data, which we represent using specification charts in Figure <ref>. Each panel in the figure displays coefficient estimates and 95% confidence intervals on the interaction of STRVs and a new flood metric for a specific EVI measure (cumulative, max, mean, median). Specifications are sorted based on coefficient size from smallest (left) to largest (right). The gray diamonds below the coefficients indicate which of the three measures of flood duration was used in the regression along with which quantile was used to determine depth or intensity of flooding. As one can see, the vast majority of results are statistically insignificant.
Yet there are patterns clearly visible in the results. A first cluster of outcomes, in the upper left of the specification charts, represents negative and (sometimes) significant effects on EVI. These occur when we have very high thresholds for inundation to be considered a flood and when we focus on the seven to 17 day flood window. A high threshold means that inundation must be extreme to be considered a flood and only the most extreme events would fall within the window in which STRVs are effective. That the coefficients are negative when using these data signals that such extreme thresholds are likely too high, and what we are capturing in reality are flood events beyond the capacity of STRVs. A second cluster is indicated by the downward sloping diamonds marking the maximum number of days at decreasing quantile thresholds. Coefficients from regressions using these data are all statistically insignificant, with coefficient sizes centered around zero. This indicates that simply counting the maximum number of consecutive days of flooding, without regard to the window of STRV effectiveness, is not a useful measure of flooding at most quantile thresholds. Finally, there is a cluster of data in the center right of the charts that represents positive and (sometimes) significant effects on EVI. These results mostly occur when we use a flood indicator that accounts for STRVs' window of effectiveness and when we set a threshold for flooding in the 25^th to 50^th percentile range.
There is a pessimistic and an optimistic way to interpret the results in Figure <ref>. A pessimistic interpretation is that, even when we increase the precision of our flood maps, STRVs have no significant impact on EVI. This could be because the technology is ineffective or because our EO methods remain unable to reconstruct the past with sufficient precision to pick up the STRV signal among the noise. This interpretation is pessimistic because it assumes either the method or the technology does not work. The pessimistic interpretation recognizes that there are some significant results for very specific flood measures, but discounts these as outcomes occurring at random because we are running so many regressions. A correction for multiple hypothesis testing results in a loss of significance for all coefficients. The optimistic interpretation is that, given sufficient care in defining floods in the relevant way, STRVs have a significant impact on EVI. The problem is not the method or the technology but refining the method to a sufficient degree of precision so as to capture a treatment effect of a technology that only occurs under very specific circumstances. The optimistic interpretation discounts the plethora of null results as the outcome of mis-classification of what constitutes a flood and mismeasurement of the relevant flood window. The small number of positive and significant coefficients is not due to running many regressions but to defining flooding in the appropriate way. In the next section we turn to Monte Carlo simulations to try to assess the believability of the optimistic interpretation by determining just how sensitive results are to mis-classification and mismeasurement.
§.§ Monte Carlo Simulations
Evidence from experimental trials, both agronomic and economic, has reliably shown positive effects of STRVs on yields during floods. One benefit of experimental data is that the experimenter typically has substantial control over how the experiment is implemented as well as how data are measured and collected. In trying to understand why we fail to replicate the existing experimental evidence, and to determine whether the pessimistic or optimistic interpretation of our specification charts is more appropriate, we conduct a number of Monte Carlo simulations using the experimental data. Specifically, we use the data that form the basis of <cit.> and <cit.>. We then, in separate Monte Carlo simulations, add noise to the flood data, the yield data, and the adoption indicator in order to simulate mismeasurement or mis-classification.
We begin by conducting a pure replication of the results in <cit.>, which reports findings from the first year of an RCT in Odisha, India. They find consistent, positive, and significant impacts on yields from adoption of Swarna-Sub1 during flooding, relative to both classic Swarna and other traditional varieties. This evidence is consistent with the field trial data presented in Figure <ref>. We are also able to replicate these results using the complete two years of RCT data from <cit.>.[Though the two papers rely on the same source of data, we estimate the regressions run in <cit.>, which closely match tests for efficacy in the agronomic field trial literature, and not the specifications used in <cit.>. We are not replicating any specific result in <cit.>. Rather, we are using the full two years of data to reproduce results that match <cit.> in magnitude, sign, and significance.]
The simulations for mismeasurement in the yield and flood data are conducted in the same way but are completed separately. Here, we describe our method for injecting noise into the yield data; the method applies exactly to the flood data as well. We start by calculating the mean (μ = 2,214) and standard deviation (σ = 1,587) of yield in the RCT data. We then draw, with replacement, from a normal distribution with mean zero and standard deviation equal to one percent of the standard deviation in the data (0.01σ = 15.87). For each observation of yield, we add these randomly drawn values. We run the regression in <cit.> to estimate the impact of STRVs during flooding on the new yield data, which has the same mean but a larger variance. We repeat this 10,000 times. We then increase the noise by one percent, so that the new distribution that we draw from has mean zero and a standard deviation equal to two percent of the original standard deviation (0.02σ). We again rerun the regression for 10,000 draws from this new distribution. We repeat this process at one percent intervals from 0.00σ up to 0.50σ.
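The simulation loop can be summarized as follows; `rct`, the column name `yield_kg`, and `estimate_interaction_pvalue` are hypothetical placeholders for the replication data and for re-running the published STRV-by-flood regression.

```python
import numpy as np

rng = np.random.default_rng(seed=2023)
sigma = rct["yield_kg"].std()                 # roughly 1,587 in the RCT data
noise_grid = np.round(np.arange(0.00, 0.51, 0.01), 2)

share_significant = {}
for frac in noise_grid:
    pvals = []
    for _ in range(10_000):
        noisy = rct.copy()
        noisy["yield_kg"] += rng.normal(0.0, frac * sigma, size=len(noisy))
        # Placeholder for re-running the published STRV-x-flood regression
        # on the perturbed data and extracting the interaction's p-value.
        pvals.append(estimate_interaction_pvalue(noisy))
    share_significant[frac] = np.mean(np.array(pvals) < 0.05)
```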
Results for both yield and flood are presented in the top two panels of Figure <ref>. In this graph, we plot the distribution of p-values on the interaction of STRV adoption and flood for each level of added noise (from 0.00σ to 0.20σ). The vertical dashed red line marks p < 0.05. With zero added noise, all p-values from the 10,000 simulations in both panels are statistically significant. This is because we are simply replicating <cit.> and <cit.> 10,000 times. When we add 0.01σ of noise to the true yield data, 99% of p-values are still significant. But adding just two percent of the true standard deviation as noise to the yield data results in only 21% of regressions producing significant results. Adding any more than nine percent of the true standard deviation to the true yield data, so that the yield data are distributed with mean μ and standard deviation σ + 0.09σ, results in only five to 10 percent of p-values being significant: the same share one would expect to be significant due to random chance with a 90% or 95% critical value. Results are similar for flooding, but the loss of significance happens even more quickly, with only six percent of p-values significant after adding just 0.02σ of noise to the flood data.
We follow a similar process for simulating mis-classification in the adoption data. However, as adoption is a binary indicator, it is not useful to add noise based on the standard deviation. Rather, we calculate the percentage of adopters (42%) in the data and then re-assign adopters and non-adopters at random. So, to mis-classify one percent of the data, we draw four households at random, determine if they were an adopter or non-adopter, and then switch their adoption status. We repeat this 10,000 times. As before, we start the process at zero, so that we replicate the original results, and then move upwards at one percent intervals until we are re-assigning half of all observations.
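A sketch of the re-assignment step is below, with the adoption indicator assumed to be stored in a hypothetical 0/1 column named `strv`.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def flip_adoption(rct, share):
    """Randomly switch the 0/1 adoption flag for `share` of observations."""
    flipped = rct.copy()
    n_flip = int(round(share * len(flipped)))
    idx = rng.choice(flipped.index, size=n_flip, replace=False)
    flipped.loc[idx, "strv"] = 1 - flipped.loc[idx, "strv"]
    return flipped
```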
Results from the mis-classification simulations are graphed in the bottom panel of Figure <ref>. Clearly results are much more robust to mis-classification of adoption status. Mis-classifying up to five percent of observations has no meaningful impact on the outcomes. Mis-classifying 14% of outcomes results in about half of all regressions producing significant results. Only if one mis-classified 37% of observations does the share of significant p-values fall into the five to ten percent threshold, where their significance could be wholly ascribed to random chance.
The take-away from this exercise is that even results on the efficacy of STRVs using experimental data can be very sensitive to mismeasurement (though less so to mis-classification). Perturbing the true yield data by adding to each value a random number drawn from a distribution 𝒩(0, 31.75), or two percent of the standard deviation, produces non-significant results in nearly 80% of regressions. Significance disappears with even less added noise in the flood data. In terms of what this means for our failure to replicate, the Monte Carlo simulations add support for the optimistic interpretation. STRVs are effective during floods and we can measure that effect using a geospatial impact evaluation that relies on EO data. But the method requires extreme care and precision in accurately capturing the variables of interest. The acute need for precision is likely the result of trying to measure higher order treatment effects from a stochastic technology.
§ CONCLUSION
There are numerous research questions that economists would like to answer but cannot because of a lack of data. Recent years have seen a proliferation in the availability of data, in particular remotely sensed earth observation (EO) data. This has allowed economists and other quantitative researchers to answer questions that were previously viewed as unanswerable. However, there is a recency bias in these new data, meaning the past remains a data poor place.
Our study details a new method for overcoming this recency bias in EO data and with this method we conduct an impact evaluation in a data poor environment. To demonstrate this methodology, we use the case of STRV dissemination and adoption in Bangladesh. We combine data from a variety of sources and use innovative approaches to generating ground truth data. Using recent high resolution EO data and deep learning algorithms, we are able to infer where flooding occurred in the past and generate EO data sets at a higher accuracy than previously existed. We combine these new flood and rice area maps, which cover the entire country of Bangladesh for 20 years, with administrative data on seed dissemination efforts in order to conduct a large-scale, long-term evaluation of the effectiveness of STRVs.
Surprisingly, we do not find evidence that STRVs significantly increase EVI values. We confirm this failure to replicate long-standing results from experimental research using a three-year panel of household survey data and more traditional methods for economic impact evaluation. We hypothesize that our failure to replicate is due to the stochastic nature of STRVs, which require flooding of a very specific duration and depth if the seeds are to produce treatment effects greater than zero. We examine this hypothesis by exploring whether improvements to our flood metrics, in terms of duration and intensity, can recover the signal. We find that we can produce positive and significant results, but only for a narrow and specific set of measures. We buttress this evidence by conducting Monte Carlo simulations using existing experimental data to show just how sensitive results are to mismeasurement.
This paper demonstrates the possibilities and challenges of conducting impact evaluations in data poor settings. It also reveals the added dimensionality of the challenge when trying to capture higher order treatment effects associated with a stochastic technology. As the recent economic and geospatial literature shows, and this paper reaffirms, the mismeasurement problem in data can be acute, often biasing or obscuring the true signal. However, the methods developed and deployed in this paper open the possibility to answer previously unanswerable economic questions, allowing us to reduce the number of places that remain data poor, including the past.
§ ONLINE-ONLY APPENDIX TO “IMPACT EVALUATIONS IN DATA POOR SETTINGS: THE CASE OF STRESS TOLERANT RICE VARIETIES IN BANGLADESH”
§ SUB1 VARIETAL DEVELOPMENT AND DISSEMINATION
In the paper we briefly summarize the development, agronomic characteristics, and dissemination of STRVs. In this appendix we expand on those details and provide additional evidence regarding when and how STRVs generate non-zero treatment effects relative to non-STRVs.
The process of developing high yielding STRVs began when researchers identified a rice variety in India named Dhalputtia, which, despite its poor grain quality and yield, possesses an unusual ability to survive complete submergence for over 14 days. In 1980, scientists identified the Sub1 locus and its associated gene, which subsequently led to the isolation of the Sub1 gene <cit.>. This process of genetic mapping began with two parent varieties: a japonica rice from California and a submergence-tolerant line derived from the donor variety Dhalputtia.
All STRVs exhibit a similar cellular and molecular mechanism. The Sub1 sequence closely resembles a protein that functions as a transcription factor[Transcription factors are proteins that regulate genetic expression by binding to specific DNA sequences <cit.>]. This transcription factor facilitates the accumulation of the ethylene hormone in response to submergence stress. The accumulation of ethylene also triggers the production of gibberellic acid. The ethylene hormone is critical for the plant's vegetative growth, while gibberellic acid promotes the elongation of plant shoots. During complete submergence, however, shoot elongation is reduced. Additionally, other plant metabolic processes, such as carbohydrate consumption and chlorophyll breakdown, are slowed down, activating alternative energy pathways. As a result, the plant enters a “hold your breath” state, conserving energy until the flooding subsides <cit.>. In contrast, normal rice varieties attempt to elongate their stems and leaves to escape deep submergence. This strategy often results in excessive energy expenditure, leaving the plant unable to recover once the submergence is over. So, the flood-escaping strategy for conventional high-yielding rice varieties differs from that of STRVs, making the latter more adaptable and economically viable <cit.>.
Having isolated Sub1 and understood how it conveys submergence tolerance, researchers at IRRI were then able to use a combination of traditional cross-breeding methods and marker assisted backcross breeding to create Swarna-Sub1, a hybrid with the popular high yielding Swarna rice variety <cit.>. <cit.> have estimated that there is no yield penalty for STRVs under normal conditions but a yield advantage of around 1 to 3 t/ha during flooding. Additionally, <cit.> highlight that the Sub1 cultivar outperforms traditional varieties after complete submergence at the reproductive stage in terms of carbohydrate quantities and total dehydrogenase.
Numerous studies have been conducted to assess the yield of STRVs at various stages of their growth cycle. Researchers have experimented with different levels of submergence, including variations in duration and water depth, yielding distinctive results. For instance, <cit.> report that after five days of complete submergence, Swarna-Sub1 rice achieved a yield of 3.98 t/ha, compared to 2.68 t/ha for the traditional Swarna. When submergence extended to 7 to 9 days with water depths of 6 to 9 feet, traditional rice varieties failed to produce any yield. Conversely, the Sub1 variety experienced only a 10-30% reduction in yield.
In a separate study, <cit.> found that brief, late flooding during the panicle initiation stage could diminish the yield of all traditional rice varieties to 1.7 t/ha, whereas the Sub1 variety yielded 4.75 t/ha. However, if flooding persisted for more than 10 days at this stage, the average yield of the Sub1 variety dropped to 3 t/ha. This observation suggests that while mortality during the reproductive stage is not a significant concern, maintaining yield remains a critical issue <cit.>.
Given the success of the Sub1 varieties, the Bill and Melinda Gates Foundation (BMGF) supported production and dissemination of the seeds under the IRRI-Africa Rice collaboration Stress-Tolerant Rice for Africa and South Asia (STRASA) program <cit.>. As part of the dissemination effort, BMGF funded, through a variety of initiatives, both an RCT on STRV impact in Odisha (2011-2012) <cit.> and the Rice Monitoring Survey (RMS) in Bangladesh, India and Nepal (2014-2017) <cit.>. The RCT produced evidence of the impact of STRVs in the 64 treated villages for farmers who were given seed. STRVs reduced downside risk and resulted in farmers investing more in production through increased cultivation area, fertilizer use, and credit utilization <cit.>. The RMS produced evidence of the extent of adoption in South Asia. Farmers who experienced flooding in the previous year were more likely to adopt and the adoption impact was larger for neighbors of early adopters <cit.>. There are also numerous studies coming out of the STRASA program in Asia and Africa that examine the correlation between adoption of STRVs and various outcomes, such as yield, profit, and rice consumption. A non-exhaustive list includes <cit.>.
§ WHY NOT OTHER DATA
In the paper we make the claim that no socioeconomic data exist to allow for a traditional impact evaluation of the long-term, large-scale effects of STRV adoption in Bangladesh. In this appendix we provide evidence to support that claim by discussing the existing nationally or sub-nationally representative surveys, along with their strengths and weaknesses for answering our research question.
We searched through government, non-government, and university databases to collect a set of possible socioeconomic survey data that would allow us to conduct a strongly identified impact evaluation of STRVs. Recall, our criteria encompass the following prerequisites:
* Needs to be panel data
* Needs to have observations before and after 2010
* Needs to be representative at the national level or at least of rice cultivation
* Needs to have data on rice varieties grown and yields
Many candidate data sets satisfied none or only one of the criteria. Frequently a data set would have multiple observations over time but not contain information on rice variety, the critical data need. Only a few satisfied three of the criteria and none satisfied all four. The three best data sources, other than the RMS data we used in this study, are:
* Village Dynamics in South Asia (VDSA): This dataset is an outcome of a research initiative by the International Crop Research Institute for the Semi-Arid Tropics (ICRISAT).[https://vdsa.icrisat.org/vdsa-database.aspx] Spanning 42 villages in Bangladesh, the dataset comprises survey data from 1,831 households. The survey is designed to be representative of agricultural households in Bangladesh, is a panel, and has data on varieties and yields. However, there is one critical limitation:
* The temporal scope of the data panel is limited, spanning from 2009 to 2014, in contrast to the RMS dataset, which extends from 2014 to 2022. Given that seed availability remained very low as late as 2015, the VDSA does not extend far enough past 2010 to capture adoption. In fact, adoption of STRVs in the 2014 round of the VDSA is 0%, making the VDSA unusable for studying STRV adoption.
* Bangladesh Integrated Household Survey (BIHS): This survey was implemented by the International Food Policy Research Institute (IFPRI).[https://dataverse.harvard.edu/dataverse/IFPRI/?q=BIHS] It is nationally representative, a panel, and contains a host of information on plot-level agricultural production and practices, dietary intake of individual household members, anthropometric measurements (height and weight) of all household members, and data to measure women’s empowerment in agriculture index (WEAI). However, it faces the same limitation as the RMS data set:
* The panel is constrained to three rounds of data (2011/12, 2015, and 2018/19), all post-STRV introduction. The absence of pre-introduction data limits our capacity to evaluate the impact of STRV adoption in the same ways as the RMS.
* The Bangladesh Rice Research Institute-Rice Database (BRRI-RD): As a government-owned rice database it provides comprehensive data across all years and districts regarding total production.[https://bit.ly/47McOjI] It is nationally representative and contains production information. Nevertheless, this dataset is encumbered by certain limitations:
* While a panel, it is not a micro-level panel (plot, household, farm) but rather an aggregate district- or division-level panel. It thus only has total production and acreage data by division and district of Bangladesh, with no variety-specific production, which restricts the use of this data set.
§ REMOTE SENSING DETAILS
In the paper we provide a non-technical, intuitive summary of the methods used to generate the EO data. In this appendix we provide the technical details sufficient to implement our procedures in alternative contexts or point to the existing publications, data, and code to replicate the deep learning and fusion model work.
§.§ Rice Area Map
In order to train the rice area mapping algorithm, ground truth data first needed to be generated. Three districts were selected as the base for the study, from which to extrapolate to the country: Barisal, Kurigram, and Rajshahi. The three districts come from the north, central, and south of the country and experience substantial flooding.
Google Earth imagery was inspected to determine the land cover of each cell. Pixels were categorized as rice if more than 70% of the cell was rice and categorized as non-rice otherwise. In total, 150 points were labeled in each district (75 rice, 75 non-rice). This generated 450 rice-no-rice (RNR) points for each year (2002, 2004, 2006, 2009, 2015, 2016, 2018-2020), or 4,050 RNR points as a training data set (see Figure <ref>).
We developed an RF model using the training data generated above and a leave-one-out cross-validation scheme. The model inputs are constructed as follows: a PCA is applied to the median band values, calculated from the time series covering the Aman rice growth period, and the first two principal components are used as inputs to the model. Additionally, we use the median, 5^th, and 95^th quantile EVI values over the growth period for each pixel, as well as elevation and slope as static features.
In summary, we apply the following process in Google Earth Engine to prepare the data before applying the RF algorithm:
* Extract the MOD09Q1.061 Terra Surface Reflectance 8-Day Global 250m images (bands 1 and 2) over the Aman rice growth period.
* Calculate the median value for each pixel.
* Compute the PCA over the bands.
* Extract the time series of EVI from MOD13Q1.061 Terra Vegetation Indices 16-Day Global 250m, and compute the median, 5^th and 95^th quantiles over the period for each pixel.
* Extract the elevation from FABDEM <cit.>, and compute the slope for each pixel.
* Stack all layers together to create the input to the RF.
The RF model consists of 1,000 trees, with each tree having a minimum of 5 leaves. We used the Statistical Machine Intelligence and Learning Engine (smile) RF implementation in Google Earth Engine. The random forest is trained with a leave-one-out cross-validation scheme, with an additional separate test set. The leave-one-out scheme consists of rotating the districts used for training, leaving one of the districts out each time. The data used here consist of all years except 2020, which is removed and kept as a test set. The leave-one-out validation scheme ensures the model is generalizable in space, meaning that it performs similarly outside of the area used for training. The held-out test set of 2020 allows us to validate the model in time, meaning that it will perform similarly outside of the observed years. The accuracies are reported in Table <ref>. A minimal sketch of the data preparation and training pipeline is given below.
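For readers who want to adapt the procedure, the following Earth Engine (Python API) sketch illustrates the feature-stack construction and RF training. It is a minimal sketch, not our production code: the Aman season dates, the training-table asset, and the FABDEM asset path are placeholder assumptions, the PCA step is omitted (the raw band medians stand in for the two principal components), the leave-one-out loop over districts is not shown, and "a minimum of 5 leaves" is interpreted as a minimum leaf population of 5.

```python
# Hedged sketch of the feature stack and RF training; dates and asset paths are assumptions.
import ee
ee.Initialize()

AMAN_START, AMAN_END = "2019-07-01", "2019-12-15"   # illustrative growth period
rnr_points = ee.FeatureCollection("users/example/rnr_points")  # hypothetical table with a 0/1 'rice' property

# Steps 1-3: median of MOD09Q1 bands 1 and 2 over the growth period
# (the PCA of these medians is omitted; the medians stand in for the two PCs).
refl = (ee.ImageCollection("MODIS/061/MOD09Q1")
        .filterDate(AMAN_START, AMAN_END)
        .select(["sur_refl_b01", "sur_refl_b02"])
        .median())

# Step 4: median, 5th, and 95th percentile EVI from MOD13Q1 over the same period.
evi = (ee.ImageCollection("MODIS/061/MOD13Q1")
       .filterDate(AMAN_START, AMAN_END)
       .select("EVI")
       .reduce(ee.Reducer.median()
               .combine(ee.Reducer.percentile([5, 95]), sharedInputs=True)))

# Step 5: elevation and slope from FABDEM (community asset path is an assumption).
elevation = ee.ImageCollection("projects/sat-io/open-datasets/FABDEM").mosaic()
slope = ee.Terrain.slope(elevation.setDefaultProjection("EPSG:4326", None, 30))

# Step 6: stack the layers and sample the ground-truth points at 250 m.
stack = refl.addBands(evi).addBands(elevation.rename("elevation")).addBands(slope)
training = stack.sampleRegions(collection=rnr_points, properties=["rice"], scale=250)

# smile random forest with 1,000 trees and a minimum leaf population of 5.
clf = (ee.Classifier.smileRandomForest(numberOfTrees=1000, minLeafPopulation=5)
       .train(features=training, classProperty="rice",
              inputProperties=stack.bandNames()))
rice_map = stack.classify(clf)
```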
This way of validating and training the model naturally creates as many models as there are leave-one-out sets. To run inference on the whole country, we implemented a majority voting scheme, i.e., each model infers the presence/absence of rice for the whole country, and the category with the most votes receives the final classification. With three regions we have three models, so at least two of the models need to classify a pixel as rice for the pixel to be set as rice in the final map. A sketch of this vote is given below.
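Continuing the sketch above, the majority vote itself can be expressed in a few lines; the three leave-one-out classifier names are hypothetical.

```python
# Hedged continuation of the sketch above: majority vote across the three
# leave-one-out models (classifier names are hypothetical).
loo_classifiers = [clf_no_barisal, clf_no_kurigram, clf_no_rajshahi]
votes = ee.ImageCollection([stack.classify(c) for c in loo_classifiers]).sum()
final_rice_map = votes.gte(2)   # rice only if at least two of the three models agree
```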
After reviewing the results and comparing them with a study performed by IRRI in 2012 <cit.> on Aman rice presence in 2010, we realized that submergence rice, i.e., rice irrigated by submergence, was not being detected by our RF. According to <cit.>, submergence rice is only present in the northeast of Bangladesh, and because our ground sampling did not cover any of the northeast, the algorithm could not pick up on it. We thus decided to add the submergence rice area from <cit.> as additional training data and extended the time frame used in the training data at steps 1 and 4 to capture post-flooding periods. This allowed us to successfully capture the submergence rice in the northeast.
After generating the rice maps for all of Bangladesh with MODIS, we had to investigate whether it was also necessary to generate them with Landsat. The process described at the district level cannot be reproduced: the satellite images were hand-picked for each of the three districts to be as cloud free as possible and representative of the growth period, and this process cannot be automated at the country scale. Additionally, because of the technical challenges of working with Landsat 7 (gap filling due to the SLC failure), it made sense to assess the necessity of generating such data before producing it.
In this study, the rice field data are primarily used to mask the inundation and EVI data, which are then aggregated at the district level. Additionally, the data being masked are given at 500 meters (inundation) and 250 meters (EVI). In order to understand the necessity of the higher resolution map, we downloaded a high resolution map based on Sentinel-1 (10 meters), generated by <cit.>, covering all of Bangladesh for 2017. We then masked the inundation data with this high resolution mask and with the 250-meter mask generated through the RF algorithm, and compared the two at the district level.
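A minimal sketch of this comparison is shown below; the file names are hypothetical and both rice masks are assumed to have already been resampled to the 500 m inundation grid.

```python
# Hedged sketch of the district-level comparison between masking with the 10 m
# Sentinel-1 rice map and the 250 m RF rice map (file names are hypothetical).
import numpy as np
import rasterio
from rasterstats import zonal_stats

NODATA = -9999.0

def district_mean_inundation(inundation_tif, rice_mask_tif, districts_shp):
    """Mask the fractional-inundation raster with a rice mask and average per district."""
    with rasterio.open(inundation_tif) as inun, rasterio.open(rice_mask_tif) as mask:
        flood = inun.read(1).astype(float)
        rice = mask.read(1)                 # assumed to share the inundation grid
        flood[rice == 0] = NODATA           # keep flooding over rice pixels only
        stats = zonal_stats(districts_shp, flood, affine=inun.transform,
                            stats=["mean"], nodata=NODATA)
    return np.array([s["mean"] for s in stats])

hi_res = district_mean_inundation("inundation_2017.tif", "rice_sentinel1_10m.tif", "districts.shp")
lo_res = district_mean_inundation("inundation_2017.tif", "rice_modis_250m.tif", "districts.shp")
print(np.corrcoef(hi_res, lo_res)[0, 1])    # agreement between the two masking choices
```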
The figure shows that the two datasets do not significantly differ from each other. Given the complexity of generating high resolution masks with Landsat, the potential for error would be relatively high, and the effort would not significantly improve the accuracy of the data at the district level.
§.§ Flood Map
As with the rice maps, building reliable flood maps that look so far back into the past raises data availability challenges. While satellite data allows for the analysis of geographical areas, particularly in remote or inaccessible regions, it suffers from a recency bias, at least in terms of high spatial and temporal resolution products that can see beyond the visible spectrum. For flood mapping, MODIS is often used because of its long history, its availability on a daily basis, and its relatively cloud-free 8-day composite images. Products like Landsat, which come at a higher spatial resolution and have a longer time series, do not provide such a consistent image series. In particular, Landsat has a low revisit frequency, harmonization issues between Landsat versions, and instrument failure in Landsat 7.
However, MODIS is not without its own limitations. It is an optical sensor that acquires two images daily, but the content of these images is often obscured by clouds. This means that during the time of year when floods are most frequent (the rainy season), MODIS is at its least useful because cloud cover often makes it difficult to see flooding. Furthermore, MODIS's coarse spatial resolution (250m, 500m, and 1km) compared to Landsat makes it unsuitable for accurately extracting flood information in urban regions or other complex and heterogeneous landscapes. For us, studying the flat rice growing plains in rural Bangladesh, this last issue is not a major concern.
In recent years, radar sensors have been able to overcome the limitations of optical sensors by penetrating cloud cover to consistently detect water signals. Sentinel-1 is an ideal satellite for flood detection as it captures images at a high spatial resolution (10m) and is equipped with a radar sensor, so that even in cloudy conditions it can capture accurate and consistent flood events. However, Sentinel-1's weakness, relative to MODIS and Landsat, is that it has only been available since 2017 and is thus unsuitable for historical flood mapping.
Given the strengths and weaknesses of each product, we first ran comparisons of flood detection between MODIS and Landsat. While Landsat has greater spatial resolution, which is extremely beneficial in pinpointing floods on small rice plots, its coarse temporal resolution was a critical detriment. Landsat often completely missed flood events detected by MODIS because of Landsat's infrequent images and the extent of cloud cover. Because of this, we decided to use MODIS as our base product for both flood detection and rice mapping.
Having settled on MODIS, we then developed a method to fuse the shorter time series of Sentinel-1 with the longer MODIS time series as a solution to the disadvantages of either sensor. This fusion model allows us to leverage the advantages of high resolution modern flood detection and historical data availability. The data used for building our flood maps are the product of a deep learning model trained to predict Sentinel-1 derived fractional inundated area from MODIS satellite data, which can then be applied to the historical MODIS record. We utilize a Convolutional Neural - Long Short-Term Memory Network (CNN-LSTM) to take advantage of both the spatial and temporal feature representation of inundation and compare it to a traditional Convolutional Neural Network (CNN). We apply the CNN-LSTM model from 2002 to 2020 over all of Bangladesh at 500m resolution. From this inundation dataset, we extract one map per year with various statistics over the monsoon season: mean, median, max, min, quantiles, day of max, and cumulative flooding.
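A hedged Keras sketch of this kind of CNN-LSTM is shown below. The patch size, sequence length, and number of MODIS input bands are illustrative assumptions rather than the published configuration, which, together with the data and code, is documented in the reference cited in the next paragraph.

```python
# Minimal sketch of a CNN-LSTM that regresses Sentinel-1-derived fractional
# inundated area from a sequence of MODIS patches (dimensions are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

TIME_STEPS, PATCH, BANDS = 8, 16, 2   # hypothetical: 8-step MODIS sequence of 16x16 patches

def build_cnn_lstm():
    inp = layers.Input(shape=(TIME_STEPS, PATCH, PATCH, BANDS))
    # Spatial features per time step via a shared (time-distributed) CNN encoder
    x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(inp)
    x = layers.TimeDistributed(layers.MaxPooling2D())(x)
    x = layers.TimeDistributed(layers.Conv2D(64, 3, padding="same", activation="relu"))(x)
    x = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)
    # Temporal dynamics of inundation via an LSTM over the per-step feature vectors
    x = layers.LSTM(64)(x)
    # Fractional inundated area lies in [0, 1], hence the sigmoid output
    out = layers.Dense(1, activation="sigmoid")(x)
    return models.Model(inp, out)

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="mse")
# model.fit(modis_patch_sequences, s1_fractional_inundation, epochs=..., validation_split=0.2)
```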
Figure <ref> provides tests for the accuracy of the fusion model by comparing results with the high resolution Sentinel-1 data. Our model produces results that match the high resolution data with a high degree of accuracy, ranging from 0.79 to 0.92. Additional details and validation tests, along with reproducible data and code, have been published by the research team in <cit.>.
§ ALTERNATIVE EO RESULTS
In the analysis relying on EO data, we are concerned about two things. First, that our choices regarding key variables (EVI, flooding) are driving the results and that, had we made different choices, our results would not be significant. Second, that our choice regarding the unit of analysis (districts) is driving the results and that, had we conducted the analysis at a more dis-aggregated level, our results would not be significant. To address the first concern, we re-run all of the district-level analysis using three measures of EVI (cumulative, maximum, mean) that differ from the one presented in our main results (median EVI).
As one can see in this appendix, results are not substantially different when we use one of these alternative EVI measures. We also re-run the DID analysis using various cut-offs for flood-prone. The variable of interest in the DID specification was not significant in our main results, and we see this is true for most other definitions of flood-prone. There is, in fact, one definition of flood-prone that produces a significant result, but this is one of 42 possible definitions that we test.
To address the second concern, we re-run all of the analysis using the upazila (county) as the unit of analysis.[There are 64 districts in Bangladesh and these are divided into 553 upazilas. We combine many extremely small upazilas in the city and metro area of Dhaka, where rice is not grown, to get a final data set of 503 upazilas.] The caveat with using upazilas is that the administrative seed data are at the district level, so all upazilas in a district share the same year of introduction and the same availability of STRVs. The event study results are not robust to the choice of the unit of analysis. This may be because, while EVI varies from upazila to upazila, the event year is the same for every upazila in the district and we do not consider the degree of flooding within an upazila. When we estimate the DID and TWFE models, the upazila-level results reinforce the district-level results.
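For concreteness, the following is a minimal sketch of the TWFE/DID specification underlying these robustness checks, using the linearmodels package. All column and file names are hypothetical, and swapping the entity index from district to upazila reuses the same specification at the finer unit of analysis.

```python
# Hedged sketch of the district-level TWFE/DID specification (names are hypothetical).
import pandas as pd
from linearmodels.panel import PanelOLS

# District-year panel with median EVI, an STRV-availability indicator, and flood exposure
df = pd.read_csv("district_panel.csv").set_index(["district", "year"])
df["treat"] = df["strv_available"] * df["flood_prone"]   # DID interaction of interest

mod = PanelOLS.from_formula(
    "evi_median ~ treat + EntityEffects + TimeEffects", data=df
)
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res.summary)
```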
§ PLOT-LEVEL RESULTS
In the analysis relying on household data, we are concerned that our choice of the household, and not the plot, as the unit of analysis may be driving our results. If anything, conducting the analysis at the household-level is likely to create a downward bias, as we consider a household to have adopted if they grow STRVs on at least one plot, meaning many non-STRV plots cultivated by a household are considered “treated” even though they are not. Because of this, our preference would be to conduct a plot-level analysis. However, we cannot track plots over time in the household panel, meaning we must treat the plot data as a repeated cross section.
We start by presenting summary statistics at the plot level which mirror the household level summary tables in the paper. Using plots as the unit of analysis, and controlling for household and year fixed effects, results are generally robust. In five of the eight regressions across both TWFE and TWFE-IV estimators, we reject that yields are equal for plots that experienced flooding and were planted with STRVs and plots that experienced flooding but were not planted with STRVs. These results should be interpreted with caution, as we cannot compare a plot with itself, meaning plot-level unobservables are left uncontrolled for.